
Building a Highly Available Container Cloud (k8s Cluster + Ceph Cluster)

Task Objectives

  1. Complete the installation and deployment of a highly available k8s cluster

Task Platform

  1. Physical servers --
  2. Operating system: openEuler 22.03 LTS SP2

Deployment Guide

Cluster topology diagram


Part 1: Deploying the Ceph Cluster

Task 1: Preparation

  1. Rename hostnames
# Change the hostname of 10.10.1.80 to future-k8s-master0
hostnamectl set-hostname future-k8s-master0 && bash
# Change the hostname of 10.10.1.81 to future-k8s-master1
hostnamectl set-hostname future-k8s-master1 && bash
# Change the hostname of 10.10.1.82 to future-k8s-master2
hostnamectl set-hostname future-k8s-master2 && bash
# Change the hostname of 10.10.1.16 to k8s-ceph-node0
hostnamectl set-hostname k8s-ceph-node0 && bash
# Change the hostname of 10.10.1.17 to k8s-ceph-node1
hostnamectl set-hostname k8s-ceph-node1 && bash
# Change the hostname of 10.10.1.18 to k8s-ceph-node2
hostnamectl set-hostname k8s-ceph-node2 && bash
# Change the hostname of 10.10.1.15 to k8s-ceph-node3
hostnamectl set-hostname k8s-ceph-node3 && bash
  2. Pre-installation configuration changes
# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
firewall-cmd --state

# Disable SELinux permanently
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
cat /etc/selinux/config

# Disable swap permanently
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
cat /etc/fstab

# Add hosts entries
cat >> /etc/hosts << EOF
10.10.1.80 future-k8s-master0
10.10.1.81 future-k8s-master1
10.10.1.82 future-k8s-master2
10.10.1.16 k8s-ceph-node0
10.10.1.17 k8s-ceph-node1
10.10.1.18 k8s-ceph-node2
10.10.1.15 k8s-ceph-node3
10.10.1.83 future-k8s-vip
EOF
# Verify
cat /etc/hosts


# Add the bridge filtering and kernel forwarding configuration file
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Verify
cat /etc/sysctl.d/k8s.conf
# Load the br_netfilter module
modprobe br_netfilter
# Check whether the module is loaded
lsmod | grep br_netfilter
# Apply the bridge filtering and kernel forwarding settings
sysctl -p /etc/sysctl.d/k8s.conf

# Synchronize time
yum install ntp -y
systemctl start ntpd
systemctl enable ntpd
yum install chrony -y
systemctl start chronyd
systemctl enable chronyd
# Modify the configuration, appending the following
echo "
server 10.10.3.70 iburst
allow 10.10.3.0/24
allow 10.10.1.0/24
" >> /etc/chrony.conf
timedatectl set-ntp true
systemctl restart chronyd
timedatectl status
date
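
To confirm that chronyd is actually syncing against 10.10.3.70, chronyc (installed with the chrony package) reports source and tracking state:

# List time sources and their reachability
chronyc sources -v
# Show current offset, stratum, and sync status
chronyc tracking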
  3. Install ipset and ipvsadm
# Install ipset and ipvsadm
yum -y install ipset ipvsadm
# Configure how the ipvsadm modules are loaded
# Add the modules that need to be loaded
echo '#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
' > /etc/sysconfig/modules/ipvs.modules
# Verify
cat /etc/sysconfig/modules/ipvs.modules
# Set permissions, run, and check that the modules are loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack

# Reboot
reboot

After the preparation is complete, all nodes must be rebooted.
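
After the nodes come back up, it is worth verifying that the changes persisted; note that br_netfilter and the ipvs modules may need to be reloaded (or added under /etc/modules-load.d/) if they do not survive a reboot:

getenforce                                 # expect: Permissive
free -h | grep -i swap                     # expect: 0B swap in use
sysctl net.ipv4.ip_forward                 # expect: net.ipv4.ip_forward = 1
lsmod | grep -e br_netfilter -e ip_vs      # confirm the modules are loaded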

Task 2: Set Up the Python Environment

Download Python 2

  1. Install the zlib library; otherwise installing pip will fail later (and Python would have to be recompiled)
yum -y install zlib*
  2. Install GCC; if GCC is not installed, use the following command
yum -y install gcc openssl-devel bzip2-devel
  3. Download Python-2.7.18
cd /usr/src
yum -y install wget tar
wget https://www.python.org/ftp/python/2.7.18/Python-2.7.18.tgz
tar xzf Python-2.7.18.tgz
  4. Before compiling, edit Modules/Setup.dist in the source tree to remove the comment from the zlib line
sed -i 's/#zlib zlibmodule.c -I$(prefix)/zlib zlibmodule.c -I$(prefix)/'  Python-2.7.18/Modules/Setup.dist
  5. Compile Python-2.7.18 (make altinstall prevents replacing the default python binary at /usr/bin/python)
cd /usr/src/Python-2.7.18
./configure --enable-optimizations
yum install -y make
make altinstall

Do not overwrite or link the original Python binary; doing so may break the system.

  6. Set environment variables
echo "
export PYTHON_HOME=/usr/local/
PATH=\$PATH:\$PYTHON_HOME/bin
" >> /etc/profile
cat /etc/profile
source /etc/profile
  7. Install pip (via get-pip.py)
curl "https://bootstrap.pypa.io/pip/2.7/get-pip.py" -o "get-pip.py"
python2.7 get-pip.py
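
A quick sanity check that the altinstall'ed interpreter and pip landed under /usr/local:

python2.7 -V        # expect: Python 2.7.18
which python2.7     # expect: /usr/local/bin/python2.7
pip2 --version      # pip should report python 2.7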

Download ceph

# Download on k8s-ceph-node0
# Method 1: use pip to install ceph-deploy
pip2 install ceph-deploy
yum install -y ceph ceph-radosgw
# Download on the other nodes
yum install -y ceph ceph-radosgw
# Check that the installed packages are complete
rpm -qa | egrep -i "ceph|rados|rbd"

Task 3: Deploy the Ceph Cluster

  1. admin node

  2. Deploy the Monitor
  3. Create the configuration file directory and the configuration file

mkdir /etc/ceph/
touch /etc/ceph/ceph.conf
  4. Generate an FSID for the cluster:
uuidgen
30912204-0c26-413f-8e00-6d55c9c0af03
  5. Create a keyring for the cluster and generate a key for the Monitor service:
ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
  6. Create an administrator keyring, generate a client.admin user, and add the user to the keyring:
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
  7. Create a bootstrap-osd keyring and add the client.bootstrap-osd user to it:
ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd'
  8. Import the generated keys into ceph.mon.keyring:
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring

ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
  9. Generate the monitor map using the hostname, IP address, and FSID:
monmaptool --create --add k8s-ceph-node0 10.10.1.16 --fsid 30912204-0c26-413f-8e00-6d55c9c0af03 /tmp/monmap
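
Before going further, the generated map can be printed back to confirm the mon, fsid, and address are correct:

# Print the contents of the monitor map
monmaptool --print /tmp/monmap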
  10. Create the mon directory, using the cluster-name-hostname format:
mkdir /var/lib/ceph/mon/ceph-k8s-ceph-node0
  11. Populate the first mon daemon with the monitor map and keyring:
ceph-mon --mkfs -i k8s-ceph-node0 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
  12. Configure the /etc/ceph/ceph.conf file:
cat /etc/ceph/ceph.conf
################################################
[global]
fsid = 30912204-0c26-413f-8e00-6d55c9c0af03     # the generated FSID
mon initial members = k8s-ceph-node0
mon host = 10.10.1.16
public network = 10.10.1.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1

################################################
  13. Since we are operating as root, set the ownership to ceph (alternatively, edit the systemd unit file to change the ceph user to root), then start the Monitor
chown  -R ceph:ceph /var/lib/ceph
systemctl start [email protected]
systemctl enable [email protected]
  14. Confirm the service started correctly:
ceph -s
yum install -y net-tools
netstat -lntp|grep ceph-mon
  1. Deploy the Manager

Once the ceph-mon service is configured, the ceph-mgr service must be configured as well.

  1. Generate an authentication key (ceph-mgr is a custom name):
#10.10.1.16
ceph auth get-or-create mgr.ceph-mgr mon 'allow profile mgr' osd 'allow *' mds 'allow *'
[mgr.ceph-mgr]
        key = AQANDD9lfWg2LBAAHY0mprdbuKFBPJDkE7/I5Q==
        
#10.10.1.17
ceph auth get-or-create mgr.ceph-mgr1 mon 'allow profile mgr' osd 'allow *' mds 'allow *'
[mgr.ceph-mgr1]
        key = AQDbRTZlgjXWBBAAGew4Xta+t9vgIWPCWC8EVg==
  2. Create the directory that holds this key file
#10.10.1.16
sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-ceph-mgr
# Save the generated key in this directory, in a file named keyring
vi /var/lib/ceph/mgr/ceph-ceph-mgr/keyring
[mgr.ceph-mgr]
        key = AQANDD9lfWg2LBAAHY0mprdbuKFBPJDkE7/I5Q==

#10.10.1.17
sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-ceph-mgr1
# Save the generated key in this directory, in a file named keyring
vi /var/lib/ceph/mgr/ceph-ceph-mgr1/keyring
[mgr.ceph-mgr1]
        key = AQDbRTZlgjXWBBAAGew4Xta+t9vgIWPCWC8EVg==
  3. Start the ceph-mgr service
ceph-mgr -i ceph-mgr
ceph-mgr -i ceph-mgr1
systemctl enable ceph-mgr@k8s-ceph-node0
systemctl enable ceph-mgr@k8s-ceph-node1
# Check that the service started; view ceph status
ceph -s
# List the modules available in the current mgr
ceph mgr module ls
  1. Create OSDs
ceph-volume lvm create --data /dev/sda8
# View the current lvm logical volumes
ceph-volume lvm list
# View ceph status
ceph -s
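
Once created, the OSD should report as up and in:

# Show the CRUSH tree and OSD status
ceph osd tree
ceph osd stat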
  1. Install and Configure Ceph-dashboard
  2. Enable the dashboard module

ceph mgr module enable dashboard
  3. Create a self-signed certificate
ceph dashboard create-self-signed-cert
  4. Configure the web login username and password
# Create /etc/ceph/dashboard.key and write the password into it
echo "qishi#09319" > /etc/ceph/dashboard.key
ceph dashboard ac-user-create k8s administrator -i /etc/ceph/dashboard.key
  5. Change the default dashboard port (optional)

The default port is 8443; here it is changed to 18443. Restart mgr afterwards for the new port to take effect.

ceph config set mgr mgr/dashboard/server_port 18443
systemctl restart ceph-mgr.target
  6. View the published service address and log in
ceph mgr services

{
    "dashboard": "https://k8s-ceph-node0:8443/"
}


  1. node nodes

  2. Expand the Monitors
  3. Modify the configuration on the master node

vi /etc/ceph/ceph.conf
[global]
fsid = 30912204-0c26-413f-8e00-6d55c9c0af03     # the generated FSID
mon initial members = k8s-ceph-node0,k8s-ceph-node1,k8s-ceph-node2,k8s-ceph-node3            # hostnames
mon host = 10.10.1.16,10.10.1.17,10.10.1.18,10.10.1.15                       # corresponding IPs
public network = 10.10.1.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1
[mon]
mon allow pool delete = true

[mds.k8s-ceph-node0]
host = k8s-ceph-node0
  4. Distribute the configuration and key files to the other nodes (from the master node)
# Generate a public key and copy it to the node hosts
ssh-keygen -t rsa
ssh-copy-id 10.10.1.17
ssh-copy-id 10.10.1.18
ssh-copy-id 10.10.1.15
# Copy the authentication keys
scp /etc/ceph/*  10.10.1.17:/etc/ceph/
scp /etc/ceph/*  10.10.1.18:/etc/ceph/
scp /etc/ceph/*  10.10.1.15:/etc/ceph/
  5. Create the ceph directories on the node nodes and set ownership:
mkdir -p  /var/lib/ceph/{bootstrap-mds,bootstrap-mgr,bootstrap-osd,bootstrap-rbd,bootstrap-rgw,mds,mgr,mon,osd}
chown  -R ceph:ceph /var/lib/ceph

sudo -u ceph mkdir /var/lib/ceph/mon/ceph-k8s-ceph-node1
sudo -u ceph mkdir /var/lib/ceph/mon/ceph-k8s-ceph-node2
  6. Modify the node's configuration file, using node1 as an example (the other nodes are similar)
[global]
fsid = 30912204-0c26-413f-8e00-6d55c9c0af03     # the generated FSID
mon initial members = k8s-ceph-node0,k8s-ceph-node1,k8s-ceph-node2,k8s-ceph-node3           # hostnames
mon host = 10.10.1.16,10.10.1.17,10.10.1.18,10.10.1.15                       # corresponding IPs
public network = 10.10.1.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1
[mon]
mon allow pool delete = true

[mon.k8s-ceph-node1]
mon_addr = 10.10.1.17:6789
host = k8s-ceph-node1
  7. Fetch the cluster keys and map, using node1 as an example (the other nodes are similar)
ceph auth get mon. -o /tmp/monkeyring
ceph mon getmap -o /tmp/monmap
  8. Add a new Monitor using the existing keys and map, specifying the hostname, using node1 as an example (the other nodes are similar)
sudo -u ceph ceph-mon --mkfs -i k8s-ceph-node1 --monmap /tmp/monmap --keyring /tmp/monkeyring
  9. Start the service, using node1 as an example (the other nodes are similar)
systemctl start ceph-mon@k8s-ceph-node1
systemctl enable ceph-mon@k8s-ceph-node1
# View mon status
ceph -s
ceph mon stat
  1. Add OSDs

Copy the initialized bootstrap keyring from the master node where an osd already exists

scp -p  /var/lib/ceph/bootstrap-osd/ceph.keyring  10.10.1.17:/var/lib/ceph/bootstrap-osd/
scp -p  /var/lib/ceph/bootstrap-osd/ceph.keyring  10.10.1.18:/var/lib/ceph/bootstrap-osd/
scp -p  /var/lib/ceph/bootstrap-osd/ceph.keyring  10.10.1.15:/var/lib/ceph/bootstrap-osd/

Add osds on the node nodes

ceph-volume lvm create --data /dev/sdb

systemctl enable ceph-osd@k8s-ceph-node1
# Check status
ceph -s
  1. Add an Mds (using node0 as an example)

# Create the directory
sudo -u ceph mkdir -p /var/lib/ceph/mds/ceph-k8s-ceph-node0
# Create the key
ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-k8s-ceph-node0/keyring --gen-key -n mds.k8s-ceph-node0
# Import the key and set its caps
ceph auth add mds.k8s-ceph-node0 osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/ceph-k8s-ceph-node0/keyring
# Start the service manually
ceph-mds --cluster ceph -i k8s-ceph-node0 -m k8s-ceph-node0:6789
chown -R ceph:ceph /var/lib/ceph/mds/
systemctl start ceph-mds@k8s-ceph-node0
systemctl enable ceph-mds@k8s-ceph-node0
# Check that the service started
ps -ef | grep ceph-mds
# Check the ceph cluster status
ceph -s
  1. Create CephFS

Create the pools

# Pool for data
ceph osd pool create cephfs_data 64
# Pool for metadata
ceph osd pool create cephfs_metadata 64
# Enable the cephfs file system
ceph fs new cephfs cephfs_metadata cephfs_data
# View the file system status
ceph fs ls
ceph mds stat
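
As an optional check from any host with ceph-common and network access to the mons, the new file system can be mounted with the kernel client; a sketch, assuming the client.admin credentials and a temporary mount point:

mkdir -p /mnt/cephfs
# Kernel-client mount, authenticating as client.admin
mount -t ceph 10.10.1.16:6789:/ /mnt/cephfs -o name=admin,secret=$(ceph auth get-key client.admin)
df -h /mnt/cephfs
umount /mnt/cephfs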
  1. Create the rbd pool

# Create the rbd pool
ceph osd pool create rbd-k8s 64 64
# Enable it
ceph osd pool application enable rbd-k8s rbd
# Initialize it
rbd pool init rbd-k8s
# View pools
ceph osd lspools
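
A throwaway image is a quick end-to-end check of the new pool (test-img is a hypothetical name):

# Create, inspect, and delete a 1 GiB test image
rbd create rbd-k8s/test-img --size 1024
rbd info rbd-k8s/test-img
rbd rm rbd-k8s/test-img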

Part 2: Deploying the Highly Available k8s Cluster

Task 1: Preparation (same as for the ceph cluster)

Task 2: Install docker

  1. Configure the Docker CE yum repository. Open the docker-ce.repo file and copy the following content into it:
echo '
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://download.docker.com/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg ' > /etc/yum.repos.d/docker-ce.repo

Save and exit the file.

  2. Install Docker CE. Run the following command to install Docker CE:
yum -y install docker-ce docker-ce-cli containerd.io
# Start docker and enable it at boot
systemctl start docker
systemctl enable docker
# Check versions
docker -v
docker compose version
  3. Modify the Docker configuration to set the cgroup driver to systemd, as follows.
# Write the configuration into daemon.json
echo '{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "data-root": "/data/docker"
} ' > /etc/docker/daemon.json
# Verify
cat /etc/docker/daemon.json
systemctl daemon-reload
systemctl restart docker
docker info
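
To confirm the driver change took effect, filter the docker info output:

# Expect: Cgroup Driver: systemd
docker info 2>/dev/null | grep -i 'cgroup driver'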
  4. Create the required directories
cd /data
mkdir cri-dockerd calico dashboard metrics-server script ingress-nginx

Task 3: Install cri-dockerd (k8s 1.24 and later)

cd /data/cri-dockerd
# Download the cri-dockerd package
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.4/cri-dockerd-0.3.4-3.el8.x86_64.rpm
# Install cri-dockerd
rpm -ivh cri-dockerd-0.3.4-3.el8.x86_64.rpm
docker pull registry.aliyuncs.com/google_containers/pause:3.9
# Point the pause image at a domestic mirror; otherwise kubelet cannot pull it and fails to start
sed -i.bak 's|ExecStart=.*$|ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9|g' /usr/lib/systemd/system/cri-docker.service
cat /usr/lib/systemd/system/cri-docker.service
# Start cri-dockerd
systemctl daemon-reload
systemctl start cri-docker.service
systemctl enable cri-docker.service

Task 4: Install the High-Availability Components

Deploying a highly available cluster requires installing keepalived and haproxy to make the master nodes highly available. Perform the following on each master node.

  1. Install keepalived and haproxy
yum install keepalived haproxy -y
  2. Back up the keepalived and haproxy configuration files
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
  3. Modify the /etc/keepalived/keepalived.conf file on each master node
    1. future-k8s-master0

      echo '
      global_defs {
         router_id k8s
      }
      
      vrrp_script check_haproxy {
          script "killall -0 haproxy"
          interval 3
          weight -2
          fall 10
          rise 2
      }
      
      vrrp_instance VI_1 {
          state MASTER  # MASTER on the primary node, BACKUP on the others
          interface ens192  # NIC name
          virtual_router_id 51
          priority 250   # priority
          nopreempt   # non-preempt mode
          advert_int 1
          authentication {
              auth_type PASS
              auth_pass ceb1b3ec013d66163d6ab
          }
          virtual_ipaddress {
              10.10.1.83/24   # virtual IP
          }
          track_script {
              check_haproxy
          }
      }    
      ' > /etc/keepalived/keepalived.conf
      cat /etc/keepalived/keepalived.conf
      
    2. future-k8s-master1

      echo '
      global_defs {
         router_id k8s
      }
      
      vrrp_script check_haproxy {
          script "killall -0 haproxy"
          interval 3
          weight -2
          fall 10
          rise 2
      }
      
      vrrp_instance VI_1 {
          state BACKUP  # MASTER on the primary node, BACKUP on the others
          interface ens192  # NIC name
          virtual_router_id 51
          priority 200   # priority
          nopreempt   # non-preempt mode
          advert_int 1
          authentication {
              auth_type PASS
              auth_pass ceb1b3ec013d66163d6ab
          }
          virtual_ipaddress {
              10.10.1.83/24   # virtual IP
          }
          track_script {
              check_haproxy
          }
      }    
      ' > /etc/keepalived/keepalived.conf
      cat /etc/keepalived/keepalived.conf
      
    3. future-k8s-master2

      echo '
      global_defs {
         router_id k8s
      }
      
      vrrp_script check_haproxy {
          script "killall -0 haproxy"
          interval 3
          weight -2
          fall 10
          rise 2
      }
      
      vrrp_instance VI_1 {
          state BACKUP  # MASTER on the primary node, BACKUP on the others
          interface ens192  # NIC name
          virtual_router_id 51
          priority 150   # priority
          nopreempt   # non-preempt mode
          advert_int 1
          authentication {
              auth_type PASS
              auth_pass ceb1b3ec013d66163d6ab
          }
          virtual_ipaddress {
              10.10.1.83/24   # virtual IP
          }
          track_script {
              check_haproxy
          }
      }    
      ' > /etc/keepalived/keepalived.conf
      cat /etc/keepalived/keepalived.conf
      
  4. Modify the /etc/haproxy/haproxy.cfg file on each master node (the configuration is identical on all three master nodes)
echo "
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode                 tcp
    bind                 *:16443 # HA port, used when initializing the k8s cluster
    option               tcplog
    default_backend      kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server      future-k8s-master0   10.10.1.80:6443 check
    server      future-k8s-master1   10.10.1.81:6443 check
    server      future-k8s-master2   10.10.1.82:6443 check

#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind                 *:1080
    stats auth           admin:awesomePassword
    stats refresh        5s
    stats realm          HAProxy\ Statistics
    stats uri            /admin?stats

" > /etc/haproxy/haproxy.cfg

cat /etc/haproxy/haproxy.cfg
  5. Start the services (start each master node in order)
# Start keepalived
systemctl enable keepalived && systemctl start keepalived
# Start haproxy
systemctl enable haproxy && systemctl start haproxy
systemctl status keepalived
systemctl status haproxy
  6. Check the bound vip address on future-k8s-master0

ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:9a:eb:48 brd ff:ff:ff:ff:ff:ff
    inet 10.10.1.80/24 brd 10.10.3.255 scope global noprefixroute ens192
       valid_lft forever preferred_lft forever
    inet 10.10.1.83/24 scope global ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe9a/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
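
An optional failover check: stop keepalived on master0 and the VIP should move to the next-highest-priority node (master1 in this layout); a sketch of the procedure:

# On future-k8s-master0: simulate a failure
systemctl stop keepalived
# On future-k8s-master1: the VIP 10.10.1.83 should now appear on ens192
ip addr show ens192 | grep 10.10.1.83
# On future-k8s-master0: restore keepalived
systemctl start keepalived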

Task 5: Deploy the k8s Cluster

  1. Add the yum repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
  2. Install kubeadm, kubelet, and kubectl

# Install kubelet, kubeadm, kubectl
yum install -y kubelet-1.28.0 kubeadm-1.28.0 kubectl-1.28.0 --disableexcludes=kubernetes

# Set the cgroup driver to systemd
echo 'KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"' > /etc/sysconfig/kubelet
# Verify
cat /etc/sysconfig/kubelet
# Start and enable at boot
systemctl start kubelet.service
systemctl enable kubelet.service

# Check versions
kubeadm version
kubelet --version
kubectl version
  3. Initialize the k8s cluster (on the future-k8s-master0 node)

    Method 1: initialize with a configuration file
    1. Export the default configuration file (optional)
    kubeadm config print init-defaults > kubeadm-config.yaml
    
    2. Write the configuration file
    echo '
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: 10.10.1.83  # virtual ip
      bindPort: 6443
    nodeRegistration:
      criSocket: unix:///var/run/cri-dockerd.sock
    ---
    apiServer:
      certSANs:    # master nodes and their hostnames
        - future-k8s-master0
        - future-k8s-master1
        - future-k8s-master2
        - future-k8s-vip
        - 10.10.1.80
        - 10.10.1.81
        - 10.10.1.82
        - 10.10.1.83
        - 127.0.0.1
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta3
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controlPlaneEndpoint: "future-k8s-vip:16443" # virtual ip and the HA port configured earlier
    controllerManager: {}
    dns: {}
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: registry.aliyuncs.com/google_containers
    kind: ClusterConfiguration
    kubernetesVersion: 1.28.0
    networking:
      dnsDomain: cluster.local
      podSubnet: 10.244.0.0/16
      serviceSubnet: 10.96.0.0/12
    scheduler: {}
    ' > /data/script/kubeadm-config.yaml
    cat /data/script/kubeadm-config.yaml
    
    3. Initialize the cluster
    kubeadm init --config kubeadm-config.yaml --upload-certs
    
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Method 2: initialize with a command
    1. Deploy the master node; run on 10.10.1.80 to initialize the master node
    # --control-plane-endpoint points at the virtual ip (to be decided) and the HA port
    kubeadm init \
      --apiserver-advertise-address=10.10.1.80 \
      --image-repository registry.aliyuncs.com/google_containers \
      --kubernetes-version v1.28.0 \
      --control-plane-endpoint=future-k8s-vip:16443 \
      --service-cidr=10.96.0.0/12 \
      --pod-network-cidr=10.244.0.0/16 \
      --cri-socket=unix:///var/run/cri-dockerd.sock \
      --ignore-preflight-errors=all
      
      
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    2. Configure passwordless ssh
    # Generate a public key on 10.10.1.80 and copy it to the other master nodes
    ssh-keygen -t rsa
    ssh-copy-id 10.10.1.81
    ssh-copy-id 10.10.1.82
    
    3. Copy the certificates on 10.10.1.80 to the other master nodes
    # Create the certificate directories on the other master nodes
    cd /root && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/
    
    # Copy future-k8s-master0's certificates to future-k8s-master1
    scp /etc/kubernetes/pki/ca.crt 10.10.1.81:/etc/kubernetes/pki/ 
    scp /etc/kubernetes/pki/ca.key 10.10.1.81:/etc/kubernetes/pki/ 
    scp /etc/kubernetes/pki/sa.key 10.10.1.81:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.pub 10.10.1.81:/etc/kubernetes/pki/ 
    scp /etc/kubernetes/pki/front-proxy-ca.crt 10.10.1.81:/etc/kubernetes/pki/ 
    scp /etc/kubernetes/pki/front-proxy-ca.key 10.10.1.81:/etc/kubernetes/pki/ 
    scp /etc/kubernetes/pki/etcd/ca.crt 10.10.1.81:/etc/kubernetes/pki/etcd/
    scp /etc/kubernetes/pki/etcd/ca.key 10.10.1.81:/etc/kubernetes/pki/etcd/
    
    # Copy future-k8s-master0's certificates to future-k8s-master2
    scp /etc/kubernetes/pki/ca.crt 10.10.1.82:/etc/kubernetes/pki/ 
    scp /etc/kubernetes/pki/ca.key 10.10.1.82:/etc/kubernetes/pki/ 
    scp /etc/kubernetes/pki/sa.key 10.10.1.82:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.pub 10.10.1.82:/etc/kubernetes/pki/ 
    scp /etc/kubernetes/pki/front-proxy-ca.crt 10.10.1.82:/etc/kubernetes/pki/ 
    scp /etc/kubernetes/pki/front-proxy-ca.key 10.10.1.82:/etc/kubernetes/pki/ 
    scp /etc/kubernetes/pki/etcd/ca.crt 10.10.1.82:/etc/kubernetes/pki/etcd/
    scp /etc/kubernetes/pki/etcd/ca.key 10.10.1.82:/etc/kubernetes/pki/etcd/
    
  4. Initialize the other master nodes

 kubeadm join future-k8s-vip:16443 --token yjphdh.guefcomqw3am4ask \
        --discovery-token-ca-cert-hash sha256:ed44c7deada0ea0fe5a54212ab4e5aa6fc34672ffe2a2c87a31ba73306e75c21 \
        --control-plane --certificate-key 4929b83577eafcd5933fc0b6506cb6d82e7bc481751e442888c4c2b32b5d0c9c  --cri-socket=unix:///var/run/cri-dockerd.sock
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  5. Initialize the node nodes

kubeadm join future-k8s-vip:16443 --token yjphdh.guefcomqw3am4ask \
        --discovery-token-ca-cert-hash sha256:ed44c7deada0ea0fe5a54212ab4e5aa6fc34672ffe2a2c87a31ba73306e75c21 --cri-socket=unix:///var/run/cri-dockerd.sock
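
The token and certificate key in the join commands above come from the kubeadm init output. If they have expired, fresh values can be generated on an existing master with standard kubeadm commands:

# Print a new worker join command (tokens expire after 24 hours by default)
kubeadm token create --print-join-command
# Re-upload the control-plane certificates and print a new certificate key
kubeadm init phase upload-certs --upload-certs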
  6. Allow scheduling PODs on master nodes (optional)

By default, Kubernetes does not schedule Pods onto master nodes. To use the k8s masters as worker Nodes as well, remove the taint to enable scheduling.

# View the default taint
kubectl describe node future-k8s-master2 |grep Taints

Taints: node-role.kubernetes.io/control-plane

# Remove the taint
kubectl taint nodes future-k8s-master2 node-role.kubernetes.io/control-plane-

Add the worker label

# Add the worker label
kubectl label nodes future-k8s-master2 node-role.kubernetes.io/worker=
# Remove the worker label
kubectl label nodes future-k8s-master2 node-role.kubernetes.io/worker-

Task 6: Install the Network Plugin (master)

Install calico

mkdir /data/calico
wget https://docs.tigera.io/archive/v3.25/manifests/calico.yaml
# Edit calico.yaml and find CALICO_IPV4POOL_CIDR
vi calico.yaml
############## modified content ###################
 value: "10.244.0.0/16"
############## modified content ###################
# Install calico on the master node
kubectl apply -f calico.yaml

Check node status

# View all nodes
kubectl get nodes
kubectl get nodes -o wide
# Check cluster health
kubectl get cs

Task 7: Install nginx for Testing

# Create the Nginx deployment
kubectl create deployment nginx --image=nginx
# Expose port 80
kubectl expose deployment nginx --port=80 --type=NodePort
# View pod status
kubectl get pod
# View service status
kubectl get service
##########################################################################
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        5d1h
nginx        NodePort    10.98.221.224   <none>        80:32743/TCP   23s
##########################################################################
# Test in a browser (use the port shown in the service status output)
http://10.10.1.80:32743/
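
The same test from the command line, using any node IP with the NodePort shown above:

# Expect an HTTP 200 response with the nginx server headers
curl -I http://10.10.1.80:32743/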

Task 8: Install the Dashboard UI

  1. Download the yaml file
# Create a directory for it
mkdir dashboard
cd dashboard/
# v2.7
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
  2. Modify the yaml file
vi recommended.yaml
# Set the replicas to 2
################# modified content #######################
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32009   # add this line; mind the indentation
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort          # add this line; mind the indentation
################# modified content #######################
  3. Apply the manifest; check the pod and svc
# Install
kubectl apply -f recommended.yaml
# View the pod and svc
kubectl get pod,svc -o wide -n kubernetes-dashboard
#########################################################
NAME                                             READY   STATUS              RESTARTS   AGE   IP       NODE    NOMINATED NODE   READINESS GATES
pod/dashboard-metrics-scraper-5cb4f4bb9c-mg569   0/1     ContainerCreating   0          9s    <none>   node1   <none>           <none>
pod/kubernetes-dashboard-6967859bff-2968p        0/1     ContainerCreating   0          9s    <none>   node1   <none>           <none>

NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE   SELECTOR
service/dashboard-metrics-scraper   ClusterIP   10.100.129.191   <none>        8000/TCP        9s    k8s-app=dashboard-metrics-scraper
service/kubernetes-dashboard        NodePort    10.106.130.53    <none>        443:31283/TCP   9s    k8s-app=kubernetes-dashboard
########################################################

Access the Dashboard using the port shown for the svc.

  4. Create a dashboard service account
# Create an admin-user service account and bind it to the cluster
vi dashboard-adminuser.yaml
################## content ####################
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
  
---
# Create a secret to obtain a long-lived bearer token for the service account
apiVersion: v1
kind: Secret
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: "admin-user"
type: kubernetes.io/service-account-token
################## content ####################

# Apply it
kubectl apply -f dashboard-adminuser.yaml
  5. Logging in

Option 1: obtain a long-lived token

# Save it to the admin-user.token file under /data/dashboard/
cd /data/dashboard/
kubectl get secret admin-user -n kubernetes-dashboard -o jsonpath={".data.token"} | base64 -d > admin-user.token

Script to obtain a long-lived token

#!/bin/bash
# Author: Yun
############# Description #############
:<<!
Script to obtain a long-lived token.
The token is stored in the admin-user.token file.
!
############# Description #############
kubectl get secret admin-user -n kubernetes-dashboard -o jsonpath={".data.token"} | base64 -d > admin-user.token

echo -e "\033[1;32mToken created successfully; see the admin-user.token file\033[m"

Option 2: log in with a Kubeconfig file

# Define the token variable
DASH_TOKEN=$(kubectl get secret admin-user -n kubernetes-dashboard -o jsonpath={".data.token"} | base64 -d)
# Set the kubeconfig cluster entry
kubectl config set-cluster kubernetes --server=https://10.10.1.80:6443 --kubeconfig=/root/dashbord-admin.conf
# Set the kubeconfig user entry
kubectl config set-credentials admin-user --token=$DASH_TOKEN --kubeconfig=/root/dashbord-admin.conf
# Set the kubeconfig context entry
kubectl config set-context admin-user@kubernetes --cluster=kubernetes --user=admin-user --kubeconfig=/root/dashbord-admin.conf
# Use the context as the current one
kubectl config use-context admin-user@kubernetes --kubeconfig=/root/dashbord-admin.conf

Copy the generated dashbord-admin.conf file to your local machine; when logging in, choose the Kubeconfig option and select this kubeconfig file.

Task 9: Install metrics-server

Download the deployment file

wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml -O metrics-server-components.yaml

Modify the Deployment section of the yaml file

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls  # added
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.6.4 # modified
        imagePullPolicy: IfNotPresent

# Install
kubectl apply -f metrics-server-components.yaml

Check the metrics-server pod status

kubectl get pods --all-namespaces | grep metrics

After a short wait, the various monitoring graphs display correctly.
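
Resource metrics should then also be available from the CLI:

# Node and pod resource usage served by metrics-server
kubectl top nodes
kubectl top pods -A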


Task 10: kubectl Command Auto-completion

yum -y install bash-completion
source /usr/share/bash-completion/bash_completion
echo 'source <(kubectl completion bash)' >> ~/.bashrc
bash

Task 11: Install the ingress-nginx Controller

# Download the yaml file
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/baremetal/deploy.yaml
# Change the image pull addresses in the yaml file
##################### modified content ######################
willdockerhub/ingress-nginx-controller:v1.0.0
hzde0128/kube-webhook-certgen:v1.0
##################### modified content ######################
# Change the Deployment into a DaemonSet
# Change the network mode to host network
##################### modified content ######################
template:
  spec:
    hostNetwork: true
    dnsPolicy: ClusterFirstWithHostNet
    tolerations:  # this toleration allows deployment on all nodes, including control-plane
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
    nodeSelector:
      kubernetes.io/os: linux
      custem/ingress-controller-ready: 'true'
    containers:
      - name: controller
##################### modified content ######################
# Label the worker nodes (required)
kubectl label nodes future-k8s-master0 custem/ingress-controller-ready=true
kubectl label nodes future-k8s-master1 custem/ingress-controller-ready=true
kubectl label nodes future-k8s-master2 custem/ingress-controller-ready=true
kubectl label nodes future-k8s-node3 custem/ingress-controller-ready=true

# Install
kubectl apply -f deploy.yaml

# Check status
kubectl get pods -n ingress-nginx
################ status ##################
NAME                                       READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-2lz4v       0/1     Completed   0          5m46s
ingress-nginx-admission-patch-c6896        0/1     Completed   0          5m46s
ingress-nginx-controller-7575fb546-q29qn   1/1     Running     0          5m46s

Task 12: Configure the Dashboard Proxy

echo '
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-dashboard
  namespace: kubernetes-dashboard
  labels:
    ingress: k8s-dashboard
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /  # rewrite path
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"   # automatically redirect http to https
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx 
  rules:
    - host: k8s.yjs.51xueweb.cn
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 443
' > /data/dashboard/dashboard-ingress.yaml
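
The snippet above only writes the manifest; it still has to be applied, and the rule can then be probed through the controller. This assumes k8s.yjs.51xueweb.cn resolves (or is forced via the Host header) to a node running ingress-nginx:

kubectl apply -f /data/dashboard/dashboard-ingress.yaml
kubectl get ingress -n kubernetes-dashboard
# -k skips verification of the self-signed certificate
curl -k -H "Host: k8s.yjs.51xueweb.cn" https://10.10.1.80/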

Part 3: Connecting the k8s Cluster to the Ceph Cluster

Task 1: Install the Ceph Client (ceph-common)

Install ceph-common on every node of the k8s cluster

yum install ceph-common -y

Task 2: Synchronize the ceph Cluster Configuration Files

Synchronize the ceph cluster's /etc/ceph/{ceph.conf,ceph.client.admin.keyring} files to all k8s nodes

# Configure passwordless ssh
ssh-keygen -t rsa
ssh-copy-id 10.10.1.80
ssh-copy-id 10.10.1.81
ssh-copy-id 10.10.1.82

# Copy the files
scp -r /etc/ceph/{ceph.conf,ceph.client.admin.keyring} 10.10.1.80:/etc/ceph
scp -r /etc/ceph/{ceph.conf,ceph.client.admin.keyring} 10.10.1.81:/etc/ceph
scp -r /etc/ceph/{ceph.conf,ceph.client.admin.keyring} 10.10.1.82:/etc/ceph

Task 3: Deploy ceph-csi (using rbd)

  1. Download the ceph-csi component (on one k8s master node)
# Download the file
wget https://github.com/ceph/ceph-csi/archive/refs/tags/v3.9.0.tar.gz
# Extract it
mv v3.9.0.tar.gz ceph-csi-v3.9.0.tar.gz
tar -xzf ceph-csi-v3.9.0.tar.gz
# Enter the directory
cd ceph-csi-3.9.0/deploy/rbd/kubernetes
mkdir /data/cephfs/csi
# Copy the six files into the csi directory
cp * /data/cephfs/csi
  2. Pull the images required by the csi components
# View the required images
grep image csi-rbdplugin-provisioner.yaml
grep image csi-rbdplugin.yaml

Pull the required images on all k8s nodes

cd /data/script
./pull-images.sh registry.k8s.io/sig-storage/csi-provisioner:v3.5.0
./pull-images.sh registry.k8s.io/sig-storage/csi-resizer:v1.8.0
./pull-images.sh registry.k8s.io/sig-storage/csi-snapshotter:v6.2.2
docker pull  quay.io/cephcsi/cephcsi:v3.9.0
./pull-images.sh registry.k8s.io/sig-storage/csi-attacher:v4.3.0
./pull-images.sh registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.8.0
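
The pull-images.sh helper is not shown in the original; a minimal sketch of what such a script might look like, assuming the registry.aliyuncs.com/google_containers mirror carries the sig-storage images (the mirror path is an assumption):

#!/bin/bash
# Hypothetical helper: pull a registry.k8s.io image via a domestic mirror, then retag it.
# Usage: ./pull-images.sh registry.k8s.io/sig-storage/csi-provisioner:v3.5.0
IMAGE=$1                                         # e.g. registry.k8s.io/sig-storage/csi-provisioner:v3.5.0
NAME_TAG=${IMAGE##*/}                            # e.g. csi-provisioner:v3.5.0
MIRROR=registry.aliyuncs.com/google_containers   # assumed mirror
docker pull ${MIRROR}/${NAME_TAG}
docker tag  ${MIRROR}/${NAME_TAG} ${IMAGE}
docker rmi  ${MIRROR}/${NAME_TAG}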
  3. Create the cephfs namespace
echo '
apiVersion: v1
kind: Namespace
metadata:
  labels:
    kubernetes.io/metadata.name: cephfs
  name: cephfs
' > ceph-namespace.yaml

# Apply
kubectl apply -f ceph-namespace.yaml
  4. Create csi-rbd-secret.yaml, the secret file for connecting to the ceph cluster
echo '
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: cephfs
stringData:
  adminID: admin 
  adminKey: AQANDD9lfWg2LBAAHY0mprdbuKFBPJDkE7/I5Q==
  userID: admin  
  userKey: AQANDD9lfWg2LBAAHY0mprdbuKFBPJDkE7/I5Q==
  ' > csi-rbd-secret.yaml
  
# Apply
kubectl apply -f csi-rbd-secret.yaml
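
The adminKey/userKey values above must match the real cluster credential, which can be read back on any ceph node:

# Print the client.admin key expected in csi-rbd-secret.yaml
ceph auth get-key client.admin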
  5. Create ceph-config-map.yaml
echo '
apiVersion: v1
kind: ConfigMap
data:
  ceph.conf: |
     [global]
     fsid = 30912204-0c26-413f-8e00-6d55c9c0af03     # the generated FSID
     mon initial members = k8s-ceph-node0,k8s-ceph-node1,k8s-ceph-node2            # hostnames
     mon host = 10.10.1.16,10.10.1.17,10.10.1.18                       # corresponding IPs
     public network = 10.10.1.0/24
     auth cluster required = cephx
     auth service required = cephx
     auth client required = cephx
     osd journal size = 1024
     osd pool default size = 3
     osd pool default min size = 2
     osd pool default pg num = 333
     osd pool default pgp num = 333
     osd crush chooseleaf type = 1
     [mon]
     mon allow pool delete = true

     [mds.k8s-ceph-node0]    
     host = k8s-ceph-node0
  keyring: |
metadata:
  name: ceph-config
  namespace: cephfs
' > ceph-config-map.yaml

# Apply
kubectl apply -f ceph-config-map.yaml
  6. Modify csi-config-map.yaml to configure the ceph cluster connection information
echo '
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-config
  namespace: cephfs
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
data:
  config.json: |-
    [{"clusterID":"30912204-0c26-413f-8e00-6d55c9c0af03","monitors":["10.10.1.16:6789","10.10.1.17:6789","10.10.1.18:6789"]}]
' > csi-config-map.yaml
  7. Modify the csi component configuration files

    1. In all yaml files copied into /data/cephfs/csi, change the namespace from default to cephfs

      cd /data/cephfs/csi
      sed -i "s/namespace: default/namespace: cephfs/g" $(grep -rl "namespace: default" ./)
      sed -i -e "/^kind: ServiceAccount/{N;N;a\  namespace: cephfs}" $(egrep -rl "^kind: ServiceAccount" ./)

    2. Comment out the kms sections in csi-rbdplugin-provisioner.yaml and csi-rbdplugin.yaml

      # - name: KMS_CONFIGMAP_NAME
      #   value: encryptionConfig
      # - name: ceph-csi-encryption-kms-config
      #   configMap:
      #     name: ceph-csi-encryption-kms-config

# Apply and install the csi components
kubectl apply -f csi-config-map.yaml
kubectl apply -f csi-nodeplugin-rbac.yaml
kubectl apply -f csidriver.yaml
kubectl apply -f csi-provisioner-rbac.yaml
kubectl apply -f csi-rbdplugin-provisioner.yaml
kubectl apply -f csi-rbdplugin.yaml
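
Before moving on to the StorageClass, wait until the csi pods are Running:

# Expect csi-rbdplugin-provisioner replicas plus one csi-rbdplugin pod per node
kubectl get pods -n cephfs -o wide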

Task 4: Create the storageclass

echo '
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    k8s.kuboard.cn/storageType: cephfs_provisioner
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  # fsName: cephfs  (used for the cephfs mode)
  clusterID: 30912204-0c26-413f-8e00-6d55c9c0af03 
  pool: rbd-k8s 
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: cephfs
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: cephfs
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: cephfs
  csi.storage.k8s.io/fstype: xfs
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
mountOptions:
  - discard
 ' > storageclass.yaml
 
# Apply
kubectl apply -f storageclass.yaml

Task 5: Create a PVC

echo '
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc 
  namespace: cephfs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi 
  storageClassName: csi-rbd-sc
  ' > pvc.yaml
  
# Apply
kubectl apply -f pvc.yaml
# Check that the PVC was created successfully
kubectl get pvc -n cephfs
# Check that the PV was created successfully
kubectl get pv -n cephfs

# Check whether an image was created in the ceph cluster's rbd-k8s pool
rbd ls -p rbd-k8s

Task 6: Create a Pod for Testing and Verification

echo '
apiVersion: v1
kind: Pod
metadata:
  name: csi-rbd-demo-pod
  namespace: cephfs
spec:
  containers:
    - name: web-server
      image: nginx:latest
      volumeMounts:
        - name: mypvc
          mountPath: /var/lib/www/html
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: rbd-pvc 
        readOnly: false
' > pod.yaml

# Apply
kubectl apply -f pod.yaml
# Enter the container and check the mount
kubectl exec -it csi-rbd-demo-pod -n cephfs -- bash
lsblk -l | grep rbd
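
A quick write test inside the container confirms the RBD-backed volume is writable (the mount path comes from the pod spec above):

# Run inside the csi-rbd-demo-pod shell opened above
df -h /var/lib/www/html
echo "hello from rbd" > /var/lib/www/html/test.html
cat /var/lib/www/html/test.html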