Network#
Kubernetes Network Model Principles:
- Pods in the cluster can communicate with any other Pod without using Network Address Translation (NAT).
- Programs running on cluster nodes can communicate with any Pod on the same node without using NAT.
- Each Pod has its own IP address (IP-per-Pod), and any other Pod can reach it via that same address.
Kubernetes solves container networking through the CNI (Container Network Interface) standard: any network plugin that implements the core operations defined by CNI (ADD, add a container to a network; DEL, remove a container from a network; CHECK, verify that a container's network is as expected; etc.) can be plugged in to provide communication inside the cluster. CNI plugins typically focus on container-to-container network communication.
The CNI "interface" is not an HTTP or gRPC interface; it refers to invoking (exec) executable programs. The default CNI plugin path on Kubernetes nodes is /opt/cni/bin.
CNI describes network configurations through JSON-format configuration files. When a container's network needs to be set up, the container runtime executes the CNI plugin, passing the configuration to the plugin on standard input (stdin) and reading the plugin's result from standard output (stdout); a minimal example configuration is shown after the plugin categories below. Network plugins can be divided into five categories by function:
- Main plugins: Create specific network devices (bridge: a bridge device connecting container and host; ipvlan: adds an ipvlan interface to the container; loopback: sets up the container's loopback (lo) interface; macvlan: creates a macvlan interface with its own MAC address for the container; ptp: creates a veth pair; vlan: allocates a vlan device; host-device: moves an existing device into the container).
- IPAM plugins: Responsible for allocating IP addresses (dhcp: the container requests from the DHCP server to issue or reclaim IP addresses for Pods; host-local: uses pre-configured IP address ranges for allocation; static: allocates a static IPv4/IPv6 address for the container, mainly for debugging).
- META plugins: Other functional plugins (tuning: adjusts network device parameters via sysctl; portmap: configures port mapping via iptables; bandwidth: uses Token Bucket Filter for rate limiting; sbr: sets source-based routing for network cards; firewall: limits incoming and outgoing traffic for container networks via iptables).
- Windows plugins: CNI plugins specifically for the Windows platform (win-bridge and win-overlay network plugins).
- Third-party network plugins: There are many third-party open-source network plugins, each with its own advantages and applicable scenarios, making it difficult to form a unified standard component. Common ones include Flannel, Calico, Cilium, and OVN network plugins.
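For reference, below is a minimal sketch of a CNI configuration file that chains a Main plugin (bridge) with host-local IPAM and the portmap META plugin; the file name, network name, and the 10.244.0.0/16 subnet are illustrative assumptions, not values required by Kubernetes.
cat > /etc/cni/net.d/10-mynet.conflist << EOF
{
  "cniVersion": "1.0.0",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "routes": [ { "dst": "0.0.0.0/0" } ]
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
EOF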
Provider | Network Model | Routing Distribution | Network Policy | Mesh | External Data Storage | Encryption | Ingress/Egress Policy |
---|---|---|---|---|---|---|---|
Canal | Encapsulation (VXLAN) | No | Yes | No | k8s API | Yes | Yes |
Flannel | Encapsulation (VXLAN) | No | No | No | k8s API | Yes | No |
Calico | Encapsulation (VXLAN, IPIP) or Unencapsulated | Yes | Yes | Yes | Etcd and k8s API | Yes | Yes |
Weave | Encapsulation | Yes | Yes | Yes | No | Yes | Yes |
Cilium | Encapsulation (VXLAN) | Yes | Yes | Yes | Etcd and k8s API | Yes | Yes |
- Network Model: Encapsulated or Unencapsulated.
- Routing Distribution: Whether routes to Pod networks are distributed between nodes via an exterior gateway protocol, typically BGP, which exchanges routing and reachability information. This feature is essential for unencapsulated CNI network plugins; if you want to build a cluster split across network segments, routing distribution is a good feature to have.
- Network Policy: Kubernetes can enforce rules that determine which workloads are allowed to communicate with each other using NetworkPolicy resources. This has been a stable feature since Kubernetes 1.7 and requires a network plugin that implements it (see the example after this list).
- Mesh: Allows network communication between services across different Kubernetes clusters.
- External Data Storage: CNI network plugins with this feature require an external data store to store data.
- Encryption: Allows for encrypted and secure network control and data planes.
- Ingress/Egress Policy: Allows you to manage routing control for Kubernetes and non-Kubernetes communications.
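As a concrete illustration of the Network Policy column, the following is a minimal sketch of a NetworkPolicy that only allows Pods labeled role=frontend to reach Pods labeled app=web on TCP port 80; the names and labels are illustrative, and the policy only takes effect with a CNI plugin that enforces NetworkPolicy (e.g., Calico or Cilium).
cat << EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-web
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 80
EOF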
Calico is a pure Layer 3 virtual network: it does not reuse Docker's docker0 bridge but implements its own data plane. Calico does not add extra encapsulation to packets and requires no NAT or port mapping.
Calico Architecture:
Felix
- Manages network interfaces and writes routes.
- Writes ACLs.
- Reports status.
bird (BGP Client)
The BGP client (BIRD) advertises routes to the other Calico nodes via the BGP protocol, achieving network intercommunication.
confd
Watches etcd for changes to BGP configuration and global defaults. confd dynamically regenerates the BGP configuration files from the updated etcd data and triggers a BIRD reload when the files change.
Calico Network Mode - VXLAN:
What is VXLAN?
VXLAN, or Virtual Extensible LAN, is a type of network virtualization technology supported by Linux. VXLAN can fully implement encapsulation and decapsulation in kernel mode, thereby constructing an overlay network through a "tunneling" mechanism.
VXLAN provides Layer 2 communication on top of Layer 3: the VXLAN packet is encapsulated in a UDP packet, so the only requirement is that the k8s nodes can reach each other over UDP at Layer 3. "Layer 2" means that the inner frame's source and destination MAC addresses are those of the local VXLAN device and the peer node's VXLAN device.
Packet Encapsulation: On the VXLAN device, the source and destination MACs of packets sent by the Pod are replaced with the MAC of the local VXLAN interface and the MAC of the peer node's VXLAN interface. The destination IP of the outer UDP packet is obtained from the routing table and the FDB entry for the peer VXLAN device's MAC.
Advantages: As long as the k8s nodes can reach each other at Layer 3, Pods can communicate across network segments, with no special requirements on the hosts' gateway routing; each node achieves Layer 2 intercommunication over Layer 3 through its VXLAN device, as described above.
Disadvantages: The encapsulation and decapsulation of VXLAN packets may incur some performance overhead.
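On a node where Calico runs in VXLAN mode, the devices and forwarding state can be inspected with standard iproute2 tools; a minimal sketch (vxlan.calico is Calico's default VXLAN interface name):
# Show the VXLAN device Calico created (VNI, dstport, local IP)
ip -d link show vxlan.calico
# Routes to other nodes' Pod CIDRs point at the VXLAN device
ip route | grep vxlan.calico
# FDB entries map peer VXLAN MACs to the peer nodes' IPs
bridge fdb show dev vxlan.calico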
Calico Configuration to Enable VXLAN
# Disable IPIP on the default IPv4 pool
- name: CALICO_IPV4POOL_IPIP
  value: "Never"
# Enable or Disable VXLAN on the default IPv4 pool.
- name: CALICO_IPV4POOL_VXLAN
  value: "Always"
# Enable or Disable VXLAN on the default IPv6 pool.
- name: CALICO_IPV6POOL_VXLAN
  value: "Always"
# Switch the backend from bird to vxlan
# calico_backend: "bird"
calico_backend: "vxlan"
# Comment out the bird liveness and readiness probes
# --bird-live
# --bird-ready
Calico Network Mode - IPIP:
Native kernel support in Linux.
The working principle of an IPIP tunnel is to encapsulate the source host's IP packet inside a new IP packet whose destination address is the other end of the tunnel. At the other end, the receiver decapsulates the original IP packet and passes it to the target host. This lets hosts in different networks communicate as long as they are routable at Layer 3.
Packet Encapsulation: Encapsulation removes the MAC layer from the packets sent by the Pod on the tunl0 device, leaving the IP layer encapsulated. The outer packet's destination IP address is obtained based on routing.
Advantages: As long as there is Layer 3 connectivity between k8s nodes, it can cross network segments without special requirements for the host gateway routing.
Disadvantages: The encapsulation and decapsulation of IPIP packets may incur some performance overhead.
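On a node where Calico runs in IPIP mode, the tunnel device and routes can be inspected; a minimal sketch (tunl0 is the kernel IPIP device used by Calico):
# Show the IPIP tunnel device
ip -d link show tunl0
# Routes to remote Pod CIDRs go out via tunl0 with the remote node as the next hop
ip route | grep tunl0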
Calico Configuration to Enable IPIP
# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
  value: "Always"
# Enable or Disable VXLAN on the default IPv4 pool.
- name: CALICO_IPV4POOL_VXLAN
  value: "Never"
# Enable or Disable VXLAN on the default IPv6 pool.
- name: CALICO_IPV6POOL_VXLAN
  value: "Never"
Calico Network Mode - BGP:
Border Gateway Protocol (BGP) is the core decentralized, autonomous routing protocol of the Internet. It achieves reachability between autonomous systems (AS) by maintaining IP routing tables or 'prefix' tables and is classified as a path-vector routing protocol. BGP does not use the metrics of traditional Interior Gateway Protocols (IGPs); instead it decides routes based on paths, network policies, or rule sets, so it is more accurately described as a reachability protocol than a classic routing protocol. In simple terms, BGP lets a data center integrate multiple upstream lines (e.g., China Telecom, China Unicom, China Mobile) into one, achieving multi-line access with a single IP. The advantage of a BGP data center is that servers only need one IP address; the best access route is chosen by backbone routers based on hop counts and other metrics, without consuming any resources on the server.
Packet Encapsulation: No need for packet encapsulation.
Advantages: No encapsulation and decapsulation are required; BGP protocol can achieve Layer 3 reachability of Pod networks between hosts.
Disadvantages: Configuration is more complex when crossing network segments, and network requirements are higher; the host gateway routing also needs to act as a BGP Speaker.
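When Calico runs in BGP mode, peering status can be checked on each node; a minimal sketch (assumes the calicoctl CLI is installed on the node):
# Show BGP peers and session state ("Established" means routes are being exchanged)
calicoctl node status
# Routes learned via BIRD show up with "proto bird"
ip route | grep bird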
Calico Configuration to Enable BGP
# Disable IPIP
- name: CALICO_IPV4POOL_IPIP
  value: "Off"
# Enable or Disable VXLAN on the default IPv4 pool.
- name: CALICO_IPV4POOL_VXLAN
  value: "Never"
# Enable or Disable VXLAN on the default IPv6 pool.
- name: CALICO_IPV6POOL_VXLAN
  value: "Never"
Service#
In a Kubernetes cluster, each Node runs a kube-proxy process. kube-proxy is responsible for implementing the VIP (Virtual IP) form of access for Services.
In Kubernetes version 1.0, the proxy was entirely in userspace. In Kubernetes version 1.1, an iptables proxy was added, but it was not the default running mode. Starting from Kubernetes version 1.2, the default is the iptables proxy. In Kubernetes version 1.8.0-beta.0, an ipvs proxy was added.
Userspace
kube-proxy:
- Listens to the APISERVER and modifies local iptables rules based on Service changes.
- Proxies user requests itself, in userspace, to the backend Pods.
Iptables
kube-proxy:
- Listens to the APISERVER and modifies local iptables rules.
Compared with the userspace mode, traffic is forwarded entirely by the kernel's iptables rules rather than passing through the kube-proxy process, so kube-proxy only maintains rules and carries much less load.
IPVS
kube-proxy:
- Listens to the APISERVER and modifies local ipvs rules.
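On a node running kube-proxy in IPVS mode, the programmed virtual servers and the active proxy mode can be checked; a minimal sketch (assumes ipvsadm is installed and kube-proxy's metrics port is the default 10249):
# List the IPVS virtual servers and real servers programmed by kube-proxy
ipvsadm -Ln
# Ask kube-proxy which proxy mode it is actually running in
curl -s 127.0.0.1:10249/proxyMode && echo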
Secret#
Kubernetes ensures the security of Secrets by distributing them only to the machine nodes where the Pods that need access to the Secrets are located. Secrets are only stored in the node's memory and are never written to physical storage, so when a Secret is deleted from a node, there is no need to wipe disk data.
Starting from Kubernetes version 1.7, the API server can store Secrets in etcd in encrypted form (encryption at rest must be explicitly configured), which guarantees the security of Secrets to some extent.
Secret Types: Opaque (arbitrary user-defined data, the default), kubernetes.io/service-account-token, kubernetes.io/dockerconfigjson, kubernetes.io/tls, kubernetes.io/basic-auth, kubernetes.io/ssh-auth, and bootstrap.kubernetes.io/token.
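A few creation and inspection commands as a minimal sketch (resource names and literal values are illustrative):
# Opaque secret from literal values
kubectl create secret generic db-user-pass --from-literal=username=admin --from-literal=password='S3cret!'
# TLS secret from an existing certificate and key
kubectl create secret tls web-tls --cert=tls.crt --key=tls.key
# Values are stored base64-encoded; decode to read them
kubectl get secret db-user-pass -o jsonpath='{.data.password}' | base64 -d && echo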
Downward API#
The Downward API is a Kubernetes feature that lets a container obtain information about itself at runtime without querying the API server directly. This information can be injected into the container as environment variables or files, giving the container access to details of its runtime environment such as the Pod name, namespace, labels, and so on (see the example after the list below).
- Provides container metadata.
- Dynamic configuration.
- Integration with the Kubernetes environment.
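A minimal sketch of a Pod that injects its own metadata through the Downward API as environment variables (the Pod name and image are illustrative):
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  containers:
  - name: main
    image: busybox:1.36
    command: ["sh", "-c", "printenv POD_NAME POD_NAMESPACE NODE_NAME; sleep 3600"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
EOF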
HELM#
Helm is Kubernetes' official package manager, similar in spirit to YUM: it packages and templates the whole deployment process. Helm has two core concepts, chart and release (the default chart layout generated by helm create is sketched after the list below).
- Chart: A collection of information for creating an application, including configuration templates for various Kubernetes objects, parameter definitions, dependencies, documentation, etc. A chart is a self-contained logical unit for application deployment. You can think of a chart as a software installation package in apt or yum.
- Release: An instance of a chart that is running, representing a running application. When a chart is installed into a Kubernetes cluster, a release is generated. A chart can be installed multiple times in the same cluster, with each installation being a separate release.
- Helm CLI: The helm client component responsible for communicating with the Kubernetes API server.
- Repository: A repository for publishing and storing Charts.
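For reference, a sketch of the skeleton that helm create generates (mychart is an illustrative name):
helm create mychart
# Key files in the generated layout:
#   mychart/Chart.yaml            chart metadata (name, version, appVersion)
#   mychart/values.yaml           default configuration values
#   mychart/charts/               chart dependencies (subcharts)
#   mychart/templates/            Kubernetes manifest templates
#   mychart/templates/NOTES.txt   usage notes printed after install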
Download and Install#
Download Helm
wget https://get.helm.sh/helm-v3.18.4-linux-amd64.tar.gz
tar -zxvf helm-v3.18.4-linux-amd64.tar.gz
cp -a linux-amd64/helm /usr/local/bin/
chmod a+x /usr/local/bin/helm
helm version
Add a domestic mirror source for the chart repository
helm repo add bitnami https://helm-charts.itboon.top/bitnami --force-update
helm repo update
# Search repository content
helm search repo bitnami
Install Chart Example#
# View apache package configuration
helm show values bitnami/apache
# Install apache
helm install bitnami/apache --generate-name
# View
helm list -n default
kubectl get svc
kubectl get pod
# View basic information of the chart
helm show chart bitnami/apache
# View all information of the chart
helm show all bitnami/apache
# Uninstall version
helm uninstall apache-1753181984
# Keep historical versions
helm uninstall apache-1753181984 --keep-history
# View information of this version
helm status apache-1753182488
More examples
# Search for wordpress chart package in the current repository
helm search repo wordpress
# Search for wordpress chart package in the official repository
helm search hub wordpress
# Install apache and specify the name as apache-1753234488
helm install apache-1753234488 bitnami/apache
Install Custom Chart
# View apache package configuration
helm show values bitnami/apache
# Create yaml file and add parameters to modify
vi apache.yml
service:
  type: NodePort
# Override configuration parameters and install apache
helm install -f apache.yml bitnami/apache --generate-name
In addition to overriding values with a yaml file, you can also use --set to override specific items on the command line. If both are used at the same time, the values from --set are merged into --values, but --set takes higher priority; the content overridden with --set is saved in a ConfigMap. You can view the values set with --set for a given release by running helm get values <release-name>, and you can clear them by running helm upgrade with the --reset-values flag.
The format and limitations of --set
The --set option takes zero or more name/value pairs. The simplest usage is --set name=value, which is equivalent to the following YAML:
name: value
Multiple values are separated by commas, so --set a=b,c=d in YAML representation is:
a: b
c: d
More complex expressions are supported. For example, --set outer.inner=value is converted to:
outer:
inner: value
Lists are expressed using curly braces ({}); for example, --set name={a,b,c} is converted to:
name:
- a
- b
- c
Certain names/keys can be set to null or to an empty array; for example, --set name=[],a=null is converted to:
name: []
a: null
Upgrade and Rollback#
helm upgrade performs a minimally invasive upgrade, only updating the content that has changed since the last release.
# helm upgrade -f <values-file> <release-name> <chart>
helm upgrade -f apache.yml apache-1753183272 bitnami/apache
Version rollback
# View existing versions
# helm history <release-name>
helm history apache-1753183272
# Execute rollback
# helm rollback <release-name> <revision>
helm rollback apache-1753183272 1
Create Custom Chart Package#
# Create chart package
helm create test
# Delete unnecessary files
# Create yaml resource list in templates
vi nodePort.yaml
############################
apiVersion: v1
kind: Service
metadata:
  name: myapp-test-202401110926-svc
  labels:
    app: myapp-test
spec:
  type: NodePort
  selector:
    app: myapp-test
  ports:
  - name: "80-80"
    protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 31111
############################
vi deployment.yaml
############################
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-test-202401110926-deploy
  labels:
    app: myapp-test
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp-test
  template:
    metadata:
      labels:
        app: myapp-test
    spec:
      containers:
      - name: myapp
        image: wangyanglinux/myapp:v1.0
############################
# Publish deployment
helm install test test/
Complete Example
vi templates/NOTES.txt
############################
1. This is a test myapp chart.
2. myapp release name: myapp-test-{{ now | date "20060102030405" }}-deploy
3. service name: myapp-test-{{ now | date "20060102030405" }}-svc
############################
vi templates/deployment.yaml
############################
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-test-{{ now | date "20060102030405" }}-deploy
  labels:
    app: myapp-test
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: myapp-test
  template:
    metadata:
      labels:
        app: myapp-test
    spec:
      containers:
      - name: myapp
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
############################
vi templates/service.yaml
############################
apiVersion: v1
kind: Service
metadata:
  name: myapp-test-{{ now | date "20060102030405" }}-svc
  labels:
    app: myapp-test
spec:
  type: {{ .Values.service.type | quote }}
  selector:
    app: myapp-test
  ports:
  - name: "80-80"
    protocol: TCP
    port: 80
    targetPort: 80
    {{- if eq .Values.service.type "NodePort" }}
    nodePort: {{ .Values.service.nodePort }}
    {{- end }}
############################
# At the same level as the templates directory
vi values.yaml
############################
replicaCount: 5
image:
  repository: wangyanglinux/myapp
  tag: "v1.0"
service:
  type: NodePort
  nodePort: 32321
############################
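Before (re)installing, the chart can be linted and rendered locally to confirm that the templates and values.yaml produce the expected manifests; a minimal sketch using the test/ chart created above:
# Static checks on chart structure and templates
helm lint test/
# Render the templates locally without installing anything
helm template test test/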
Binary High Availability Kubernetes Cluster Deployment#
Foreword#
Deploy a highly available Kubernetes cluster from binaries on five servers: three master nodes and two worker nodes.
Cluster Architecture#
(1) Basic Environment
Operating System: Rocky Linux release 10.0
Software: Kubernetes-1.33.4, docker-28.3.3
(2) Environment Preparation
Hostname | IP | Cluster and Component Roles |
---|---|---|
k8s-master01 | 192.168.0.111 | master, api-server, control manager, scheduler, etcd, kubelet, kube-proxy, nginx |
k8s-master02 | 192.168.0.112 | master, api-server, control manager, scheduler, etcd, kubelet, kube-proxy, nginx |
k8s-master03 | 192.168.0.113 | master, api-server, control manager, scheduler, etcd, kubelet, kube-proxy, nginx |
k8s-node01 | 192.168.0.114 | worker, kubelet, kube-proxy, nginx |
k8s-node02 | 192.168.0.115 | worker, kubelet, kube-proxy, nginx |
Environment Initialization#
(1) Change the system software source and download dependency software
sed -e 's|^mirrorlist=|#mirrorlist=|g' \
-e 's|^#baseurl=http://dl.rockylinux.org/$contentdir|baseurl=https://mirrors.aliyun.com/rockylinux|g' \
-i.bak \
/etc/yum.repos.d/[Rr]ocky*.repo
dnf makecache
# Download dependency software
yum install -y wget openssl gcc gcc-c++ zlib-devel openssl-devel make redhat-rpm-config
(2) Rename hostname
# Run the matching command on each host
hostnamectl set-hostname k8s-master01 && bash
hostnamectl set-hostname k8s-master02 && bash
hostnamectl set-hostname k8s-master03 && bash
hostnamectl set-hostname k8s-node01 && bash
hostnamectl set-hostname k8s-node02 && bash
(3) Modify system environment
# Stop firewalld firewall
systemctl stop firewalld
systemctl disable firewalld
firewall-cmd --state
# Install iptables
yum install -y iptables-services
systemctl start iptables
iptables -F
systemctl enable iptables
# Permanently disable selinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
cat /etc/selinux/config
# Permanently disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
cat /etc/fstab
# Set timezone
timedatectl set-timezone Asia/Shanghai
date
# Add hosts
cat >> /etc/hosts << EOF
192.168.0.111 k8s-master01
192.168.0.112 k8s-master02
192.168.0.113 k8s-master03
192.168.0.114 k8s-node01
192.168.0.115 k8s-node02
EOF
# View
cat /etc/hosts
(4) Install ipvs
# Install ipvs
yum -y install ipvsadm sysstat conntrack libseccomp
# Enable IP forwarding
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
sysctl -p
# Load ipvs modules
cat >> /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
systemctl restart systemd-modules-load.service
lsmod | grep -e ip_vs -e nf_conntrack
(5) Exclude calico network card from being managed by NetworkManager
# Exclude calico network card from being managed by NetworkManager
cat > /etc/NetworkManager/conf.d/calico.conf << EOF
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*
EOF
systemctl restart NetworkManager
(6) Configure time synchronization server
# Modify chrony configuration file on k8s-master01
sed -i -e 's/2\.rocky\.pool\.ntp\.org/ntp.aliyun.com/g' -e 's/#allow 192\.168\.0\.0\/16/allow 192.168.0.0\/24/g' -e 's/#local stratum 10/local stratum 10/g' /etc/chrony.conf
# Modify chrony configuration file on k8s-master02
sed -i -e 's/2\.rocky\.pool\.ntp\.org/ntp.aliyun.com/g' -e 's/#allow 192\.168\.0\.0\/16/allow 192.168.0.0\/24/g' -e 's/#local stratum 10/local stratum 11/g' /etc/chrony.conf
# Modify chrony configuration file on k8s-master03
sed -i -e 's/2\.rocky\.pool\.ntp\.org/ntp.aliyun.com/g' -e 's/#allow 192\.168\.0\.0\/16/allow 192.168.0.0\/24/g' -e 's/#local stratum 10/local stratum 12/g' /etc/chrony.conf
# Modify chrony configuration files on k8s-node01 and k8s-node02
sed -i 's/^pool 2\.rocky\.pool\.ntp\.org iburst$/pool 192.168.0.111 iburst\
pool 192.168.0.112 iburst\
pool 192.168.0.113 iburst/g' /etc/chrony.conf
# Restart chronyd
systemctl restart chronyd
# Verify
chronyc sources -v
(7) Set the maximum number of open files for processes
# Configure ulimit
ulimit -SHn 65535
cat >> /etc/security/limits.conf << EOF
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
(8) Modify kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
EOF
sysctl --system
Install Docker#
(1) Install Docker using a script
bash <(curl -sSL https://linuxmirrors.cn/docker.sh)
(2) Modify Docker configuration
cat >/etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"registry-mirrors": [
"http://hub-mirror.c.163.com",
"https://hub.rat.dev",
"https://docker.mirrors.ustc.edu.cn",
"https://docker.1panel.live",
"https://docker.m.daocloud.io",
"https://docker.1ms.run"
],
"max-concurrent-downloads": 10,
"log-driver": "json-file",
"log-level": "warn",
"log-opts": {
"max-size": "10m",
"max-file": "3"
},
"data-root": "/data/dockerData"
}
EOF
systemctl daemon-reload
systemctl restart docker
Install cri-dockerd#
(1) Download and install cri-dockerd
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.18/cri-dockerd-0.3.18.amd64.tgz
tar xvf cri-dockerd-*.amd64.tgz
cp -r cri-dockerd/* /usr/bin/
chmod +x /usr/bin/cri-dockerd
(2) Add cri-docker service configuration file
cat > /usr/lib/systemd/system/cri-docker.service <<EOF
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket
[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.10
ExecReload=/bin/kill -s HUP \$MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
(3) Add cri-docker socket configuration file
cat > /usr/lib/systemd/system/cri-docker.socket <<EOF
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service
[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
EOF
(4) Start cri-dockerd to make the configuration take effect
systemctl daemon-reload
systemctl enable --now cri-docker.service
systemctl status cri-docker.service
Install etcd cluster (master nodes)#
(1) Download etcd package and install
wget https://github.com/etcd-io/etcd/releases/download/v3.6.4/etcd-v3.6.4-linux-amd64.tar.gz
tar -xf etcd-*.tar.gz
mv etcd-*/etcd /usr/local/bin/ && mv etcd-*/etcdctl /usr/local/bin/
ls /usr/local/bin/
etcdctl version
Install Kubernetes Cluster#
(1) Download Kubernetes binary package and install
wget https://dl.k8s.io/v1.33.2/kubernetes-server-linux-amd64.tar.gz
# Execute on master nodes
tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
# Execute on node nodes
tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,-proxy}
ls /usr/local/bin/
kubelet --version
# All nodes execute, store CNI plugins
mkdir -p /opt/cni/bin
Generate Related Certificates (master nodes)#
Install cfssl Certificate Management Tool#
wget https://hub.gitmirror.com/https://github.com/cloudflare/cfssl/releases/download/v1.6.5/cfssl-certinfo_1.6.5_linux_amd64 -O /usr/local/bin/cfssl-certinfo
wget https://hub.gitmirror.com/https://github.com/cloudflare/cfssl/releases/download/v1.6.5/cfssljson_1.6.5_linux_amd64 -O /usr/local/bin/cfssljson
wget https://hub.gitmirror.com/https://github.com/cloudflare/cfssl/releases/download/v1.6.5/cfssl_1.6.5_linux_amd64 -O /usr/local/bin/cfssl
# Add executable permissions and check version
chmod +x /usr/local/bin/cfssl*
cfssl version
Generate ETCD Certificate#
mkdir -p /etc/etcd/ssl && cd /etc/etcd/ssl
# Create configuration file for generating certificates
cat > ca-config.json << EOF
{
"signing": {
"default": {
"expiry": "876000h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "876000h"
}
}
}
}
EOF
# Create certificate signing request file
cat > etcd-ca-csr.json << EOF
{
"CN": "etcd",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "etcd",
"OU": "Etcd Security"
}
],
"ca": {
"expiry": "876000h"
}
}
EOF
Issue ETCD's CA certificate and key
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
Create configuration file for generating ETCD server certificates
cat > etcd-csr.json << EOF
{
"CN": "etcd",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "etcd",
"OU": "Etcd Security"
}
]
}
EOF
Issue ETCD's server certificate
cfssl gencert -ca=/etc/etcd/ssl/etcd-ca.pem -ca-key=/etc/etcd/ssl/etcd-ca-key.pem -config=ca-config.json -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.0.111,192.168.0.112,192.168.0.113 -profile=kubernetes etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
Generate Kubernetes Certificates#
mkdir -p /etc/kubernetes/pki && cd /etc/kubernetes/pki
# Create certificate signing request file
cat > ca-csr.json << EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "Kubernetes",
"OU": "Kubernetes-manual"
}
],
"ca": {
"expiry": "876000h"
}
}
EOF
Issue Kubernetes's CA certificate and key
cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
Generate ApiServer Certificate#
# Create certificate signing request file
cat > apiserver-csr.json << EOF
{
"CN": "kube-apiserver",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "Kubernetes",
"OU": "Kubernetes-manual"
}
]
}
EOF
# Create configuration file for generating certificates
cat > ca-config.json << EOF
{
"signing": {
"default": {
"expiry": "876000h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "876000h"
}
}
}
}
EOF
Issue ApiServer's CA certificate and key
cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -hostname=10.96.0.1,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.0.111,192.168.0.112,192.168.0.113,192.168.0.114,192.168.0.115,192.168.0.116,192.168.0.117,192.168.0.118,192.168.0.119,192.168.0.120 -profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
10.96.0.1 is the first address of the service CIDR (10.96.0.0/12) and is the ClusterIP of the default kubernetes Service.
kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster, and kubernetes.default.svc.cluster.local are the default in-cluster DNS names of the Kubernetes API.
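The SANs embedded in the issued certificate can be checked with openssl; a quick verification:
openssl x509 -in /etc/kubernetes/pki/apiserver.pem -noout -text | grep -A1 "Subject Alternative Name"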
Generate ApiServer Aggregation Certificate#
# Create certificate signing request file
cat > front-proxy-ca-csr.json << EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"ca": {
"expiry": "876000h"
}
}
EOF
Issue ApiServer aggregation certificate and key
cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
Generate Client Certificate for ApiServer Aggregation Certificate#
# Create certificate signing request file
cat > front-proxy-client-csr.json << EOF
{
"CN": "front-proxy-client",
"key": {
"algo": "rsa",
"size": 2048
}
}
EOF
Issue client certificate and key for ApiServer aggregation certificate
cfssl gencert \
-ca=/etc/kubernetes/pki/front-proxy-ca.pem \
-ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem \
-config=ca-config.json \
-profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
Generate Controller-Manager Certificate#
# Create certificate signing request file
cat > manager-csr.json << EOF
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "system:kube-controller-manager",
"OU": "Kubernetes-manual"
}
]
}
EOF
Issue controller-manager certificate and key
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager
Generate kubeconfig configuration file for controller-manager
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:8443 \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
kubectl config set-context system:kube-controller-manager@kubernetes \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/etc/kubernetes/pki/controller-manager.pem \
--client-key=/etc/kubernetes/pki/controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
kubectl config use-context system:kube-controller-manager@kubernetes \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
Generate Kube-Scheduler Certificate#
# Create certificate signing request file
cat > scheduler-csr.json << EOF
{
"CN": "system:kube-scheduler",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "system:kube-scheduler",
"OU": "Kubernetes-manual"
}
]
}
EOF
Issue kube-scheduler certificate and key
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
Generate kube-scheduler specific kubeconfig configuration file
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:8443 \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate=/etc/kubernetes/pki/scheduler.pem \
--client-key=/etc/kubernetes/pki/scheduler-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-context system:kube-scheduler@kubernetes \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config use-context system:kube-scheduler@kubernetes \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Generate Admin Certificate#
# Create certificate signing request file
cat > admin-csr.json << EOF
{
"CN": "admin",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "system:masters",
"OU": "Kubernetes-manual"
}
]
}
EOF
Issue admin certificate and key
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
Generate admin specific kubeconfig configuration file
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:8443 \
--kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-credentials kubernetes-admin \
--client-certificate=/etc/kubernetes/pki/admin.pem \
--client-key=/etc/kubernetes/pki/admin-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-context kubernetes-admin@kubernetes \
--cluster=kubernetes \
--user=kubernetes-admin \
--kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig
Generate Kube-Proxy Certificate#
# Create certificate signing request file
cat > kube-proxy-csr.json << EOF
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "system:kube-proxy",
"OU": "Kubernetes-manual"
}
]
}
EOF
Issue kube-proxy certificate and key
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare /etc/kubernetes/pki/kube-proxy
Generate kube-proxy specific kubeconfig configuration file
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:8443 \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=/etc/kubernetes/pki/kube-proxy.pem \
--client-key=/etc/kubernetes/pki/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config set-context kube-proxy@kubernetes \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config use-context kube-proxy@kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
Create ServiceAccount Encryption Key#
openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
Add Component Configuration and Start Services#
ETCD Component (master nodes)#
(1) k8s-master01 configuration file
cat > /etc/etcd/etcd.config.yml << EOF
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.0.111:2380'
listen-client-urls: 'https://192.168.0.111:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.0.111:2380'
advertise-client-urls: 'https://192.168.0.111:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.0.111:2380,k8s-master02=https://192.168.0.112:2380,k8s-master03=https://192.168.0.113:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
peer-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
peer-client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF
(2) k8s-master02 configuration file
cat > /etc/etcd/etcd.config.yml << EOF
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.0.112:2380'
listen-client-urls: 'https://192.168.0.112:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.0.112:2380'
advertise-client-urls: 'https://192.168.0.112:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.0.111:2380,k8s-master02=https://192.168.0.112:2380,k8s-master03=https://192.168.0.113:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
peer-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
peer-client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF
(3) k8s-master03 configuration file
cat > /etc/etcd/etcd.config.yml << EOF
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.0.113:2380'
listen-client-urls: 'https://192.168.0.113:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.0.113:2380'
advertise-client-urls: 'https://192.168.0.113:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.0.111:2380,k8s-master02=https://192.168.0.112:2380,k8s-master03=https://192.168.0.113:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
peer-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
peer-client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF
(4) Create etcd service startup configuration file
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF
(5) Start etcd service
mkdir -p /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd.service
systemctl status etcd.service
(6) Health status of etcd cluster
export ETCDCTL_API=3
etcdctl --endpoints="192.168.0.111:2379,192.168.0.112:2379,192.168.0.113:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table
+--------------------+------------------+---------+-----------------+---------+--------+-----------------------+-------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+
| ENDPOINT | ID | VERSION | STORAGE VERSION | DB SIZE | IN USE | PERCENTAGE NOT IN USE | QUOTA | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | DOWNGRADE TARGET VERSION | DOWNGRADE ENABLED |
+--------------------+------------------+---------+-----------------+---------+--------+-----------------------+-------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+
| 192.168.0.111:2379 | 9c35553b47538310 | 3.6.4 | 3.6.0 | 20 kB | 16 kB | 20% | 0 B | true | false | 3 | 6 | 6 | | | false |
| 192.168.0.112:2379 | 545bae002651f913 | 3.6.4 | 3.6.0 | 20 kB | 16 kB | 20% | 0 B | false | false | 2 | 7 | 7 | | | false |
| 192.168.0.113:2379 | d7497b3a31d15f9e | 3.6.4 | 3.6.0 | 20 kB | 16 kB | 20% | 0 B | false | false | 2 | 7 | 7 | | | false |
+--------------------+------------------+---------+-----------------+---------+--------+-----------------------+-------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+
# Save etcd-related firewall policies so the etcd service can still start after a reboot
service iptables save
If the etcd cluster service fails to start after rebooting the server, clear the firewall rules, start the etcd service, and save the policy.
Install Nginx for High Availability#
(1) Download and install Nginx
wget https://nginx.org/download/nginx-1.28.0.tar.gz
tar xvf nginx-1.28.0.tar.gz
cd nginx-1.28.0
# Compile and install, where --with-stream enables Layer 4 proxy
./configure --with-stream --without-http --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module
make && make install
(2) Create configuration file
cat > /usr/local/nginx/conf/kube-nginx.conf <<EOF
worker_processes 1;
events {
worker_connections 1024;
}
stream {
upstream backend {
least_conn;
hash \$remote_addr consistent;
server 192.168.0.111:6443 max_fails=3 fail_timeout=30s;
server 192.168.0.112:6443 max_fails=3 fail_timeout=30s;
server 192.168.0.113:6443 max_fails=3 fail_timeout=30s;
}
server {
listen 127.0.0.1:8443;
proxy_connect_timeout 1s;
proxy_pass backend;
}
}
EOF
(3) Add nginx service
cat > /etc/systemd/system/kube-nginx.service <<EOF
[Unit]
Description=kube-apiserver nginx proxy
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=forking
ExecStartPre=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx -t
ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx
ExecReload=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx -s reload
PrivateTmp=true
Restart=always
RestartSec=5
StartLimitInterval=0
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
(4) Start the service
systemctl daemon-reload
systemctl enable --now kube-nginx.service
systemctl status kube-nginx.service
ApiServer Component#
# Create required directories on all nodes
mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes
(1) Add apiserver service on k8s-master01 node
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--v=2 \\
--allow-privileged=true \\
--bind-address=0.0.0.0 \\
--secure-port=6443 \\
--advertise-address=192.168.0.111 \\
--service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112 \\
--service-node-port-range=30000-32767 \\
--etcd-servers=https://192.168.0.111:2379,https://192.168.0.112:2379,https://192.168.0.113:2379 \\
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\
--etcd-certfile=/etc/etcd/ssl/etcd.pem \\
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
--client-ca-file=/etc/kubernetes/pki/ca.pem \\
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\
--service-account-key-file=/etc/kubernetes/pki/sa.pub \\
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\
--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\
--authorization-mode=Node,RBAC \\
--enable-bootstrap-token-auth=true \\
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\
--requestheader-allowed-names=aggregator \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
EOF
(2) Add apiserver service on k8s-master02 node
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--v=2 \\
--allow-privileged=true \\
--bind-address=0.0.0.0 \\
--secure-port=6443 \\
--advertise-address=192.168.0.112 \\
--service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112 \\
--service-node-port-range=30000-32767 \\
--etcd-servers=https://192.168.0.111:2379,https://192.168.0.112:2379,https://192.168.0.113:2379 \\
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\
--etcd-certfile=/etc/etcd/ssl/etcd.pem \\
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
--client-ca-file=/etc/kubernetes/pki/ca.pem \\
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\
--service-account-key-file=/etc/kubernetes/pki/sa.pub \\
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\
--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\
--authorization-mode=Node,RBAC \\
--enable-bootstrap-token-auth=true \\
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\
--requestheader-allowed-names=aggregator \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
EOF
(3) Add apiserver service on k8s-master03 node
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--v=2 \\
--allow-privileged=true \\
--bind-address=0.0.0.0 \\
--secure-port=6443 \\
--advertise-address=192.168.0.113 \\
--service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112 \\
--service-node-port-range=30000-32767 \\
--etcd-servers=https://192.168.0.111:2379,https://192.168.0.112:2379,https://192.168.0.113:2379 \\
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\
--etcd-certfile=/etc/etcd/ssl/etcd.pem \\
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
--client-ca-file=/etc/kubernetes/pki/ca.pem \\
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\
--service-account-key-file=/etc/kubernetes/pki/sa.pub \\
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\
--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\
--authorization-mode=Node,RBAC \\
--enable-bootstrap-token-auth=true \\
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\
--requestheader-allowed-names=aggregator \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
EOF
(4) Start kube-apiserver service
systemctl daemon-reload
systemctl enable --now kube-apiserver
systemctl status kube-apiserver
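Each apiserver instance can be quickly verified via its unauthenticated health endpoints (allowed by the built-in system:public-info-viewer binding); a minimal check:
# Check the local apiserver instance directly
curl -sk https://127.0.0.1:6443/healthz && echo
# Check through the local kube-nginx proxy that the other components use
curl -sk https://127.0.0.1:8443/healthz && echo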
Controller-Manager Component#
Add controller-manager service on all master nodes
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--v=2 \\
--bind-address=0.0.0.0 \\
--root-ca-file=/etc/kubernetes/pki/ca.pem \\
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \\
--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\
--service-account-private-key-file=/etc/kubernetes/pki/sa.key \\
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \\
--leader-elect=true \\
--use-service-account-credentials=true \\
--node-monitor-grace-period=40s \\
--node-monitor-period=5s \\
--controllers=*,bootstrapsigner,tokencleaner \\
--allocate-node-cidrs=true \\
--service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112 \\
--cluster-cidr=172.16.0.0/12,fc00:2222::/112 \\
--node-cidr-mask-size-ipv4=24 \\
--node-cidr-mask-size-ipv6=120 \\
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
Start controller-manager service
systemctl daemon-reload
systemctl enable --now kube-controller-manager
systemctl status kube-controller-manager
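The controller-manager can be verified through its health endpoint and its leader-election lease; a minimal sketch (kube-controller-manager's secure port defaults to 10257, where /healthz is allowed by default):
# Health endpoint of the local controller-manager instance
curl -sk https://127.0.0.1:10257/healthz && echo
# Confirm that one instance has acquired the leader-election lease
kubectl get leases -n kube-system kube-controller-manager --kubeconfig=/etc/kubernetes/admin.kubeconfig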
Kube-Scheduler Component (master nodes)#
Add kube-scheduler service on all master nodes
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes