A Kubernetes cluster I deployed a few years ago needs to be brought up to date. Since I have not documented the process for a long time, I had grown a little unfamiliar with the deployment method, so this post walks through deploying the new version of the cluster step by step.

1. Environment

1.1.0 Cluster environment

Hostname | IP | Role | Software
k8s-master-01 | 10.88.12.60 | Master | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, haproxy, containerd
k8s-master-02 | 10.88.12.61 | Master | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, containerd
k8s-master-03 | 10.88.12.62 | Master | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, containerd
k8s-worker-01 | 10.88.12.63 | Worker | kubelet, containerd
k8s-worker-02 | 10.88.12.64 | Worker | kubelet, containerd
k8s-worker-03 | 10.88.12.65 | Worker | kubelet, containerd
- | 10.88.12.100 | VIP | Usually a floating address managed by keepalived; for simplicity it is configured here directly on an extra network card

You may have noticed that kube-proxy is missing from the component list above: this cluster uses cilium in place of kube-proxy, although kube-proxy is still deployed during installation. Also note that every node must have the runc binary installed.
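
For reference, a minimal sketch of installing the runc binary on a node, assuming the runc.amd64 release file has already been downloaded to the current directory:

Terminal window
# Install runc with execute permissions and verify it (runc.amd64 is the file name used on the runc releases page)
koevn@k8s-master-01:~$ sudo install -m 755 runc.amd64 /usr/local/sbin/runc
koevn@k8s-master-01:~$ runc --version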

1.2.0 Network

Type | IP/Mask | Description
Host | 10.88.12.0/24 | Physical or virtual machine network
Cluster | 10.90.0.0/16 | Cluster Services network
Pod | 10.100.0.0/16 | Cluster Pod network

1.3.0 Software Version

Software | Version
cfssl | 1.6.5
containerd | v2.1.0
crictl | 0.1.0
runc | 1.3.0
helm | v3.18.0
os_kernel | 6.1.0-29

Other basic system preparation, such as installing dependency packages, tuning kernel parameters, raising file descriptor limits, enabling kernel modules (such as ipvs), synchronizing time across cluster hosts, disabling SELinux, and disabling swap, can be looked up and configured as needed, so it is not covered here.

2. Install the basic components of the Kubernetes cluster Master

2.1.0 Deploy containerd

Execute the following on all nodes in the cluster.

Terminal window
# Extract to /usr/local/ (the archive's bin/ contents end up in /usr/local/bin/)
koevn@k8s-master-01:~$ sudo tar -xvf containerd-2.1.0-linux-amd64.tar.gz -C /usr/local/
# Create a startup service file
koevn@k8s-master-01:~$ sudo cat > /etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target
[Service]
ExecStart=/usr/local/bin/containerd --config /etc/containerd/config.toml
Delegate=yes
KillMode=process
Restart=always
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
[Install]
WantedBy=multi-user.target
EOF
# Create containerd configuration file
koevn@k8s-master-01:~$ sudo mkdir -pv /etc/containerd/certs.d/harbor.koevn.com
koevn@k8s-master-01:~$ containerd config default | sudo tee /etc/containerd/config.toml
koevn@k8s-master-01:~$ sudo touch /etc/containerd/certs.d/harbor.koevn.com/hosts.toml
# Modify /etc/containerd/config.toml
[plugins]
  [plugins.'io.containerd.cri.v1.images']
    snapshotter = 'overlayfs'
    disable_snapshot_annotations = true
    discard_unpacked_layers = false
    max_concurrent_downloads = 3
    image_pull_progress_timeout = '5m0s'
    image_pull_with_sync_fs = false
    stats_collect_period = 10
    [plugins.'io.containerd.cri.v1.images'.pinned_images]
      sandbox = 'harbor.koevn.com/k8s/pause:3.10' # Point this image wherever you like; here it comes from a private registry
    [plugins.'io.containerd.cri.v1.images'.registry]
      config_path = "/etc/containerd/certs.d" # Directory holding the per-registry configuration
    [plugins.'io.containerd.cri.v1.images'.image_decryption]
      key_model = 'node'
# Configure /etc/containerd/certs.d/harbor.koevn.com/hosts.toml
server = "https://harbor.koevn.com"
[host."https://harbor.koevn.com"]
  capabilities = ["pull", "resolve", "push"]
  skip_verify = true
  username = "admin"
  password = "K123456"
# Reload systemd management
koevn@k8s-master-01:~$ sudo systemctl daemon-reload
# Start the containerd service
koevn@k8s-master-01:~$ sudo systemctl start containerd.service
# Check the containerd service status
koevn@k8s-master-01:~$ sudo systemctl status containerd.service
# Enable containerd to start automatically at boot
koevn@k8s-master-01:~$ sudo systemctl enable --now containerd.service
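
As an optional sanity check, confirm that the daemon is answering on its socket:

Terminal window
# Both client and server versions should be reported if containerd is healthy
koevn@k8s-master-01:~$ sudo ctr version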

2.1.1 Configuring crictl

Terminal window
# Unzip
koevn@k8s-master-01:~$ sudo tar xf crictl-v*-linux-amd64.tar.gz -C /usr/sbin/
# Generate configuration files
koevn@k8s-master-01:~$ sudo cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF
# Check
koevn@k8s-master-01:~$ sudo crictl info

2.2.0 Deploy etcd

This service is deployed on three Master nodes

2.2.1 Install etcd env

Terminal window
# Move the downloaded cfssl tool to the /usr/local/bin directory
koevn@k8s-master-01:/tmp$ sudo mv cfssl_1.6.5_linux_amd64 /usr/local/bin/cfssl
koevn@k8s-master-01:/tmp$ sudo mv cfssljson_1.6.5_linux_amd64 /usr/local/bin/cfssljson
koevn@k8s-master-01:/tmp$ sudo mv cfssl-certinfo_1.6.5_linux_amd64 /usr/local/bin/cfssl-certinfo
koevn@k8s-master-01:/tmp$ sudo chmod +x /usr/local/bin/cfss*
# Create etcd service directory
koevn@k8s-master-01:~$ sudo mkdir -pv /opt/etcd/{bin,cert,config}
# Unzip the etcd installation file
koevn@k8s-master-01:/tmp$ sudo tar -xvf etcd*.tar.gz && sudo mv etcd-*/etcd etcd-*/etcdctl etcd-*/etcdutl /opt/etcd/bin/
# Add etcd system environment variables
koevn@k8s-master-01:/tmp$ echo "export PATH=/opt/etcd/bin:\$PATH" | sudo tee /etc/profile.d/etcd.sh && source /etc/profile.d/etcd.sh
# Create a certificate generation directory
koevn@k8s-master-01:/tmp$ sudo mkdir -pv /opt/cert/etcd

2.2.2 Generate etcd certificate

Create a ca configuration file

Terminal window
koevn@k8s-master-01:/tmp$ cd /opt/cert
koevn@k8s-master-01:/opt/cert$ sudo cat > ca-config.json << EOF
{
"signing": {
"default": {
"expiry": "876000h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "876000h"
}
}
}
}
EOF

Create a CA certificate signing request

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo cat > etcd-ca-csr.json << EOF
{
"CN": "etcd",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"ST": "California",
"L": "Los Angeles",
"O": "Koevn",
"OU": "Koevn Security"
}
],
"ca": {
"expiry": "876000h"
}
}
EOF

Generate etcd ca certificate

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /opt/cert/etcd/etcd-ca

Create etcd request certificate

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo cat > etcd-csr.json << EOF
{
"CN": "etcd",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"ST": "California",
"L": "Los Angeles",
"O": "Koevn",
"OU": "Koevn Security"
}
]
}
EOF

Generate etcd certificate

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo cfssl gencert \
-ca=/opt/cert/etcd/etcd-ca.pem \
-ca-key=/opt/cert/etcd/etcd-ca-key.pem \
-config=ca-config.json \
-hostname=127.0.0.1,k8s-master-01,k8s-master-02,k8s-master-03,10.88.12.60,10.88.12.61,10.88.12.62 \
-profile=kubernetes \
etcd-csr.json | cfssljson -bare /opt/cert/etcd/etcd

Copy the etcd*.pem certificates generated in the /opt/cert/etcd directory to the /opt/etcd/cert/ directory.
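A minimal sketch of that copy, using the three files referenced by the etcd configuration below:

Terminal window
koevn@k8s-master-01:~$ sudo cp /opt/cert/etcd/{etcd-ca.pem,etcd.pem,etcd-key.pem} /opt/etcd/cert/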

2.2.3 Create etcd configuration file

Terminal window
koevn@k8s-master-01:~$ sudo cat > /opt/etcd/config/etcd.config.yml << EOF
name: 'k8s-master-01' # Note modify the name of each node
data-dir: /data/etcd/data
wal-dir: /data/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.88.12.60:2380' # Modify the IP address that each node listens on
listen-client-urls: 'https://10.88.12.60:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.88.12.60:2380'
advertise-client-urls: 'https://10.88.12.60:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master-01=https://10.88.12.60:2380,k8s-master-02=https://10.88.12.61:2380,k8s-master-03=https://10.88.12.62:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/opt/etcd/cert/etcd.pem'
  key-file: '/opt/etcd/cert/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/opt/etcd/cert/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/opt/etcd/cert/etcd.pem'
  key-file: '/opt/etcd/cert/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/opt/etcd/cert/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

View the etcd service directory structure

Terminal window
koevn@k8s-master-01:~$ sudo tree /opt/etcd/
/opt/etcd/
├── bin
│   ├── etcd
│   ├── etcdctl
│   └── etcdutl
├── cert
│   ├── etcd-ca.pem
│   ├── etcd-key.pem
│   └── etcd.pem
└── config
    └── etcd.config.yml
4 directories, 7 files

Then package etcd and distribute it to other Master nodes

Terminal window
koevn@k8s-master-01:~$ sudo tar -czvf /tmp/etcd.tar.gz /opt/etcd
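
One way to do the distribution, sketched with scp (host names as in the table in section 1.1.0); remember to adjust name, listen-peer-urls, listen-client-urls, initial-advertise-peer-urls and advertise-client-urls in etcd.config.yml on each node afterwards:

Terminal window
# Copy the archive to the other two Master nodes, then extract it to / on each of them
koevn@k8s-master-01:~$ for node in k8s-master-02 k8s-master-03; do scp /tmp/etcd.tar.gz ${node}:/tmp/; done
koevn@k8s-master-02:~$ sudo tar -xzvf /tmp/etcd.tar.gz -C /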

2.2.4 Each etcd node configures and starts the service

Terminal window
koevn@k8s-master-01:~$ sudo cat > /etc/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target
[Service]
Type=notify
ExecStart=/opt/etcd/bin/etcd --config-file=/opt/etcd/config/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF

Load the systemd unit, start the etcd service, check its status, and enable it at boot:

Terminal window
koevn@k8s-master-01:~$ sudo systemctl daemon-reload
koevn@k8s-master-01:~$ sudo systemctl start etcd.service
koevn@k8s-master-01:~$ sudo systemctl status etcd.service
koevn@k8s-master-01:~$ sudo systemctl enable --now etcd.service

2.2.5 Check etcd status

Terminal window
koevn@k8s-master-01:~$ export ETCDCTL_API=3
root@k8s-master-01:~# etcdctl --endpoints="10.88.12.60:2379,10.88.12.61:2379,10.88.12.62:2379" \
> --cacert=/opt/etcd/cert/etcd-ca.pem \
> --cert=/opt/etcd/cert/etcd.pem \
> --key=/opt/etcd/cert/etcd-key.pem \
> endpoint status \
> --write-out=table
+------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 10.88.12.60:2379 | c1f013862d84cb12 | 3.5.21 | 35 MB | false | false | 158 | 8434769 | 8434769 | |
| 10.88.12.61:2379 | cea2c8779e4914d0 | 3.5.21 | 36 MB | false | false | 158 | 8434769 | 8434769 | |
| 10.88.12.62:2379 | df3b2276b87896db | 3.5.21 | 33 MB | true | false | 158 | 8434769 | 8434769 | |
+------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
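
Optionally, the same etcdctl flags can be reused for a health probe of all three members:

Terminal window
root@k8s-master-01:~# etcdctl --endpoints="10.88.12.60:2379,10.88.12.61:2379,10.88.12.62:2379" \
> --cacert=/opt/etcd/cert/etcd-ca.pem \
> --cert=/opt/etcd/cert/etcd.pem \
> --key=/opt/etcd/cert/etcd-key.pem \
> endpoint health --write-out=table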

2.3.0 Create the k8s certificate storage directories

Terminal window
koevn@k8s-master-01:~$ sudo mkdir -pv /opt/cert/kubernetes/pki
koevn@k8s-master-01:~$ sudo mkdir -pv /opt/kubernetes/cfg # Cluster kubeconfig file directory
koevn@k8s-master-01:~$ cd /opt/cert

2.3.1 Generate k8s ca certificate

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo cat > ca-csr.json << EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"ST": "California",
"L": "Los Angeles",
"O": "Kubernetes",
"OU": "Kubernetes-manual"
}
],
"ca": {
"expiry": "876000h"
}
}
EOF

Generate CA certificate

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo cfssl gencert -initca ca-csr.json | cfssljson -bare /opt/cert/kubernetes/pki/ca

2.3.2 Generate apiserver certificate

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo cat > apiserver-csr.json << EOF
{
"CN": "kube-apiserver",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"ST": "California",
"L": "Los Angeles",
"O": "Kubernetes",
"OU": "Kubernetes-manual"
}
]
}
EOF
koevn@k8s-master-01:/opt/cert$ sudo cfssl gencert \
-ca=/opt/cert/kubernetes/pki/ca.pem \
-ca-key=/opt/cert/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-hostname=10.90.0.1,10.88.12.100,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,k8s.koevn.com,10.88.12.60,10.88.12.61,10.88.12.62,10.88.12.63,10.88.12.64,10.88.12.65,10.88.12.66,10.88.12.67,10.88.12.68 \
-profile=kubernetes apiserver-csr.json | cfssljson -bare /opt/cert/kubernetes/pki/apiserver
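
If you want to double-check which SANs ended up in the apiserver certificate, a quick optional inspection with openssl:

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo openssl x509 -in /opt/cert/kubernetes/pki/apiserver.pem -noout -text | grep -A1 "Subject Alternative Name"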

2.3.3 Generate apiserver aggregate certificate

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo cat > front-proxy-ca-csr.json << EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"ca": {
"expiry": "876000h"
}
}
EOF
koevn@k8s-master-01:/opt/cert$ sudo cfssl gencert \
-initca front-proxy-ca-csr.json | cfssljson -bare /opt/cert/kubernetes/pki/front-proxy-ca
koevn@k8s-master-01:/opt/cert$ sudo cat > front-proxy-client-csr.json << EOF
{
"CN": "front-proxy-client",
"key": {
"algo": "rsa",
"size": 2048
}
}
EOF
koevn@k8s-master-01:/opt/cert$ sudo cfssl gencert \
-ca=/opt/cert/kubernetes/pki/front-proxy-ca.pem \
-ca-key=/opt/cert/kubernetes/pki/front-proxy-ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
front-proxy-client-csr.json | cfssljson -bare /opt/cert/kubernetes/pki/front-proxy-client

2.3.4 Generate controller-manage certificate

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo cat > manager-csr.json << EOF
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "system:kube-controller-manager",
"OU": "Kubernetes-manual"
}
]
}
EOF
koevn@k8s-master-01:/opt/cert$ sudo cfssl gencert \
-ca=/opt/cert/kubernetes/pki/ca.pem \
-ca-key=/opt/cert/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
manager-csr.json | cfssljson -bare /opt/cert/kubernetes/pki/controller-manager
Set the cluster item

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo kubectl config set-cluster kubernetes \
--certificate-authority=/opt/cert/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://10.88.12.100:9443 \
--kubeconfig=/opt/kubernetes/cfg/controller-manager.kubeconfig

⚠️ Note: The --server value in the cluster entry must point to the keepalived high-availability VIP; port 9443 is the port that haproxy listens on.

Set the context entry

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo kubectl config set-context system:kube-controller-manager@kubernetes \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=/opt/kubernetes/cfg/controller-manager.kubeconfig

Set the user entry

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/opt/cert/kubernetes/pki/controller-manager.pem \
--client-key=/opt/cert/kubernetes/pki/controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=/opt/kubernetes/cfg/controller-manager.kubeconfig

Set the default context

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo kubectl config use-context system:kube-controller-manager@kubernetes \
--kubeconfig=/opt/kubernetes/cfg/controller-manager.kubeconfig

2.3.5 Generate kube-scheduler certificate

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo cat > scheduler-csr.json << EOF
{
"CN": "system:kube-scheduler",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"ST": "California",
"L": "Los Angeles",
"O": "system:kube-scheduler",
"OU": "Kubernetes-manual"
}
]
}
EOF
koevn@k8s-master-01:/opt/cert$ sudo cfssl gencert \
-ca=/opt/cert/kubernetes/pki/ca.pem \
-ca-key=/opt/cert/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
scheduler-csr.json | cfssljson -bare /opt/cert/kubernetes/pki/scheduler
koevn@k8s-master-01:/opt/cert$ sudo kubectl config set-cluster kubernetes \
--certificate-authority=/opt/cert/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://10.88.12.100:9443 \
--kubeconfig=/opt/kubernetes/cfg/scheduler.kubeconfig
koevn@k8s-master-01:/opt/cert$ sudo kubectl config set-credentials system:kube-scheduler \
--client-certificate=/opt/cert/kubernetes/pki/scheduler.pem \
--client-key=/opt/cert/kubernetes/pki/scheduler-key.pem \
--embed-certs=true \
--kubeconfig=/opt/kubernetes/cfg/scheduler.kubeconfig
koevn@k8s-master-01:/opt/cert$ sudo kubectl config set-context system:kube-scheduler@kubernetes \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=/opt/kubernetes/cfg/scheduler.kubeconfig
koevn@k8s-master-01:/opt/cert$ sudo kubectl config use-context system:kube-scheduler@kubernetes \
--kubeconfig=/opt/kubernetes/cfg/scheduler.kubeconfig

2.3.6 Generate admin certificate

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo cat > admin-csr.json << EOF
{
"CN": "admin",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"ST": "California",
"L": "Los Angeles",
"O": "system:masters",
"OU": "Kubernetes-manual"
}
]
}
EOF
koevn@k8s-master-01:/opt/cert$ sudo cfssl gencert \
-ca=/opt/cert/kubernetes/pki/ca.pem \
-ca-key=/opt/cert/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare /opt/cert/kubernetes/pki/admin
koevn@k8s-master-01:/opt/cert$ sudo kubectl config set-cluster kubernetes \
--certificate-authority=/opt/cert/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://10.88.12.100:9443 \
--kubeconfig=/opt/kubernetes/cfg/admin.kubeconfig
koevn@k8s-master-01:/opt/cert$ sudo kubectl config set-credentials kubernetes-admin \
--client-certificate=/opt/cert/kubernetes/pki/admin.pem \
--client-key=/opt/cert/kubernetes/pki/admin-key.pem \
--embed-certs=true \
--kubeconfig=/opt/kubernetes/cfg/admin.kubeconfig
koevn@k8s-master-01:/opt/cert$ sudo kubectl config set-context kubernetes-admin@kubernetes \
--cluster=kubernetes \
--user=kubernetes-admin \
--kubeconfig=/opt/kubernetes/cfg/admin.kubeconfig
koevn@k8s-master-01:/opt/cert$ sudo kubectl config use-context kubernetes-admin@kubernetes \
--kubeconfig=/opt/kubernetes/cfg/admin.kubeconfig

2.3.7 Generate kube-proxy certificate (optional)

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo cat > kube-proxy-csr.json << EOF
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"ST": "California",
"L": "Los Angeles",
"O": "system:kube-proxy",
"OU": "Kubernetes-manual"
}
]
}
EOF
koevn@k8s-master-01:/opt/cert$ sudo cfssl gencert \
-ca=/opt/cert/kubernetes/pki/ca.pem \
-ca-key=/opt/cert/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare /opt/cert/kubernetes/pki/kube-proxy
koevn@k8s-master-01:/opt/cert$ sudo kubectl config set-cluster kubernetes \
--certificate-authority=/opt/cert/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://10.88.12.100:9443 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
koevn@k8s-master-01:/opt/cert$ sudo kubectl config set-credentials kube-proxy \
--client-certificate=/opt/cert/kubernetes/pki/kube-proxy.pem \
--client-key=/opt/cert/kubernetes/pki/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
koevn@k8s-master-01:/opt/cert$ sudo kubectl config set-context kube-proxy@kubernetes \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
koevn@k8s-master-01:/opt/cert$ sudo kubectl config use-context kube-proxy@kubernetes \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig

2.3.8 Create a ServiceAccount Key

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo openssl genrsa -out /opt/cert/kubernetes/pki/sa.key 2048
koevn@k8s-master-01:/opt/cert$ sudo openssl rsa -in /opt/cert/kubernetes/pki/sa.key \
-pubout -out /opt/cert/kubernetes/pki/sa.pub

2.3.9 View the generated k8s component certificates

Terminal window
koevn@k8s-master-01:~$ ls -l /opt/cert/kubernetes/pki
-rw-r--r-- 1 root root 1025 May 20 17:11 admin.csr
-rw------- 1 root root 1679 May 20 17:11 admin-key.pem
-rw-r--r-- 1 root root 1444 May 20 17:11 admin.pem
-rw-r--r-- 1 root root 1383 May 20 16:25 apiserver.csr
-rw------- 1 root root 1675 May 20 16:25 apiserver-key.pem
-rw-r--r-- 1 root root 1777 May 20 16:25 apiserver.pem
-rw-r--r-- 1 root root 1070 May 20 11:58 ca.csr
-rw------- 1 root root 1679 May 20 11:58 ca-key.pem
-rw-r--r-- 1 root root 1363 May 20 11:58 ca.pem
-rw-r--r-- 1 root root 1082 May 20 16:38 controller-manager.csr
-rw------- 1 root root 1679 May 20 16:38 controller-manager-key.pem
-rw-r--r-- 1 root root 1501 May 20 16:38 controller-manager.pem
-rw-r--r-- 1 root root 940 May 20 16:26 front-proxy-ca.csr
-rw------- 1 root root 1675 May 20 16:26 front-proxy-ca-key.pem
-rw-r--r-- 1 root root 1094 May 20 16:26 front-proxy-ca.pem
-rw-r--r-- 1 root root 903 May 20 16:30 front-proxy-client.csr
-rw------- 1 root root 1679 May 20 16:30 front-proxy-client-key.pem
-rw-r--r-- 1 root root 1188 May 20 16:30 front-proxy-client.pem
-rw-r--r-- 1 root root 1045 May 20 18:30 kube-proxy.csr
-rw------- 1 root root 1679 May 20 18:30 kube-proxy-key.pem
-rw-r--r-- 1 root root 1464 May 20 18:30 kube-proxy.pem
-rw------- 1 root root 1704 May 21 09:21 sa.key
-rw-r--r-- 1 root root 451 May 21 09:21 sa.pub
-rw-r--r-- 1 root root 1058 May 20 17:00 scheduler.csr
-rw------- 1 root root 1679 May 20 17:00 scheduler-key.pem
-rw-r--r-- 1 root root 1476 May 20 17:00 scheduler.pem
koevn@k8s-master-01:~$ ls /opt/cert/kubernetes/pki/ | wc -l # View Total
26

2.3.10 Install the kubernetes components and distribute certificates (all Master nodes)

Terminal window
koevn@k8s-master-01:~$ sudo mkdir -pv /opt/kubernetes/{bin,etc,cert,cfg,manifests}
koevn@k8s-master-01:~$ sudo tree /opt/kubernetes # View file hierarchy
/opt/kubernetes
├── bin
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   ├── kubectl
│   ├── kubelet
│   ├── kube-proxy
│   └── kube-scheduler
├── cert
│   ├── admin-key.pem
│   ├── admin.pem
│   ├── apiserver-key.pem
│   ├── apiserver.pem
│   ├── ca-key.pem
│   ├── ca.pem
│   ├── controller-manager-key.pem
│   ├── controller-manager.pem
│   ├── front-proxy-ca.pem
│   ├── front-proxy-client-key.pem
│   ├── front-proxy-client.pem
│   ├── kube-proxy-key.pem
│   ├── kube-proxy.pem
│   ├── sa.key
│   ├── sa.pub
│   ├── scheduler-key.pem
│   └── scheduler.pem
├── cfg
│   ├── admin.kubeconfig
│   ├── bootstrap-kubelet.kubeconfig
│   ├── bootstrap.secret.yaml
│   ├── controller-manager.kubeconfig
│   ├── kubelet.kubeconfig
│   ├── kube-proxy.kubeconfig
│   └── scheduler.kubeconfig
├── etc
│   ├── kubelet-conf.yml
│   └── kube-proxy.yaml
└── manifests
6 directories, 32 files
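
The copy itself is not shown above; the following is only a sketch of one way to do it, assuming the kubernetes server binary tarball has been downloaded and unpacked in the current directory (it extracts to kubernetes/server/bin):

Terminal window
# Copy the binaries into place on this node
koevn@k8s-master-01:~$ sudo cp kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubelet,kube-proxy,kubectl} /opt/kubernetes/bin/
# Copy the certificates and ServiceAccount keys generated earlier
koevn@k8s-master-01:~$ sudo cp /opt/cert/kubernetes/pki/{*.pem,sa.key,sa.pub} /opt/kubernetes/cert/
# Package the directory and distribute it to the other Master nodes, then extract it to / on each of them
koevn@k8s-master-01:~$ sudo tar -czvf /tmp/kubernetes.tar.gz /opt/kubernetes
koevn@k8s-master-01:~$ for node in k8s-master-02 k8s-master-03; do scp /tmp/kubernetes.tar.gz ${node}:/tmp/; done
koevn@k8s-master-02:~$ sudo tar -xzvf /tmp/kubernetes.tar.gz -C /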

2.4.0 Deploy Haproxy load balancing

⚠️ Note: This step normally provides apiserver high availability with a combination of haproxy and keepalived. To keep things simple here, I skip keepalived and instead add an extra network card on the Master-01 node carrying the stand-in VIP address. In a production environment it is best to use haproxy together with keepalived for proper redundancy.

2.4.1 Install Haproxy

Terminal window
koevn@k8s-master-01:~$ sudo apt install haproxy

2.4.2 Modify the haproxy configuration file

Terminal window
koevn@k8s-master-01:~$ sudo cat >/etc/haproxy/haproxy.cfg << EOF
global
maxconn 2000
ulimit-n 16384
log 127.0.0.1 local0 err
stats timeout 30s
defaults
log global
mode http
option httplog
timeout connect 5000
timeout client 50000
timeout server 50000
timeout http-request 15s
timeout http-keep-alive 15s
listen stats
mode http
bind 0.0.0.0:9999
stats enable
log global
stats uri /haproxy-status
stats auth admin:K123456
frontend monitor-in
bind *:33305
mode http
option httplog
monitor-uri /monitor
frontend k8s-master
bind 0.0.0.0:9443
bind 127.0.0.1:9443
mode tcp
option tcplog
tcp-request inspect-delay 5s
default_backend k8s-master
backend k8s-master
mode tcp
option tcplog
option tcp-check
balance roundrobin
default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
server k8s-master-01 10.88.12.60:6443 check
server k8s-master-02 10.88.12.61:6443 check
server k8s-master-03 10.88.12.62:6443 check
EOF

Start the service

Terminal window
koevn@k8s-master-01:~$ sudo systemctl daemon-reload
koevn@k8s-master-01:~$ sudo systemctl enable --now haproxy.service

Visit http://10.88.12.100:9999/haproxy-status in a browser to view the haproxy status.
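
The monitor frontend defined in the configuration can also be probed from the command line:

Terminal window
# The monitor-uri on port 33305 should return HTTP 200 while haproxy is up
koevn@k8s-master-01:~$ curl -i http://10.88.12.100:33305/monitor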

2.5.0 Configure apiserver service (all Master nodes)

Configure the kube-apiserver unit and start the service. Note: on each Master node change --advertise-address to that node's own IP, and set --service-cluster-ip-range to your cluster Services network segment.

Terminal window
koevn@k8s-master-01:~$ sudo cat > /etc/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/opt/kubernetes/bin/kube-apiserver \
--v=2 \
--allow-privileged=true \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--advertise-address=10.88.12.60 \
--service-cluster-ip-range=10.90.0.0/16 \
--service-node-port-range=30000-32767 \
--etcd-servers=https://10.88.12.60:2379,https://10.88.12.61:2379,https://10.88.12.62:2379 \
--etcd-cafile=/opt/etcd/cert/etcd-ca.pem \
--etcd-certfile=/opt/etcd/cert/etcd.pem \
--etcd-keyfile=/opt/etcd/cert/etcd-key.pem \
--client-ca-file=/opt/kubernetes/cert/ca.pem \
--tls-cert-file=/opt/kubernetes/cert/apiserver.pem \
--tls-private-key-file=/opt/kubernetes/cert/apiserver-key.pem \
--kubelet-client-certificate=/opt/kubernetes/cert/apiserver.pem \
--kubelet-client-key=/opt/kubernetes/cert/apiserver-key.pem \
--service-account-key-file=/opt/kubernetes/cert/sa.pub \
--service-account-signing-key-file=/opt/kubernetes/cert/sa.key \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--requestheader-client-ca-file=/opt/kubernetes/cert/front-proxy-ca.pem \
--proxy-client-cert-file=/opt/kubernetes/cert/front-proxy-client.pem \
--proxy-client-key-file=/opt/kubernetes/cert/front-proxy-client-key.pem \
--requestheader-allowed-names=aggregator \
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-username-headers=X-Remote-User \
--enable-aggregator-routing=true
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
EOF

Reload systemd management

Terminal window
koevn@k8s-master-01:~$ sudo systemctl daemon-reload

Start kube-apiserver

Terminal window
koevn@k8s-master-01:~$ sudo systemctl enable --now kube-apiserver.service

Check the kube-apiserver service status

Terminal window
koevn@k8s-master-01:~$ sudo systemctl status kube-apiserver.service
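
Two optional checks, sketched here: confirm the secure port is listening, and hit the health endpoint (it should be readable anonymously via the default system:public-info-viewer binding, assuming anonymous auth is left at its default):

Terminal window
koevn@k8s-master-01:~$ sudo ss -ntlp | grep 6443
koevn@k8s-master-01:~$ curl -k https://10.88.12.60:6443/healthz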

2.6.0 Configure kube-controller-manager service (all Master nodes)

Configure the kube-controller-manager unit and start the service. Note: adjust --service-cluster-ip-range to your cluster Services network, --cluster-cidr to your Pod network, and --node-cidr-mask-size-ipv4 (here a /24 per node) as needed.

Terminal window
koevn@k8s-master-01:~$ sudo cat > /etc/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/opt/kubernetes/bin/kube-controller-manager \
--v=2 \
--root-ca-file=/opt/kubernetes/cert/ca.pem \
--cluster-signing-cert-file=/opt/kubernetes/cert/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/cert/ca-key.pem \
--service-account-private-key-file=/opt/kubernetes/cert/sa.key \
--kubeconfig=/opt/kubernetes/cfg/controller-manager.kubeconfig \
--leader-elect=true \
--use-service-account-credentials=true \
--node-monitor-grace-period=40s \
--node-monitor-period=5s \
--controllers=*,bootstrapsigner,tokencleaner \
--allocate-node-cidrs=true \
--service-cluster-ip-range=10.90.0.0/16 \
--cluster-cidr=10.100.0.0/16 \
--node-cidr-mask-size-ipv4=24 \
--requestheader-client-ca-file=/opt/kubernetes/cert/front-proxy-ca.pem
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF

Reload systemd management

Terminal window
koevn@k8s-master-01:~$ sudo systemctl daemon-reload

Start kube-controller-manager

Terminal window
koevn@k8s-master-01:~$ sudo systemctl enable --now kube-controller-manager.service

Check the kube-controller-manager service status

Terminal window
koevn@k8s-master-01:~$ sudo systemctl status kube-controller-manager.service

2.7.0 Configure kube-scheduler service (all Master nodes)

Configure kube-scheduler service and start the service

Terminal window
koevn@k8s-master-01:~$ sudo cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/opt/kubernetes/bin/kube-scheduler \
--v=2 \
--bind-address=0.0.0.0 \
--leader-elect=true \
--kubeconfig=/opt/kubernetes/cfg/scheduler.kubeconfig
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF

Reload systemd management

Terminal window
koevn@k8s-master-01:~$ sudo systemctl daemon-reload

Start kube-scheduler

Terminal window
koevn@k8s-master-01:~$ sudo systemctl enable --now kube-scheduler.service

Check the kube-scheduler service status

Terminal window
koevn@k8s-master-01:~$ sudo systemctl status kube-scheduler.service

2.8.0 TLS Bootstrapping Configuration

Create cluster configuration items

Terminal window
koevn@k8s-master-01:~$ sudo kubectl config set-cluster kubernetes \
--certificate-authority=/opt/cert/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://10.88.12.100:9443 \
--kubeconfig=/opt/kubernetes/cfg/bootstrap-kubelet.kubeconfig

Creating a token

Terminal window
koevn@k8s-master-01:~$ sudo echo "$(head -c 6 /dev/urandom | md5sum | head -c 6)"."$(head -c 16 /dev/urandom | md5sum | head -c 16)"
79b841.0677456fb3b47289

Setting Credentials

Terminal window
koevn@k8s-master-01:~$ sudo kubectl config set-credentials tls-bootstrap-token-user \
--token=79b841.0677456fb3b47289 \
--kubeconfig=/opt/kubernetes/cfg/bootstrap-kubelet.kubeconfig

Setting context information

Terminal window
koevn@k8s-master-01:~$ sudo kubectl config set-context tls-bootstrap-token-user@kubernetes \
--cluster=kubernetes \
--user=tls-bootstrap-token-user \
--kubeconfig=/opt/kubernetes/cfg/bootstrap-kubelet.kubeconfig
koevn@k8s-master-01:~$ sudo kubectl config use-context tls-bootstrap-token-user@kubernetes \
--kubeconfig=/opt/kubernetes/cfg/bootstrap-kubelet.kubeconfig

Configure the current user to manage the cluster

Terminal window
koevn@k8s-master-01:~$ mkdir -pv /home/koevn/.kube && sudo cp /opt/kubernetes/cfg/admin.kubeconfig /home/koevn/.kube/config && sudo chown -R koevn:koevn /home/koevn/.kube

Check the cluster status

Terminal window
koevn@k8s-master-01:~$ sudo kubectl get cs # View the health status of core components of the Kubernetes control plane
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
etcd-0 Healthy ok
controller-manager Healthy ok
scheduler Healthy ok

Apply bootstrap-token

Terminal window
koevn@k8s-master-01:~$ sudo cat > bootstrap.secret.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-79b841
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet'."
  token-id: 79b841
  token-secret: 0677456fb3b47289
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/stats
  - nodes/log
  - nodes/spec
  - nodes/metrics
  verbs:
  - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kube-apiserver
EOF

Apply the bootstrap.secret configuration to the cluster

Terminal window
koevn@k8s-master-01:~$ sudo kubectl create -f bootstrap.secret.yaml
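
You can confirm that the token Secret and the ClusterRoleBindings were created:

Terminal window
koevn@k8s-master-01:~$ sudo kubectl get secret bootstrap-token-79b841 -n kube-system
koevn@k8s-master-01:~$ sudo kubectl get clusterrolebinding kubelet-bootstrap node-autoapprove-bootstrap node-autoapprove-certificate-rotation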

3. Install the basic components of the Kubernetes cluster Worker

3.1.0 Required installation directory structure

Terminal window
koevn@k8s-worker-01:~$ sudo mkdir -pv /opt/kubernetes/{bin,cert,cfg,etc,manifests}
koevn@k8s-worker-01:~$ sudo tree /opt/kubernetes/ # View the directory structure
/opt/kubernetes/
├── bin
│   ├── kubectl
│   ├── kubelet
│   └── kube-proxy
├── cert # Copy the certificate file from the master node
│   ├── ca.pem
│   ├── front-proxy-ca.pem
│   ├── kube-proxy-key.pem
│   └── kube-proxy.pem
├── cfg # The kubeconfig file is copied from the master node
│   ├── bootstrap-kubelet.kubeconfig
│   ├── kubelet.kubeconfig
│   └── kube-proxy.kubeconfig
├── etc
│   ├── kubelet-conf.yml
│   └── kube-proxy.yaml
└── manifests
6 directories, 12 files

Kubernetes Worker nodes only need the containerd and kubelet components. kube-proxy is not strictly necessary, but you can still install and deploy it according to your needs.
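
A sketch of copying those files from k8s-master-01 to a worker (paths as in the tree above; run as root or adjust permissions first, since the key files are readable only by root, and the files are staged in /tmp because the koevn user cannot write to /opt directly):

Terminal window
# Run on k8s-master-01 for each worker node
koevn@k8s-master-01:~$ scp /opt/kubernetes/bin/{kubelet,kube-proxy,kubectl} k8s-worker-01:/tmp/
koevn@k8s-master-01:~$ scp /opt/kubernetes/cert/{ca.pem,front-proxy-ca.pem,kube-proxy.pem,kube-proxy-key.pem} k8s-worker-01:/tmp/
koevn@k8s-master-01:~$ scp /opt/kubernetes/cfg/{bootstrap-kubelet.kubeconfig,kube-proxy.kubeconfig} k8s-worker-01:/tmp/
# Then on the worker, move the files into /opt/kubernetes/bin, cert and cfg respectively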

3.2.0 Deploy kubelet service

This component is deployed on every node in the cluster; repeat the steps below on each node. Add the kubelet service file.

Terminal window
koevn@k8s-worker-01:~$ sudo cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=containerd.service
[Service]
ExecStart=/opt/kubernetes/bin/kubelet \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap-kubelet.kubeconfig \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--config=/opt/kubernetes/etc/kubelet-conf.yml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--node-labels=node.kubernetes.io/node=
[Install]
WantedBy=multi-user.target
EOF

Create and edit the kubelet configuration file

Terminal window
koevn@k8s-worker-01:~$ sudo cat > /opt/kubernetes/etc/kubelet-conf.yml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/cert/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.90.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /opt/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF

Start kubelet

Terminal window
koevn@k8s-worker-01:~$ sudo systemctl daemon-reload # Reload the systemd management unit
koevn@k8s-worker-01:~$ sudo systemctl enable --now kubelet.service # Enable and immediately start the kubelet.service unit
koevn@k8s-worker-01:~$ sudo systemctl status kubelet.service # View the current status of the kubelet.service service
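
With the auto-approval ClusterRoleBindings created earlier, each kubelet's bootstrap CSR should be approved automatically; this can be checked from a Master node:

Terminal window
# The CONDITION column should show Approved,Issued for every node
koevn@k8s-master-01:~$ sudo kubectl get csr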

3.3.0 Check the status of cluster nodes

Terminal window
koevn@k8s-master-01:~$ sudo kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master-01 NotReady <none> 50s v1.33.0
k8s-master-02 NotReady <none> 47s v1.33.0
k8s-master-03 NotReady <none> 40s v1.33.0
k8s-worker-01 NotReady <none> 35s v1.33.0
k8s-worker-02 NotReady <none> 28s v1.33.0
k8s-worker-03 NotReady <none> 11s v1.33.0

Since the CNI network plug-in is not installed in the Kubernetes cluster at this time, it is normal for the node STATUS to be in the NotReady state. Only when the CNI network plug-in runs normally without errors, the state is Ready.

3.4.0 View the Runtime version information of the cluster nodes

Terminal window
koevn@k8s-master-01:~$ sudo kubectl describe node | grep Runtime
Container Runtime Version: containerd://2.1.0
Container Runtime Version: containerd://2.1.0
Container Runtime Version: containerd://2.1.0
Container Runtime Version: containerd://2.1.0
Container Runtime Version: containerd://2.1.0
Container Runtime Version: containerd://2.1.0

3.5.0 Deploy kube-proxy service (optional)

This component is deployed on every node in the cluster where kube-proxy is wanted; repeat the steps below on each node. Add the kube-proxy service file.

Terminal window
koevn@k8s-worker-01:~$ sudo cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/opt/kubernetes/bin/kube-proxy \
--config=/opt/kubernetes/etc/kube-proxy.yaml \
--cluster-cidr=10.100.0.0/16 \
--v=2
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF

Create and edit the kube-proxy configuration file

Terminal window
koevn@k8s-worker-01:~$ sudo cat > /opt/kubernetes/etc/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 10.100.0.0/16
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF

Start kube-proxy

Terminal window
koevn@k8s-worker-01:~$ sudo systemctl daemon-reload # Reload the systemd management unit
koevn@k8s-worker-01:~$ sudo systemctl enable --now kube-proxy.service # Enable and immediately start the kube-proxy.service unit
koevn@k8s-worker-01:~$ sudo systemctl status kube-proxy.service # View the current status of the kube-proxy.service service

4. Installing the Cluster Network Plugin

⚠️ Note: The network plug-in installed here is cilium. Since this plug-in takes over the Kubernetes cluster networking, the kube-proxy service must be stopped with systemctl stop kube-proxy.service to avoid interfering with cilium. Deploy cilium from any one of the Master nodes.
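
If kube-proxy was deployed in the previous step, a sketch of shutting it down on a node; disable --now both stops the unit and prevents it from starting again at boot:

Terminal window
# Run on every node where kube-proxy is running
koevn@k8s-worker-01:~$ sudo systemctl disable --now kube-proxy.service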

4.1.0 Install Helm

Terminal window
koevn@k8s-master-01:~$ sudo curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
koevn@k8s-master-01:~$ sudo chmod 700 get_helm.sh
koevn@k8s-master-01:~$ sudo ./get_helm.sh
# Alternatively, install helm from the release tarball instead of the script
koevn@k8s-master-01:~$ sudo tar xvf helm-*-linux-amd64.tar.gz
koevn@k8s-master-01:~$ sudo cp linux-amd64/helm /usr/local/bin/

4.2.0 Install cilium

Terminal window
koevn@k8s-master-01:~$ sudo helm repo add cilium https://helm.cilium.io # Add Source
koevn@k8s-master-01:~$ sudo helm search repo cilium/cilium --versions # Search for supported versions
koevn@k8s-master-01:~$ sudo helm pull cilium/cilium # Pull the cilium installation package
koevn@k8s-master-01:~$ sudo tar xvf cilium-*.tgz
koevn@k8s-master-01:~$ cd cilium/

Edit and modify the following contents of the values.yaml file in the cilium directory

Terminal window
# ------ Other configuration omitted ------#
kubeProxyReplacement: "true" # Uncomment and change to true
k8sServiceHost: "kubernetes.default.svc.cluster.local" # This domain must match a SAN in the apiserver certificate you generated
k8sServicePort: "9443" # apiserver port
ipv4NativeRoutingCIDR: "10.100.0.0/16" # The Pod IPv4 network
hubble: # Enable specific Hubble metrics
  metrics:
    enabled:
      - dns
      - drop
      - tcp
      - flow
      - icmp
      - httpV2
# ------ Other configuration omitted ------#

⚠️ Note: For all the registry-related parameters elsewhere in the configuration file, if your network environment is poor it is recommended to point them at a private Harbor registry, so that the cluster's containerd does not fail to pull images and leave Pods unable to start.

Other features are enabled at the corresponding level of the values.yaml hierarchy:

  • Enable Hubble Relay aggregation service hubble.relay.enabled=true
  • Enable Hubble UI frontend (visualization page) hubble.ui.enabled=true
  • Enable Prometheus metrics output for Cilium prometheus.enabled=true
  • Enable Prometheus metrics output for Operators operator.prometheus.enabled=true
  • Enable Hubble hubble.enabled=true

The list above is a simplified representation of the values.yaml hierarchy. You can also enable a given feature with the --set flag, for example --set hubble.relay.enabled=true, as shown below.
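
For example, the same options could be passed at install time instead of editing values.yaml (a sketch using the switches listed above):

Terminal window
koevn@k8s-master-01:~$ sudo helm install cilium ./cilium \
--namespace kube-system \
--set hubble.enabled=true \
--set hubble.relay.enabled=true \
--set hubble.ui.enabled=true \
--set prometheus.enabled=true \
--set operator.prometheus.enabled=true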

Install cilium

Terminal window
koevn@k8s-master-01:~$ sudo helm install cilium ./cilium \
--namespace kube-system \
--create-namespace \
-f cilium/values.yaml

View Status

Terminal window
koevn@k8s-master-01:~$ sudo kubectl get pod -A | grep cil
cilium-monitoring grafana-679cd8bff-w9xm9 1/1 Running 0 5d
cilium-monitoring prometheus-87d4f66f7-6lmrz 1/1 Running 0 5d
kube-system cilium-envoy-4sjq5 1/1 Running 0 5d
kube-system cilium-envoy-64694 1/1 Running 0 5d
kube-system cilium-envoy-fzjjw 1/1 Running 0 5d
kube-system cilium-envoy-twtw6 1/1 Running 0 5d
kube-system cilium-envoy-vwstr 1/1 Running 0 5d
kube-system cilium-envoy-whck6 1/1 Running 0 5d
kube-system cilium-fkjcm 1/1 Running 0 5d
kube-system cilium-h75vq 1/1 Running 0 5d
kube-system cilium-hcx4q 1/1 Running 0 5d
kube-system cilium-jz44w 1/1 Running 0 5d
kube-system cilium-operator-58d8755c44-hnwmd 1/1 Running 57 (84m ago) 5d
kube-system cilium-operator-58d8755c44-xmg9f 1/1 Running 54 (82m ago) 5d
kube-system cilium-qx5mn 1/1 Running 0 5d
kube-system cilium-wqmzc 1/1 Running 0 5d

5. Deployment monitoring panel

5.1.0 Creating a monitoring service

Terminal window
koevn@k8s-master-01:~$ sudo wget https://raw.githubusercontent.com/cilium/cilium/1.12.1/examples/kubernetes/addons/prometheus/monitoring-example.yaml
koevn@k8s-master-01:~$ sudo kubectl apply -f monitoring-example.yaml

5.2.0 Change the cluster Services Type to NodePort

View Current Services

Terminal window
koevn@k8s-master-01:~$ sudo kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cilium-monitoring grafana ClusterIP 10.90.138.192 <none> 3000/TCP 1d
cilium-monitoring prometheus ClusterIP 10.90.145.89 <none> 9090/TCP 1d
default kubernetes ClusterIP 10.90.0.1 <none> 443/TCP 1d
kube-system cilium-envoy ClusterIP None <none> 9964/TCP 1d
kube-system coredns ClusterIP 10.90.0.10 <none> 53/UDP 1d
kube-system hubble-metrics ClusterIP None <none> 9965/TCP 1d
kube-system hubble-peer ClusterIP 10.90.63.177 <none> 443/TCP 1d
kube-system hubble-relay ClusterIP 10.90.157.4 <none> 80/TCP 1d
kube-system hubble-ui ClusterIP 10.90.13.110 <none> 80/TCP 1d
kube-system metrics-server ClusterIP 10.90.61.226 <none> 443/TCP 1d

Modify Services Type

Terminal window
koevn@k8s-master-01:~$ sudo kubectl edit svc -n cilium-monitoring grafana
koevn@k8s-master-01:~$ sudo kubectl edit svc -n cilium-monitoring prometheus
koevn@k8s-master-01:~$ sudo kubectl edit svc -n kube-system hubble-ui

Running any of the above commands opens the Service definition in YAML form; change type: ClusterIP to type: NodePort. To revert the change, set it back to type: ClusterIP.
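
If you prefer a non-interactive change, the same edit can be made with kubectl patch (shown for grafana; the other Services are analogous):

Terminal window
koevn@k8s-master-01:~$ sudo kubectl patch svc grafana -n cilium-monitoring -p '{"spec":{"type":"NodePort"}}'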

View the current Services again

Terminal window
koevn@k8s-master-01:~$ sudo kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cilium-monitoring grafana NodePort 10.90.138.192 <none> 3000:31240/TCP 1d
cilium-monitoring prometheus NodePort 10.90.145.89 <none> 9090:32044/TCP 1d
default kubernetes ClusterIP 10.90.0.1 <none> 443/TCP 1d
kube-system cilium-envoy ClusterIP None <none> 9964/TCP 1d
kube-system coredns ClusterIP 10.90.0.10 <none> 53/UDP 1d
kube-system hubble-metrics ClusterIP None <none> 9965/TCP 1d
kube-system hubble-peer ClusterIP 10.90.63.177 <none> 443/TCP 1d
kube-system hubble-relay ClusterIP 10.90.157.4 <none> 80/TCP 1d
kube-system hubble-ui NodePort 10.90.13.110 <none> 80:32166/TCP 1d
kube-system metrics-server ClusterIP 10.90.61.226 <none> 443/TCP 1d

Access the grafana service at http://10.88.12.61:31240 in a browser.

Access the prometheus service at http://10.88.12.61:32044 in a browser.

Access the hubble-ui service at http://10.88.12.61:32166 in a browser.

6. Install CoreDNS

6.1.0 Pull the CoreDNS installation package

Terminal window
koevn@k8s-master-01:~$ sudo helm repo add coredns https://coredns.github.io/helm
koevn@k8s-master-01:~$ sudo helm pull coredns/coredns
koevn@k8s-master-01:~$ sudo tar xvf coredns-*.tgz
koevn@k8s-master-01:~$ cd coredns/

Modify the values.yaml file

Terminal window
koevn@k8s-master-01:~/coredns$ sudo vim values.yaml
service:
  clusterIP: "10.90.0.10" # Uncomment and modify the specified IP

Install CoreDNS

Terminal window
koevn@k8s-master-01:~/coredns$ cd ..
koevn@k8s-master-01:~$ sudo helm install coredns ./coredns/ -n kube-system \
-f coredns/values.yaml

7. Install Metrics Server

Download the metrics-server yaml file

Terminal window
koevn@k8s-master-01:~$ sudo wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml \
-O metrics-server.yaml

Edit and modify the metrics-server.yaml file

# Modify the file at around line 134
- args:
  - --cert-dir=/tmp
  - --secure-port=10250
  - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
  - --kubelet-use-node-status-port
  - --metric-resolution=15s
  - --kubelet-insecure-tls
  - --requestheader-client-ca-file=/opt/kubernetes/cert/front-proxy-ca.pem
  - --requestheader-username-headers=X-Remote-User
  - --requestheader-group-headers=X-Remote-Group
  - --requestheader-extra-headers-prefix=X-Remote-Extra-
# Modify the file at around line 182
  volumeMounts:
  - mountPath: /tmp
    name: tmp-dir
  - name: ca-ssl
    mountPath: /opt/kubernetes/cert
volumes:
- emptyDir: {}
  name: tmp-dir
- name: ca-ssl
  hostPath:
    path: /opt/kubernetes/cert

Then deploy the application

Terminal window
koevn@k8s-master-01:~$ sudo kubectl apply -f metrics-server.yaml

View resource status

Terminal window
koevn@k8s-master-01:~$ sudo kubectl top node
NAME CPU(cores) CPU(%) MEMORY(bytes) MEMORY(%)
k8s-master-01 212m 5% 2318Mi 61%
k8s-master-02 148m 3% 2180Mi 57%
k8s-master-03 178m 4% 2103Mi 55%
k8s-worker-01 38m 1% 1377Mi 36%
k8s-worker-02 40m 2% 1452Mi 38%
k8s-worker-03 32m 1% 1475Mi 39%

8. Verifying intra-cluster communication

8.1.0 Deploy a pod resource

Terminal window
koevn@k8s-master-01:~$ cat > busybox.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: harbor.koevn.com/library/busybox:1.37.0 # Here I use a private registry
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

Then execute the deployment pod resources

Terminal window
koevn@k8s-master-01:~$ sudo kubectl apply -f busybox.yaml

View pod resources

Terminal window
koevn@k8s-master-01:~$ sudo kubectl get pod
NAME READY STATUS RESTARTS AGE
busybox 1/1 Running 0 36s

8.2.0 Resolving Services in other namespaces from a Pod

View Services

Terminal window
koevn@k8s-master-01:~$ sudo kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cilium-monitoring grafana NodePort 10.90.138.192 <none> 3000:31240/TCP 1d
cilium-monitoring prometheus NodePort 10.90.145.89 <none> 9090:32044/TCP 1d
default kubernetes ClusterIP 10.90.0.1 <none> 443/TCP 1d
kube-system cilium-envoy ClusterIP None <none> 9964/TCP 1d
kube-system coredns ClusterIP 10.90.0.10 <none> 53/UDP 1d
kube-system hubble-metrics ClusterIP None <none> 9965/TCP 1d
kube-system hubble-peer ClusterIP 10.90.63.177 <none> 443/TCP 1d
kube-system hubble-relay ClusterIP 10.90.157.4 <none> 80/TCP 1d
kube-system hubble-ui NodePort 10.90.13.110 <none> 80:32166/TCP 1d
kube-system metrics-server ClusterIP 10.90.61.226 <none> 443/TCP 1d

Test resolution

Terminal window
koevn@k8s-master-01:~$ sudo kubectl exec busybox -n default \
-- nslookup hubble-ui.kube-system.svc.cluster.local
Server: 10.90.0.10
Address: 10.90.0.10:53
Name: hubble-ui.kube-system.svc.cluster.local
Address: 10.90.13.110
koevn@k8s-master-01:~$ sudo kubectl exec busybox -n default \
-- nslookup grafana.cilium-monitoring.svc.cluster.local
Server: 10.90.0.10
Address: 10.90.0.10:53
Name: grafana.cilium-monitoring.svc.cluster.local
Address: 10.90.138.192
koevn@k8s-master-01:~$ sudo kubectl exec busybox -n default \
-- nslookup kubernetes.default.svc.cluster.local
Server: 10.90.0.10
Address: 10.90.0.10:53
Name: kubernetes.default.svc.cluster.local
Address: 10.90.0.1

8.3.0 Test the Kubernetes 443 port and the CoreDNS 53 port from each node

Terminal window
koevn@k8s-master-01:~$ sudo telnet 10.90.0.1 443
Trying 10.90.0.1...
Connected to 10.90.0.1.
Escape character is '^]'
koevn@k8s-master-01:~$ sudo nc -zvu 10.90.0.10 53
10.90.0.10: inverse host lookup failed: Unknown host
(UNKNOWN) [10.90.0.10] 53 (domain) open
koevn@k8s-worker-03:~$ sudo telnet 10.90.0.1 443
Trying 10.90.0.1...
Connected to 10.90.0.1.
Escape character is '^]'
koevn@k8s-worker-03:~$ sudo nc -zvu 10.90.0.10 53
10.90.0.10: inverse host lookup failed: Unknown host
(UNKNOWN) [10.90.0.10] 53 (domain) open

8.4.0 Test the network connectivity between Pods

View pod information

Terminal window
koevn@k8s-master-01:~$ sudo kubectl get pods -A -o \
custom-columns="NAMESPACE:.metadata.namespace,STATUS:.status.phase,NAME:.metadata.name,IP:.status.podIP"
NAMESPACE STATUS NAME IP
cilium-monitoring Running grafana-679cd8bff-w9xm9 10.100.4.249
cilium-monitoring Running prometheus-87d4f66f7-6lmrz 10.100.3.238
default Running busybox 10.100.4.232
kube-system Running cilium-envoy-4sjq5 10.88.12.63
kube-system Running cilium-envoy-64694 10.88.12.60
kube-system Running cilium-envoy-fzjjw 10.88.12.62
kube-system Running cilium-envoy-twtw6 10.88.12.61
kube-system Running cilium-envoy-vwstr 10.88.12.65
kube-system Running cilium-envoy-whck6 10.88.12.64
kube-system Running cilium-fkjcm 10.88.12.63
kube-system Running cilium-h75vq 10.88.12.64
kube-system Running cilium-hcx4q 10.88.12.61
kube-system Running cilium-jz44w 10.88.12.65
kube-system Running cilium-operator-58d8755c44-hnwmd 10.88.12.60
kube-system Running cilium-operator-58d8755c44-xmg9f 10.88.12.65
kube-system Running cilium-qx5mn 10.88.12.62
kube-system Running cilium-wqmzc 10.88.12.60
kube-system Running coredns-6f44546d75-qnl9d 10.100.0.187
kube-system Running hubble-relay-7cd9d88674-2tdcc 10.100.3.64
kube-system Running hubble-ui-9f5cdb9bd-8fwsx 10.100.2.214
kube-system Running metrics-server-76cb66cbf9-xfbzq 10.100.5.187

Enter the busybox pod and test the network

Terminal window
koevn@k8s-master-01:~$ sudo kubectl exec -ti busybox -- sh
/ # ping 10.100.4.249
PING 10.100.4.249 (10.100.4.249): 56 data bytes
64 bytes from 10.100.4.249: seq=0 ttl=63 time=0.472 ms
64 bytes from 10.100.4.249: seq=1 ttl=63 time=0.063 ms
64 bytes from 10.100.4.249: seq=2 ttl=63 time=0.064 ms
64 bytes from 10.100.4.249: seq=3 ttl=63 time=0.062 ms
/ # ping 10.88.12.63
PING 10.88.12.63 (10.88.12.63): 56 data bytes
64 bytes from 10.88.12.63: seq=0 ttl=62 time=0.352 ms
64 bytes from 10.88.12.63: seq=1 ttl=62 time=0.358 ms
64 bytes from 10.88.12.63: seq=2 ttl=62 time=0.405 ms
64 bytes from 10.88.12.63: seq=3 ttl=62 time=0.314 ms

9. Install dashboard

9.1.0 Pull the dashboard installation package

Terminal window
koevn@k8s-master-01:~$ sudo helm repo add kubernetes-dashboard \
https://kubernetes.github.io/dashboard/
koevn@k8s-master-01:~$ sudo helm search repo \
kubernetes-dashboard/kubernetes-dashboard --versions # View supported versions
NAME CHART VERSION APP VERSION DESCRIPTION
kubernetes-dashboard/kubernetes-dashboard 7.0.0 General-purpose web UI for Kubernetes clusters
kubernetes-dashboard/kubernetes-dashboard 6.0.8 v2.7.0 General-purpose web UI for Kubernetes clusters
kubernetes-dashboard/kubernetes-dashboard 6.0.7 v2.7.0 General-purpose web UI for Kubernetes clusters
koevn@k8s-master-01:~$ sudo helm pull kubernetes-dashboard/kubernetes-dashboard --version 6.0.8 # Specifying a version
koevn@k8s-master-01:~$ sudo tar xvf kubernetes-dashboard-*.tgz
koevn@k8s-master-01:~$ cd kubernetes-dashboard

Edit and modify the values.yaml file

Terminal window
image:
  repository: harbor.koevn.com/library/kubernetesui/dashboard # Here I use a private registry
  tag: "v2.7.0" # The default is empty; specify the version yourself

9.2.0 Deploy dashboard

Terminal window
koevn@k8s-master-01:~/kubernetes-dashboard$ cd ..
koevn@k8s-master-01:~$ sudo helm install dashboard kubernetes-dashboard/kubernetes-dashboard \
--version 6.0.8 \
-f kubernetes-dashboard/values.yaml \
--namespace kubernetes-dashboard \
--create-namespace

9.3.0 Temporary token

Terminal window
koevn@k8s-master-01:~$ cat > dashboard-user.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF

Execute the application

Terminal window
koevn@k8s-master-01:~$ sudo kubectl apply -f dashboard-user.yaml

Create a temporary token

Terminal window
koevn@k8s-master-01:~$ sudo kubectl -n kube-system create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6IjRkV3lkZTU1dF9mazczemwwaUdxUElPWDZhejFyaDh2ZzRhZmN5RlN3T0EifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzUwOTExMzc1LCJpYXQiOjE3NTA5MDc3NzUsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiMmQwZmViMzItNDJlNy00ODE0LWJmYjUtOGU5MTFhNWZhZDM2Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiZmEwZGFhNjAtNWEzNC00NTFjLWFiYmUtNTQ2Y2EwOWVkNWQyIn19LCJuYmYiOjE3NTA5MDc3NzUsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbi11c2VyIn0.XbF8s70UTzrOnf8TrkyC7-3vdxLKQU1aQiCRGgRWyzh4e-A0uFgjVfQ17IrFsZEOVpE8H9ydNc3dZbP81apnGegeFZ42J7KmUkSUJnh5UbiKjmfWwK9ysoP-bba5nnq1uWB_iFR6r4vr6Q_B4-YyAn1DVy70VNaHrfyakyvpJ69L-5eH2jHXn68uizXdi4brf2YEAwDlmBWufeQqtPx7pdnF5HNMyt56oxQb2S2gNrgwLvb8WV2cIKE3DvjQYfcQeaufWK3gn0y-2h5-3z3r4084vHrXAYJRkPmOKy7Fh-DZ8t1g7icNfDRg4geI48WrMH2vOk3E_cpjRxS7dC5P9A

9.4.0 Creating a long-term token

Terminal window
koevn@k8s-master-01:~$ cat > dashboard-user-token.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: admin-user
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: "admin-user"
type: kubernetes.io/service-account-token
EOF

Execute the application

Terminal window
koevn@k8s-master-01:~$ sudo kubectl apply -f dashboard-user-token.yaml

View token

Terminal window
koevn@k8s-master-01:~$ sudo kubectl get secret admin-user -n kube-system -o jsonpath={".data.token"} | base64 -d
eyJhbGciOiJSUzI1NiIsImtpZCI6IjRkV3lkZTU1dF9mazczemwwaUdxUElPWDZhejFyaDh2ZzRhZmN5RlN3T0EifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzUwOTExNzM5LCJpYXQiOjE3NTA5MDgxMzksImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiMmZmNTE5NGEtOTlkMC00MDJmLTljNWUtMGQxOTMyZDkxNjgwIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiZmEwZGFhNjAtNWEzNC00NTFjLWFiYmUtNTQ2Y2EwOWVkNWQyIn19LCJuYmYiOjE3NTA5MDgxMzksInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbi11c2VyIn0.LJa37eblpk_OBWGqRgX2f_aCMiCrrpjY57dW8bskKUWu7ldLhgR6JIbICI0NVnvX4by3RX9v_FGnAPwU821VDp05oYT1KcTDXV1BC57G4QGL4kS9tBOrmRyXY0jxB8ETRmGx8ECiCJqNfrVdT99dm8oaFqJx1zq6jut70UwhxQCIh7C-QVqg6Gybbb3a9x25M2YvVHWStduN_swMOQxyQDBRtA0ARAyClu73o36pDCs_a56GizGspA4bvHpHPT-_y1i3EkeVjMsEl6JQ0PeJNQiM4fBvtJ2I_0kqoEHNMRYzZXEQNETXF9vqjkiEg7XBlKe1L2Ke1-xwK5ZBKFnPOg

9.5.0 Log in to the dashboard

Edit the dashboard Service

Terminal window
koevn@k8s-master-01:~$ sudo kubectl edit svc -n kube-system kubernetes-dashboard
# In the editor, change spec.type from ClusterIP to NodePort
type: NodePort

View the mapped ports

Terminal window
koevn@k8s-master-01:~$ sudo kubectl get svc kubernetes-dashboard -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 10.90.211.135 <none> 443:32744/TCP 5d

Access https://10.88.12.62:32744 in a browser and log in with the token created above.