A Kubernetes cluster I deployed a few years ago is gradually due for an upgrade. Since I stopped keeping notes long ago, I have grown rusty on the deployment procedure, so this walkthrough redeploys a newer version of the cluster and brings my notes up to date.

1. Environment

(1) Cluster environment

| Hostname | IP | Role | Components |
| --- | --- | --- | --- |
| k8s-master-01 | 10.88.12.60 | Master | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, haproxy, containerd |
| k8s-master-02 | 10.88.12.61 | Master | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, containerd |
| k8s-master-03 | 10.88.12.62 | Master | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, containerd |
| k8s-worker-01 | 10.88.12.63 | Worker | kubelet, containerd |
| k8s-worker-02 | 10.88.12.64 | Worker | kubelet, containerd |
| k8s-worker-03 | 10.88.12.65 | Worker | kubelet, containerd |
| — | 10.88.12.100 | VIP | Normally a floating address bound by keepalived; for simplicity it is configured directly on an extra NIC this time |

You may notice that kube-proxy is missing from the component list: this cluster uses cilium in place of kube-proxy. I still install kube-proxy alongside it during deployment, though. Also note that every node needs the runc binary.
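For reference, a minimal sketch of installing runc on a node; it assumes the v1.3.0 release binary has already been downloaded under the asset name runc.amd64:

Terminal window
# runc ships as a single static binary
koevn@k8s-master-01:~$ sudo install -m 755 runc.amd64 /usr/local/bin/runc
# confirm the node can run it
koevn@k8s-master-01:~$ runc --version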

(2) Network

| Type | CIDR | Description |
| --- | --- | --- |
| Host | 10.88.12.0/24 | Physical/virtual machine network |
| Cluster | 10.90.0.0/16 | Cluster Service CIDR |
| Pod | 10.100.0.0/16 | Cluster Pod CIDR |

(3) Software versions

| Software | Version |
| --- | --- |
| cfssl | 1.6.5 |
| containerd | v2.1.0 |
| crictl | 0.1.0 |
| runc | 1.3.0 |
| helm | v3.18.0 |
| os_kernel | 6.1.0-29 |

As for other base system configuration (installing package dependencies, tuning kernel parameters, raising the file-descriptor limits, enabling kernel modules such as ipvs, time synchronization across the cluster, disabling SELinux, disabling swap, and so on), set it up to your own needs with a quick search; it is not covered here.

2. Install the basic Kubernetes master components

2.1.0 Deploy containerd

Run these steps on every node in the cluster.

Terminal window
# Extract into /usr/local/ (the archive contains a bin/ directory)
koevn@k8s-master-01:~$ sudo tar -xvf containerd-2.1.0-linux-amd64.tar.gz -C /usr/local/
# Create the systemd unit
koevn@k8s-master-01:~$ sudo tee /etc/systemd/system/containerd.service << EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target
[Service]
ExecStart=/usr/local/bin/containerd --config /etc/containerd/config.toml
Delegate=yes
KillMode=process
Restart=always
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
[Install]
WantedBy=multi-user.target
EOF
# Create the containerd config files
koevn@k8s-master-01:~$ sudo mkdir -pv /etc/containerd/certs.d/harbor.koevn.com
koevn@k8s-master-01:~$ containerd config default | sudo tee /etc/containerd/config.toml
koevn@k8s-master-01:~$ sudo touch /etc/containerd/certs.d/harbor.koevn.com/hosts.toml
# Edit /etc/containerd/config.toml
[plugins]
  [plugins.'io.containerd.cri.v1.images']
    snapshotter = 'overlayfs'
    disable_snapshot_annotations = true
    discard_unpacked_layers = false
    max_concurrent_downloads = 3
    image_pull_progress_timeout = '5m0s'
    image_pull_with_sync_fs = false
    stats_collect_period = 10
    [plugins.'io.containerd.cri.v1.images'.pinned_images]
      sandbox = 'harbor.koevn.com/k8s/pause:3.10' # point this at your own registry; mine is a private one
    [plugins.'io.containerd.cri.v1.images'.registry]
      config_path = "/etc/containerd/certs.d" # registry host configuration directory
    [plugins.'io.containerd.cri.v1.images'.image_decryption]
      key_model = 'node'
# Edit /etc/containerd/certs.d/harbor.koevn.com/hosts.toml
server = "https://harbor.koevn.com"

[host."https://harbor.koevn.com"]
  capabilities = ["pull", "resolve", "push"]
  skip_verify = true
  username = "admin"
  password = "K123456"
# Reload systemd
koevn@k8s-master-01:~$ sudo systemctl daemon-reload
# Start containerd
koevn@k8s-master-01:~$ sudo systemctl start containerd.service
# Check its status
koevn@k8s-master-01:~$ sudo systemctl status containerd.service
# Enable containerd at boot
koevn@k8s-master-01:~$ sudo systemctl enable containerd.service

2.1.1 Configure crictl

Terminal window
# Extract
koevn@k8s-master-01:~$ sudo tar xf crictl-v*-linux-amd64.tar.gz -C /usr/sbin/
# Write the config file
koevn@k8s-master-01:~$ sudo tee /etc/crictl.yaml << EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF
# Verify
koevn@k8s-master-01:~$ sudo crictl info
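As a quick end-to-end check of the runtime and registry configuration, one option is pulling the pinned pause image through the CRI; the credentials are passed explicitly here rather than relying on the hosts.toml auth fields:

Terminal window
# pull via the CRI endpoint configured in /etc/crictl.yaml
koevn@k8s-master-01:~$ sudo crictl pull --creds admin:K123456 harbor.koevn.com/k8s/pause:3.10
# the image should now be listed
koevn@k8s-master-01:~$ sudo crictl images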

2.2.0 Deploy etcd

This service runs on the three master nodes.

2.2.1 Prepare the etcd environment

Terminal window
# Move the downloaded cfssl tools into /usr/local/bin
koevn@k8s-master-01:/tmp$ sudo mv cfssl_1.6.5_linux_amd64 /usr/local/bin/cfssl
koevn@k8s-master-01:/tmp$ sudo mv cfssljson_1.6.5_linux_amd64 /usr/local/bin/cfssljson
koevn@k8s-master-01:/tmp$ sudo mv cfssl-certinfo_1.6.5_linux_amd64 /usr/local/bin/cfssl-certinfo
koevn@k8s-master-01:/tmp$ sudo chmod +x /usr/local/bin/cfssl*
# Create the etcd service directories
koevn@k8s-master-01:~$ sudo mkdir -pv /opt/etcd/{bin,cert,config}
# Extract the etcd release and install the binaries
koevn@k8s-master-01:/tmp$ sudo tar -xvf etcd*.tar.gz && sudo mv etcd-*/etcd etcd-*/etcdctl etcd-*/etcdutl /opt/etcd/bin/
# Add etcd to the system PATH
koevn@k8s-master-01:/tmp$ echo 'export PATH=/opt/etcd/bin:$PATH' | sudo tee /etc/profile.d/etcd.sh && source /etc/profile.d/etcd.sh
# Create the certificate working directory
koevn@k8s-master-01:/tmp$ sudo mkdir -pv /opt/cert/etcd

2.2.2 Generate the etcd certificates

Create the CA config file

Terminal window
koevn@k8s-master-01:/tmp$ cd /opt/cert
koevn@k8s-master-01:/opt/cert$ sudo tee ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
EOF

Create the CA certificate signing request

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo tee etcd-ca-csr.json << EOF
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
EOF

Generate the etcd CA certificate

Terminal window
koevn@k8s-master-01:/opt/cert$ cfssl gencert -initca etcd-ca-csr.json | sudo cfssljson -bare /opt/cert/etcd/etcd-ca

Create the etcd certificate signing request

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo tee etcd-csr.json << EOF
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ]
}
EOF

Generate the etcd certificate

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo cfssl gencert \
-ca=/opt/cert/etcd/etcd-ca.pem \
-ca-key=/opt/cert/etcd/etcd-ca-key.pem \
-config=ca-config.json \
-hostname=127.0.0.1,k8s-master-01,k8s-master-02,k8s-master-03,10.88.12.60,10.88.12.61,10.88.12.62 \
-profile=kubernetes \
etcd-csr.json | sudo cfssljson -bare /opt/cert/etcd/etcd

Copy the etcd*.pem certificates generated under /opt/cert/etcd into /opt/etcd/cert/.
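These are exactly the three files the etcd configuration below references, so a plain copy is enough:

Terminal window
koevn@k8s-master-01:~$ sudo cp /opt/cert/etcd/{etcd-ca.pem,etcd.pem,etcd-key.pem} /opt/etcd/cert/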

2.2.3 Create the etcd config file

Terminal window
koevn@k8s-master-01:~$ sudo tee /opt/etcd/config/etcd.config.yml << EOF
name: 'k8s-master-01' # change to each node's own name
data-dir: /data/etcd/data
wal-dir: /data/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.88.12.60:2380' # change to each node's own IP
listen-client-urls: 'https://10.88.12.60:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.88.12.60:2380'
advertise-client-urls: 'https://10.88.12.60:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master-01=https://10.88.12.60:2380,k8s-master-02=https://10.88.12.61:2380,k8s-master-03=https://10.88.12.62:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/opt/etcd/cert/etcd.pem'
  key-file: '/opt/etcd/cert/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/opt/etcd/cert/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/opt/etcd/cert/etcd.pem'
  key-file: '/opt/etcd/cert/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/opt/etcd/cert/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

Inspect the etcd directory layout

Terminal window
koevn@k8s-master-01:~$ sudo tree /opt/etcd/
/opt/etcd/
├── bin
│   ├── etcd
│   ├── etcdctl
│   └── etcdutl
├── cert
│   ├── etcd-ca.pem
│   ├── etcd-key.pem
│   └── etcd.pem
└── config
    └── etcd.config.yml
4 directories, 7 files

Then package etcd and copy it to the other master nodes:

Terminal window
koevn@k8s-master-01:~$ sudo tar -czvf /tmp/etcd.tar.gz /opt/etcd
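A possible distribution loop, assuming SSH access from master-01 to the other masters (GNU tar strips the leading / when creating the archive, so extracting with -C / restores /opt/etcd). Remember to adjust name and the listen/advertise addresses in etcd.config.yml on each node afterwards:

Terminal window
koevn@k8s-master-01:~$ for node in 10.88.12.61 10.88.12.62; do
  scp /tmp/etcd.tar.gz ${node}:/tmp/                     # copy the bundle over
  ssh ${node} "sudo tar -xzvf /tmp/etcd.tar.gz -C /"     # restore it under /opt/etcd
done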

2.2.4 Configure the etcd service on each node

Terminal window
koevn@k8s-master-01:~$ sudo tee /etc/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target
[Service]
Type=notify
ExecStart=/opt/etcd/bin/etcd --config-file=/opt/etcd/config/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF

Reload systemd with systemctl daemon-reload, start the service with systemctl start etcd.service, check it with systemctl status etcd.service, and enable it at boot with systemctl enable etcd.service.

2.2.5 Check the etcd status

Terminal window
koevn@k8s-master-01:~$ sudo -i
root@k8s-master-01:~# export ETCDCTL_API=3
root@k8s-master-01:~# etcdctl --endpoints="10.88.12.60:2379,10.88.12.61:2379,10.88.12.62:2379" \
> --cacert=/opt/etcd/cert/etcd-ca.pem \
> --cert=/opt/etcd/cert/etcd.pem \
> --key=/opt/etcd/cert/etcd-key.pem \
> endpoint status \
> --write-out=table
+------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|     ENDPOINT     |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 10.88.12.60:2379 | c1f013862d84cb12 |  3.5.21 |   35 MB |     false |      false |       158 |    8434769 |            8434769 |        |
| 10.88.12.61:2379 | cea2c8779e4914d0 |  3.5.21 |   36 MB |     false |      false |       158 |    8434769 |            8434769 |        |
| 10.88.12.62:2379 | df3b2276b87896db |  3.5.21 |   33 MB |      true |      false |       158 |    8434769 |            8434769 |        |
+------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
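Besides endpoint status, endpoint health is a quick liveness check against each member:

Terminal window
root@k8s-master-01:~# etcdctl --endpoints="10.88.12.60:2379,10.88.12.61:2379,10.88.12.62:2379" \
--cacert=/opt/etcd/cert/etcd-ca.pem \
--cert=/opt/etcd/cert/etcd.pem \
--key=/opt/etcd/cert/etcd-key.pem \
endpoint health   # each endpoint should report healthy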

2.3.0 Generate the Kubernetes certificates

Create the certificate directories

Terminal window
koevn@k8s-master-01:~$ sudo mkdir -pv /opt/cert/kubernetes/pki
koevn@k8s-master-01:~$ sudo mkdir -pv /opt/kubernetes/cfg # directory for the cluster kubeconfig files
koevn@k8s-master-01:~$ cd /opt/cert

2.3.1 Generate the Kubernetes CA certificate

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo tee ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "Kubernetes-manual"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
EOF

Generate the CA certificate

Terminal window
koevn@k8s-master-01:/opt/cert$ cfssl gencert -initca ca-csr.json | sudo cfssljson -bare /opt/cert/kubernetes/pki/ca

2.3.2 Generate the apiserver certificate

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo tee apiserver-csr.json << EOF
{
  "CN": "kube-apiserver",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
koevn@k8s-master-01:/opt/cert$ sudo cfssl gencert \
-ca=/opt/cert/kubernetes/pki/ca.pem \
-ca-key=/opt/cert/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-hostname=10.90.0.1,10.88.12.100,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,k8s.koevn.com,10.88.12.60,10.88.12.61,10.88.12.62,10.88.12.63,10.88.12.64,10.88.12.65,10.88.12.66,10.88.12.67,10.88.12.68 \
-profile=kubernetes apiserver-csr.json | sudo cfssljson -bare /opt/cert/kubernetes/pki/apiserver
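To double-check that every IP and DNS name made it into the certificate, the SAN list can be inspected with openssl:

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo openssl x509 -in /opt/cert/kubernetes/pki/apiserver.pem \
-noout -text | grep -A1 "Subject Alternative Name"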

2.3.3 Generate the apiserver aggregation-layer certificates

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo tee front-proxy-ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "ca": {
    "expiry": "876000h"
  }
}
EOF
koevn@k8s-master-01:/opt/cert$ sudo cfssl gencert \
-initca front-proxy-ca-csr.json | sudo cfssljson -bare /opt/cert/kubernetes/pki/front-proxy-ca
koevn@k8s-master-01:/opt/cert$ sudo tee front-proxy-client-csr.json << EOF
{
  "CN": "front-proxy-client",
  "key": {
    "algo": "rsa",
    "size": 2048
  }
}
EOF
koevn@k8s-master-01:/opt/cert$ sudo cfssl gencert \
-ca=/opt/cert/kubernetes/pki/front-proxy-ca.pem \
-ca-key=/opt/cert/kubernetes/pki/front-proxy-ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
front-proxy-client-csr.json | sudo cfssljson -bare /opt/cert/kubernetes/pki/front-proxy-client

2.3.4 Generate the controller-manager certificate

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo tee manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
# sign with the cluster CA (ca.pem), not the front-proxy CA, or the apiserver will reject this client certificate
koevn@k8s-master-01:/opt/cert$ sudo cfssl gencert \
-ca=/opt/cert/kubernetes/pki/ca.pem \
-ca-key=/opt/cert/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
manager-csr.json | sudo cfssljson -bare /opt/cert/kubernetes/pki/controller-manager
Set the cluster entry

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo kubectl config set-cluster kubernetes \
--certificate-authority=/opt/cert/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://10.88.12.100:9443 \
--kubeconfig=/opt/kubernetes/cfg/controller-manager.kubeconfig

⚠️ Note: the --server value must point at the keepalived high-availability VIP; port 9443 is the port haproxy listens on.

Set the context

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo kubectl config set-context system:kube-controller-manager@kubernetes \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=/opt/kubernetes/cfg/controller-manager.kubeconfig

Set the user entry

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/opt/cert/kubernetes/pki/controller-manager.pem \
--client-key=/opt/cert/kubernetes/pki/controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=/opt/kubernetes/cfg/controller-manager.kubeconfig

Set the default context

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo kubectl config use-context system:kube-controller-manager@kubernetes \
--kubeconfig=/opt/kubernetes/cfg/controller-manager.kubeconfig

2.3.5 Generate the kube-scheduler certificate

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo tee scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
koevn@k8s-master-01:/opt/cert$ sudo cfssl gencert \
-ca=/opt/cert/kubernetes/pki/ca.pem \
-ca-key=/opt/cert/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
scheduler-csr.json | sudo cfssljson -bare /opt/cert/kubernetes/pki/scheduler
koevn@k8s-master-01:/opt/cert$ sudo kubectl config set-cluster kubernetes \
--certificate-authority=/opt/cert/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://10.88.12.100:9443 \
--kubeconfig=/opt/kubernetes/cfg/scheduler.kubeconfig
koevn@k8s-master-01:/opt/cert$ sudo kubectl config set-credentials system:kube-scheduler \
--client-certificate=/opt/cert/kubernetes/pki/scheduler.pem \
--client-key=/opt/cert/kubernetes/pki/scheduler-key.pem \
--embed-certs=true \
--kubeconfig=/opt/kubernetes/cfg/scheduler.kubeconfig
koevn@k8s-master-01:/opt/cert$ sudo kubectl config set-context system:kube-scheduler@kubernetes \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=/opt/kubernetes/cfg/scheduler.kubeconfig
koevn@k8s-master-01:/opt/cert$ sudo kubectl config use-context system:kube-scheduler@kubernetes \
--kubeconfig=/opt/kubernetes/cfg/scheduler.kubeconfig

2.3.6 Generate the admin certificate

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo tee admin-csr.json << EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
koevn@k8s-master-01:/opt/cert$ sudo cfssl gencert \
-ca=/opt/cert/kubernetes/pki/ca.pem \
-ca-key=/opt/cert/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | sudo cfssljson -bare /opt/cert/kubernetes/pki/admin
koevn@k8s-master-01:/opt/cert$ sudo kubectl config set-cluster kubernetes \
--certificate-authority=/opt/cert/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://10.88.12.100:9443 \
--kubeconfig=/opt/kubernetes/cfg/admin.kubeconfig
koevn@k8s-master-01:/opt/cert$ sudo kubectl config set-credentials kubernetes-admin \
--client-certificate=/opt/cert/kubernetes/pki/admin.pem \
--client-key=/opt/cert/kubernetes/pki/admin-key.pem \
--embed-certs=true \
--kubeconfig=/opt/kubernetes/cfg/admin.kubeconfig
koevn@k8s-master-01:/opt/cert$ sudo kubectl config set-context kubernetes-admin@kubernetes \
--cluster=kubernetes \
--user=kubernetes-admin \
--kubeconfig=/opt/kubernetes/cfg/admin.kubeconfig
koevn@k8s-master-01:/opt/cert$ sudo kubectl config use-context kubernetes-admin@kubernetes \
--kubeconfig=/opt/kubernetes/cfg/admin.kubeconfig

2.3.7 Generate the kube-proxy certificate (optional)

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo tee kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-proxy",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
koevn@k8s-master-01:/opt/cert$ sudo cfssl gencert \
-ca=/opt/cert/kubernetes/pki/ca.pem \
-ca-key=/opt/cert/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | sudo cfssljson -bare /opt/cert/kubernetes/pki/kube-proxy
koevn@k8s-master-01:/opt/cert$ sudo kubectl config set-cluster kubernetes \
--certificate-authority=/opt/cert/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://10.88.12.100:9443 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
koevn@k8s-master-01:/opt/cert$ sudo kubectl config set-credentials kube-proxy \
--client-certificate=/opt/cert/kubernetes/pki/kube-proxy.pem \
--client-key=/opt/cert/kubernetes/pki/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
koevn@k8s-master-01:/opt/cert$ sudo kubectl config set-context kube-proxy@kubernetes \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
koevn@k8s-master-01:/opt/cert$ sudo kubectl config use-context kube-proxy@kubernetes \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig

2.3.8 Create the ServiceAccount key pair

Terminal window
koevn@k8s-master-01:/opt/cert$ sudo openssl genrsa -out /opt/cert/kubernetes/pki/sa.key 2048
koevn@k8s-master-01:/opt/cert$ sudo openssl rsa -in /opt/cert/kubernetes/pki/sa.key \
-pubout -out /opt/cert/kubernetes/pki/sa.pub

2.3.9 Review the generated certificates

Terminal window
koevn@k8s-master-01:~$ sudo ls -l /opt/cert/kubernetes/pki
-rw-r--r-- 1 root root 1025 May 20 17:11 admin.csr
-rw------- 1 root root 1679 May 20 17:11 admin-key.pem
-rw-r--r-- 1 root root 1444 May 20 17:11 admin.pem
-rw-r--r-- 1 root root 1383 May 20 16:25 apiserver.csr
-rw------- 1 root root 1675 May 20 16:25 apiserver-key.pem
-rw-r--r-- 1 root root 1777 May 20 16:25 apiserver.pem
-rw-r--r-- 1 root root 1070 May 20 11:58 ca.csr
-rw------- 1 root root 1679 May 20 11:58 ca-key.pem
-rw-r--r-- 1 root root 1363 May 20 11:58 ca.pem
-rw-r--r-- 1 root root 1082 May 20 16:38 controller-manager.csr
-rw------- 1 root root 1679 May 20 16:38 controller-manager-key.pem
-rw-r--r-- 1 root root 1501 May 20 16:38 controller-manager.pem
-rw-r--r-- 1 root root 940 May 20 16:26 front-proxy-ca.csr
-rw------- 1 root root 1675 May 20 16:26 front-proxy-ca-key.pem
-rw-r--r-- 1 root root 1094 May 20 16:26 front-proxy-ca.pem
-rw-r--r-- 1 root root 903 May 20 16:30 front-proxy-client.csr
-rw------- 1 root root 1679 May 20 16:30 front-proxy-client-key.pem
-rw-r--r-- 1 root root 1188 May 20 16:30 front-proxy-client.pem
-rw-r--r-- 1 root root 1045 May 20 18:30 kube-proxy.csr
-rw------- 1 root root 1679 May 20 18:30 kube-proxy-key.pem
-rw-r--r-- 1 root root 1464 May 20 18:30 kube-proxy.pem
-rw------- 1 root root 1704 May 21 09:21 sa.key
-rw-r--r-- 1 root root 451 May 21 09:21 sa.pub
-rw-r--r-- 1 root root 1058 May 20 17:00 scheduler.csr
-rw------- 1 root root 1679 May 20 17:00 scheduler-key.pem
-rw-r--r-- 1 root root 1476 May 20 17:00 scheduler.pem
koevn@k8s-master-01:~$ ls /opt/cert/kubernetes/pki/ | wc -l # total file count
26

2.3.10 Install the Kubernetes binaries and distribute the certificates (all master nodes)

Terminal window
koevn@k8s-master-01:~$ sudo mkdir -pv /opt/kubernetes/{bin,etc,cert,cfg,manifests}
koevn@k8s-master-01:~$ sudo tree /opt/kubernetes # view the layout
/opt/kubernetes
├── bin
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   ├── kubectl
│   ├── kubelet
│   ├── kube-proxy
│   └── kube-scheduler
├── cert
│   ├── admin-key.pem
│   ├── admin.pem
│   ├── apiserver-key.pem
│   ├── apiserver.pem
│   ├── ca-key.pem
│   ├── ca.pem
│   ├── controller-manager-key.pem
│   ├── controller-manager.pem
│   ├── front-proxy-ca.pem
│   ├── front-proxy-client-key.pem
│   ├── front-proxy-client.pem
│   ├── kube-proxy-key.pem
│   ├── kube-proxy.pem
│   ├── sa.key
│   ├── sa.pub
│   ├── scheduler-key.pem
│   └── scheduler.pem
├── cfg
│   ├── admin.kubeconfig
│   ├── bootstrap-kubelet.kubeconfig
│   ├── bootstrap.secret.yaml
│   ├── controller-manager.kubeconfig
│   ├── kubelet.kubeconfig
│   ├── kube-proxy.kubeconfig
│   └── scheduler.kubeconfig
├── etc
│   ├── kubelet-conf.yml
│   └── kube-proxy.yaml
└── manifests
6 directories, 32 files
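The bin/ directory is populated from the kubernetes-server release tarball, and cert/ and cfg/ from the files generated above. A possible distribution sketch to the other masters, assuming SSH access (the paths are identical on every node):

Terminal window
koevn@k8s-master-01:~$ sudo tar -czf /tmp/kubernetes.tar.gz /opt/kubernetes
koevn@k8s-master-01:~$ for node in 10.88.12.61 10.88.12.62; do
  scp /tmp/kubernetes.tar.gz ${node}:/tmp/
  ssh ${node} "sudo tar -xzf /tmp/kubernetes.tar.gz -C /"   # restores /opt/kubernetes
done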

2.4.0 Deploy the haproxy load balancer

⚠️ Note: apiserver high availability is normally a haproxy plus keepalived combination. To keep this step simple, I add an extra NIC on master-01 and give it a stand-in VIP directly; in production, use haproxy plus keepalived for real redundancy.
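For reference, a sketch of putting the stand-in VIP on the extra NIC; ens19 is an assumed interface name, and the address will not survive a reboot unless made persistent in your network configuration:

Terminal window
koevn@k8s-master-01:~$ sudo ip addr add 10.88.12.100/24 dev ens19   # adjust the interface name
koevn@k8s-master-01:~$ ip addr show dev ens19                       # confirm the VIP is present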

2.4.1 Install haproxy

Terminal window
koevn@k8s-master-01:~$ sudo apt install haproxy

2.4.2 Edit the haproxy config file

Terminal window
koevn@k8s-master-01:~$ sudo tee /etc/haproxy/haproxy.cfg << EOF
global
  maxconn 2000
  ulimit-n 16384
  log 127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode http
  option httplog
  timeout connect 5000
  timeout client 50000
  timeout server 50000
  timeout http-request 15s
  timeout http-keep-alive 15s

listen stats
  mode http
  bind 0.0.0.0:9999
  stats enable
  log global
  stats uri /haproxy-status
  stats auth admin:K123456

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

frontend k8s-master
  bind 0.0.0.0:9443
  bind 127.0.0.1:9443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master-01 10.88.12.60:6443 check
  server k8s-master-02 10.88.12.61:6443 check
  server k8s-master-03 10.88.12.62:6443 check
EOF

Start the service

Terminal window
koevn@k8s-master-01:~$ sudo systemctl daemon-reload
koevn@k8s-master-01:~$ sudo systemctl enable --now haproxy.service

Browse to http://10.88.12.100:9999/haproxy-status to view haproxy's status page.

2.5.0 Configure the kube-apiserver service (all master nodes)

Create the kube-apiserver systemd unit. On each master, set --advertise-address to that node's own IP, and adjust --service-cluster-ip-range to your Service CIDR:

Terminal window
koevn@k8s-master-01:~$ sudo tee /etc/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/opt/kubernetes/bin/kube-apiserver \
--v=2 \
--allow-privileged=true \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--advertise-address=10.88.12.60 \
--service-cluster-ip-range=10.90.0.0/16 \
--service-node-port-range=30000-32767 \
--etcd-servers=https://10.88.12.60:2379,https://10.88.12.61:2379,https://10.88.12.62:2379 \
--etcd-cafile=/opt/etcd/cert/etcd-ca.pem \
--etcd-certfile=/opt/etcd/cert/etcd.pem \
--etcd-keyfile=/opt/etcd/cert/etcd-key.pem \
--client-ca-file=/opt/kubernetes/cert/ca.pem \
--tls-cert-file=/opt/kubernetes/cert/apiserver.pem \
--tls-private-key-file=/opt/kubernetes/cert/apiserver-key.pem \
--kubelet-client-certificate=/opt/kubernetes/cert/apiserver.pem \
--kubelet-client-key=/opt/kubernetes/cert/apiserver-key.pem \
--service-account-key-file=/opt/kubernetes/cert/sa.pub \
--service-account-signing-key-file=/opt/kubernetes/cert/sa.key \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--requestheader-client-ca-file=/opt/kubernetes/cert/front-proxy-ca.pem \
--proxy-client-cert-file=/opt/kubernetes/cert/front-proxy-client.pem \
--proxy-client-key-file=/opt/kubernetes/cert/front-proxy-client-key.pem \
--requestheader-allowed-names=aggregator \
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-username-headers=X-Remote-User \
--enable-aggregator-routing=true
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
EOF

Reload systemd

Terminal window
koevn@k8s-master-01:~$ sudo systemctl daemon-reload

Start kube-apiserver

Terminal window
koevn@k8s-master-01:~$ sudo systemctl enable --now kube-apiserver.service

Check the kube-apiserver status

Terminal window
koevn@k8s-master-01:~$ sudo systemctl status kube-apiserver.service
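A simple smoke test is hitting the local apiserver's healthz endpoint with the admin client certificate; it should print ok:

Terminal window
koevn@k8s-master-01:~$ sudo curl --cacert /opt/kubernetes/cert/ca.pem \
--cert /opt/kubernetes/cert/admin.pem \
--key /opt/kubernetes/cert/admin-key.pem \
https://127.0.0.1:6443/healthz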

2.6.0 Configure the kube-controller-manager service (all master nodes)

Create the kube-controller-manager systemd unit. Adjust --service-cluster-ip-range (Service CIDR), --cluster-cidr (Pod CIDR), and --node-cidr-mask-size-ipv4 (per-node subnet mask, 24 here) to match your network plan:

Terminal window
koevn@k8s-master-01:~$ sudo tee /etc/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/opt/kubernetes/bin/kube-controller-manager \
--v=2 \
--root-ca-file=/opt/kubernetes/cert/ca.pem \
--cluster-signing-cert-file=/opt/kubernetes/cert/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/cert/ca-key.pem \
--service-account-private-key-file=/opt/kubernetes/cert/sa.key \
--kubeconfig=/opt/kubernetes/cfg/controller-manager.kubeconfig \
--leader-elect=true \
--use-service-account-credentials=true \
--node-monitor-grace-period=40s \
--node-monitor-period=5s \
--controllers=*,bootstrapsigner,tokencleaner \
--allocate-node-cidrs=true \
--service-cluster-ip-range=10.90.0.0/16 \
--cluster-cidr=10.100.0.0/16 \
--node-cidr-mask-size-ipv4=24 \
--requestheader-client-ca-file=/opt/kubernetes/cert/front-proxy-ca.pem
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF

Reload systemd

Terminal window
koevn@k8s-master-01:~$ sudo systemctl daemon-reload

Start kube-controller-manager

Terminal window
koevn@k8s-master-01:~$ sudo systemctl enable --now kube-controller-manager.service

Check the kube-controller-manager status

Terminal window
koevn@k8s-master-01:~$ sudo systemctl status kube-controller-manager.service

2.7.0 Configure the kube-scheduler service (all master nodes)

Create the kube-scheduler systemd unit

Terminal window
koevn@k8s-master-01:~$ sudo tee /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/opt/kubernetes/bin/kube-scheduler \
--v=2 \
--bind-address=0.0.0.0 \
--leader-elect=true \
--kubeconfig=/opt/kubernetes/cfg/scheduler.kubeconfig
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF

Reload systemd

Terminal window
koevn@k8s-master-01:~$ sudo systemctl daemon-reload

Start kube-scheduler

Terminal window
koevn@k8s-master-01:~$ sudo systemctl enable --now kube-scheduler.service

Check the kube-scheduler status

Terminal window
koevn@k8s-master-01:~$ sudo systemctl status kube-scheduler.service

2.8.0 Configure TLS bootstrapping

Create the cluster entry

Terminal window
koevn@k8s-master-01:~$ sudo kubectl config set-cluster kubernetes \
--certificate-authority=/opt/cert/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://10.88.12.100:9443 \
--kubeconfig=/opt/kubernetes/cfg/bootstrap-kubelet.kubeconfig

Generate a token

Terminal window
koevn@k8s-master-01:~$ echo "$(head -c 6 /dev/urandom | md5sum | head -c 6)"."$(head -c 16 /dev/urandom | md5sum | head -c 16)"
79b841.0677456fb3b47289

Set the credential

Terminal window
koevn@k8s-master-01:~$ sudo kubectl config set-credentials tls-bootstrap-token-user \
--token=79b841.0677456fb3b47289 \
--kubeconfig=/opt/kubernetes/cfg/bootstrap-kubelet.kubeconfig

Set the context

Terminal window
koevn@k8s-master-01:~$ sudo kubectl config set-context tls-bootstrap-token-user@kubernetes \
--cluster=kubernetes \
--user=tls-bootstrap-token-user \
--kubeconfig=/opt/kubernetes/cfg/bootstrap-kubelet.kubeconfig
koevn@k8s-master-01:~$ sudo kubectl config use-context tls-bootstrap-token-user@kubernetes \
--kubeconfig=/opt/kubernetes/cfg/bootstrap-kubelet.kubeconfig

Configure the current user to manage the cluster

Terminal window
koevn@k8s-master-01:~$ mkdir -pv ~/.kube && sudo cp /opt/kubernetes/cfg/admin.kubeconfig ~/.kube/config && sudo chown $(id -u):$(id -g) ~/.kube/config

Check the cluster status

Terminal window
koevn@k8s-master-01:~$ sudo kubectl get cs # health of the control-plane core components
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
etcd-0               Healthy   ok
controller-manager   Healthy   ok
scheduler            Healthy   ok

Apply the bootstrap token

Terminal window
koevn@k8s-master-01:~$ cat > bootstrap.secret.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-79b841
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet '."
  token-id: 79b841
  token-secret: 0677456fb3b47289
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/stats
  - nodes/log
  - nodes/spec
  - nodes/metrics
  verbs:
  - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kube-apiserver
EOF

Create the bootstrap secret in the cluster

Terminal window
koevn@k8s-master-01:~$ sudo kubectl create -f bootstrap.secret.yaml
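To confirm the token landed where the kubelet bootstrap expects it:

Terminal window
koevn@k8s-master-01:~$ sudo kubectl get secret bootstrap-token-79b841 -n kube-system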

3. Install the basic Kubernetes worker components

3.1.0 Required directory layout

Terminal window
koevn@k8s-worker-01:~$ sudo mkdir -pv /opt/kubernetes/{bin,cert,cfg,etc,manifests}
koevn@k8s-worker-01:~$ sudo tree /opt/kubernetes/ # view the layout
/opt/kubernetes/
├── bin
│   ├── kubectl
│   ├── kubelet
│   └── kube-proxy
├── cert # certificates copied over from a master node
│   ├── ca.pem
│   ├── front-proxy-ca.pem
│   ├── kube-proxy-key.pem
│   └── kube-proxy.pem
├── cfg # kubeconfig files copied over from a master node
│   ├── bootstrap-kubelet.kubeconfig
│   ├── kubelet.kubeconfig
│   └── kube-proxy.kubeconfig
├── etc
│   ├── kubelet-conf.yml
│   └── kube-proxy.yaml
└── manifests
6 directories, 12 files

A Kubernetes worker node only needs the containerd and kubelet components. kube-proxy is not strictly required, but I install it anyway; deploy according to your own needs.

3.2.0 Deploy the kubelet service

This component runs on every node in the cluster; repeat these steps on each of the others. Create the kubelet service file:

Terminal window
koevn@k8s-worker-01:~$ sudo tee /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=containerd.service
[Service]
ExecStart=/opt/kubernetes/bin/kubelet \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap-kubelet.kubeconfig \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--config=/opt/kubernetes/etc/kubelet-conf.yml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--node-labels=node.kubernetes.io/node=
[Install]
WantedBy=multi-user.target
EOF

Create and edit the kubelet config file

Terminal window
koevn@k8s-worker-01:~$ sudo tee /opt/kubernetes/etc/kubelet-conf.yml << EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/cert/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.90.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /opt/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF

Start kubelet

Terminal window
koevn@k8s-worker-01:~$ sudo systemctl daemon-reload # reload systemd units
koevn@k8s-worker-01:~$ sudo systemctl enable --now kubelet.service # enable and immediately start kubelet.service
koevn@k8s-worker-01:~$ sudo systemctl status kubelet.service # check the current state of kubelet.service

3.3.0 Check the cluster node status

Terminal window
koevn@k8s-master-01:~$ sudo kubectl get node
NAME            STATUS     ROLES    AGE   VERSION
k8s-master-01   NotReady   <none>   50s   v1.33.0
k8s-master-02   NotReady   <none>   47s   v1.33.0
k8s-master-03   NotReady   <none>   40s   v1.33.0
k8s-worker-01   NotReady   <none>   35s   v1.33.0
k8s-worker-02   NotReady   <none>   28s   v1.33.0
k8s-worker-03   NotReady   <none>   11s   v1.33.0

Since no CNI network plugin is installed yet, a STATUS of NotReady is normal at this point. Once the CNI plugin is running without errors, the nodes will report Ready.

3.4.0 Check the container runtime version on each node

Terminal window
koevn@k8s-master-01:~$ sudo kubectl describe node | grep Runtime
Container Runtime Version: containerd://2.1.0
Container Runtime Version: containerd://2.1.0
Container Runtime Version: containerd://2.1.0
Container Runtime Version: containerd://2.1.0
Container Runtime Version: containerd://2.1.0
Container Runtime Version: containerd://2.1.0

3.5.0 Deploy the kube-proxy service (optional)

This component also runs on every node in the cluster; repeat on each of the others. Create the kube-proxy service file:

Terminal window
koevn@k8s-worker-01:~$ sudo tee /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/opt/kubernetes/bin/kube-proxy \
--config=/opt/kubernetes/etc/kube-proxy.yaml \
--cluster-cidr=10.100.0.0/16 \
--v=2
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF

Create and edit the kube-proxy config file

Terminal window
koevn@k8s-worker-01:~$ sudo tee /opt/kubernetes/etc/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 10.100.0.0/16
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF

Start kube-proxy

Terminal window
koevn@k8s-worker-01:~$ sudo systemctl daemon-reload # reload systemd units
koevn@k8s-worker-01:~$ sudo systemctl enable --now kube-proxy.service # enable and immediately start kube-proxy.service
koevn@k8s-worker-01:~$ sudo systemctl status kube-proxy.service # check the current state of kube-proxy.service

4. Install the cluster network plugin

⚠️ Note: the network plugin installed here is cilium. Once it takes over the cluster networking, the kube-proxy service must be stopped with systemctl stop kube-proxy.service so it does not interfere with cilium. Deploy cilium from any master node.
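One way to switch kube-proxy off everywhere in one go, assuming SSH access to all nodes:

Terminal window
koevn@k8s-master-01:~$ for node in 10.88.12.60 10.88.12.61 10.88.12.62 10.88.12.63 10.88.12.64 10.88.12.65; do
  ssh ${node} "sudo systemctl disable --now kube-proxy.service"   # stop and disable on each node
done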

4.1.0 Install helm

Terminal window
# Option 1: the official install script
koevn@k8s-master-01:~$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
koevn@k8s-master-01:~$ chmod 700 get_helm.sh
koevn@k8s-master-01:~$ sudo ./get_helm.sh
# Option 2: a downloaded release tarball
koevn@k8s-master-01:~$ tar xvf helm-*-linux-amd64.tar.gz
koevn@k8s-master-01:~$ sudo cp linux-amd64/helm /usr/local/bin/

4.2.0 Install cilium

Terminal window
koevn@k8s-master-01:~$ sudo helm repo add cilium https://helm.cilium.io # add the repo
koevn@k8s-master-01:~$ sudo helm search repo cilium/cilium --versions # list the available versions
koevn@k8s-master-01:~$ sudo helm pull cilium/cilium # pull the cilium chart
koevn@k8s-master-01:~$ sudo tar xvf cilium-*.tgz
koevn@k8s-master-01:~$ cd cilium/

Edit the following values in values.yaml inside the cilium directory

Terminal window
# ------ other settings omitted ------#
kubeProxyReplacement: "true" # uncomment and set to true
k8sServiceHost: "kubernetes.default.svc.cluster.local" # this domain must be bound into your apiserver certificate
k8sServicePort: "9443" # the apiserver (haproxy) port
ipv4NativeRoutingCIDR: "10.100.0.0/16" # the Pod IPv4 CIDR
hubble: # enable specific Hubble metrics
  metrics:
    enabled:
    - dns
    - drop
    - tcp
    - flow
    - icmp
    - httpV2
# ------ other settings omitted ------#

⚠️ Note: for the repository parameters elsewhere in the config, if your network access is poor, it is best to run a private harbor registry, so the cluster's containerd does not fail to pull images and leave Pods unable to run.

Other features are enabled by following the parameter hierarchy in values.yaml:

  • Enable the Hubble Relay aggregation service: hubble.relay.enabled=true
  • Enable the Hubble UI front end (visualization): hubble.ui.enabled=true
  • Enable Cilium's Prometheus metrics output: prometheus.enabled=true
  • Enable the operator's Prometheus metrics output: operator.prometheus.enabled=true
  • Enable Hubble: hubble.enabled=true

The list above is a shorthand for the values.yaml hierarchy; the same switches can also be enabled on the command line with --set hubble.relay.enabled=true, as in the sketch below.
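For example, the equivalent of the list above expressed as --set flags on a helm invocation:

Terminal window
koevn@k8s-master-01:~$ sudo helm upgrade --install cilium ./cilium \
--namespace kube-system \
--set hubble.enabled=true \
--set hubble.relay.enabled=true \
--set hubble.ui.enabled=true \
--set prometheus.enabled=true \
--set operator.prometheus.enabled=true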

Install cilium

Terminal window
koevn@k8s-master-01:~$ sudo helm upgrade --install cilium ./cilium \
--namespace kube-system \
--create-namespace \
-f cilium/values.yaml

Check the status

Terminal window
koevn@k8s-master-01:~$ sudo kubectl get pod -A | grep cil
cilium-monitoring   grafana-679cd8bff-w9xm9            1/1   Running   0              5d
cilium-monitoring   prometheus-87d4f66f7-6lmrz         1/1   Running   0              5d
kube-system         cilium-envoy-4sjq5                 1/1   Running   0              5d
kube-system         cilium-envoy-64694                 1/1   Running   0              5d
kube-system         cilium-envoy-fzjjw                 1/1   Running   0              5d
kube-system         cilium-envoy-twtw6                 1/1   Running   0              5d
kube-system         cilium-envoy-vwstr                 1/1   Running   0              5d
kube-system         cilium-envoy-whck6                 1/1   Running   0              5d
kube-system         cilium-fkjcm                       1/1   Running   0              5d
kube-system         cilium-h75vq                       1/1   Running   0              5d
kube-system         cilium-hcx4q                       1/1   Running   0              5d
kube-system         cilium-jz44w                       1/1   Running   0              5d
kube-system         cilium-operator-58d8755c44-hnwmd   1/1   Running   57 (84m ago)   5d
kube-system         cilium-operator-58d8755c44-xmg9f   1/1   Running   54 (82m ago)   5d
kube-system         cilium-qx5mn                       1/1   Running   0              5d
kube-system         cilium-wqmzc                       1/1   Running   0              5d

5. Deploy the monitoring dashboards

5.1.0 Create the monitoring services

Terminal window
koevn@k8s-master-01:~$ sudo wget https://raw.githubusercontent.com/cilium/cilium/1.12.1/examples/kubernetes/addons/prometheus/monitoring-example.yaml
koevn@k8s-master-01:~$ sudo kubectl apply -f monitoring-example.yaml

5.2.0 Change the cluster Service type to NodePort

View the current Services

Terminal window
koevn@k8s-master-01:~$ sudo kubectl get svc -A
NAMESPACE           NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
cilium-monitoring   grafana          ClusterIP   10.90.138.192   <none>        3000/TCP   1d
cilium-monitoring   prometheus       ClusterIP   10.90.145.89    <none>        9090/TCP   1d
default             kubernetes       ClusterIP   10.90.0.1       <none>        443/TCP    1d
kube-system         cilium-envoy     ClusterIP   None            <none>        9964/TCP   1d
kube-system         coredns          ClusterIP   10.90.0.10      <none>        53/UDP     1d
kube-system         hubble-metrics   ClusterIP   None            <none>        9965/TCP   1d
kube-system         hubble-peer      ClusterIP   10.90.63.177    <none>        443/TCP    1d
kube-system         hubble-relay     ClusterIP   10.90.157.4     <none>        80/TCP     1d
kube-system         hubble-ui        ClusterIP   10.90.13.110    <none>        80/TCP     1d
kube-system         metrics-server   ClusterIP   10.90.61.226    <none>        443/TCP    1d

Change the Service type

Terminal window
koevn@k8s-master-01:~$ sudo kubectl edit svc -n cilium-monitoring grafana
koevn@k8s-master-01:~$ sudo kubectl edit svc -n cilium-monitoring prometheus
koevn@k8s-master-01:~$ sudo kubectl edit svc -n kube-system hubble-ui

Running any of the commands above opens the resource in YAML form; change type: ClusterIP near the bottom to type: NodePort (change it back to type: ClusterIP to undo). Then view the Services again (a non-interactive alternative follows the listing):

Terminal window
NAMESPACE           NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
cilium-monitoring   grafana          NodePort    10.90.138.192   <none>        3000:31240/TCP   1d
cilium-monitoring   prometheus       NodePort    10.90.145.89    <none>        9090:32044/TCP   1d
default             kubernetes       ClusterIP   10.90.0.1       <none>        443/TCP          1d
kube-system         cilium-envoy     ClusterIP   None            <none>        9964/TCP         1d
kube-system         coredns          ClusterIP   10.90.0.10      <none>        53/UDP           1d
kube-system         hubble-metrics   ClusterIP   None            <none>        9965/TCP         1d
kube-system         hubble-peer      ClusterIP   10.90.63.177    <none>        443/TCP          1d
kube-system         hubble-relay     ClusterIP   10.90.157.4     <none>        80/TCP           1d
kube-system         hubble-ui        NodePort    10.90.13.110    <none>        80:32166/TCP     1d
kube-system         metrics-server   ClusterIP   10.90.61.226    <none>        443/TCP          1d
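As a non-interactive alternative to kubectl edit, the same change can be made with kubectl patch, for example for grafana:

Terminal window
koevn@k8s-master-01:~$ sudo kubectl patch svc grafana -n cilium-monitoring \
-p '{"spec":{"type":"NodePort"}}'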

Browse to http://10.88.12.61:31240 for the grafana service.

Browse to http://10.88.12.61:32044 for the prometheus service.

Browse to http://10.88.12.61:32166 for the hubble-ui service.

6. Install CoreDNS

6.1.0 Pull the CoreDNS chart

Terminal window
koevn@k8s-master-01:~$ sudo helm repo add coredns https://coredns.github.io/helm
koevn@k8s-master-01:~$ sudo helm pull coredns/coredns
koevn@k8s-master-01:~$ sudo tar xvf coredns-*.tgz
koevn@k8s-master-01:~$ cd coredns/

Edit values.yaml

Terminal window
koevn@k8s-master-01:~/coredns$ sudo vim values.yaml
service:
  clusterIP: "10.90.0.10" # uncomment and set this IP

Install CoreDNS

Terminal window
koevn@k8s-master-01:~/coredns$ cd ..
koevn@k8s-master-01:~$ sudo helm install coredns ./coredns/ -n kube-system \
-f coredns/values.yaml

7. Install Metrics Server

Download the metrics-server manifest

Terminal window
koevn@k8s-master-01:~$ sudo wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml \
-O metrics-server.yaml

Edit metrics-server.yaml

# Around line 134 of the file
- args:
  - --cert-dir=/tmp
  - --secure-port=10250
  - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
  - --kubelet-use-node-status-port
  - --metric-resolution=15s
  - --kubelet-insecure-tls
  - --requestheader-client-ca-file=/opt/kubernetes/cert/front-proxy-ca.pem
  - --requestheader-username-headers=X-Remote-User
  - --requestheader-group-headers=X-Remote-Group
  - --requestheader-extra-headers-prefix=X-Remote-Extra-
# Around line 182 of the file
  volumeMounts:
  - mountPath: /tmp
    name: tmp-dir
  - name: ca-ssl
    mountPath: /opt/kubernetes/cert
volumes:
- emptyDir: {}
  name: tmp-dir
- name: ca-ssl
  hostPath:
    path: /opt/kubernetes/cert

Then apply it

Terminal window
koevn@k8s-master-01:~$ sudo kubectl apply -f metrics-server.yaml
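It can take a short while for metrics-server to come up and start serving; waiting on the rollout avoids confusing errors from kubectl top:

Terminal window
koevn@k8s-master-01:~$ sudo kubectl -n kube-system rollout status deploy/metrics-server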

Check resource usage

Terminal window
koevn@k8s-master-01:~$ sudo kubectl top node
NAME            CPU(cores)   CPU(%)   MEMORY(bytes)   MEMORY(%)
k8s-master-01   212m         5%       2318Mi          61%
k8s-master-02   148m         3%       2180Mi          57%
k8s-master-03   178m         4%       2103Mi          55%
k8s-worker-01   38m          1%       1377Mi          36%
k8s-worker-02   40m          2%       1452Mi          38%
k8s-worker-03   32m          1%       1475Mi          39%

8. Verify in-cluster communication

8.1.0 Deploy a test Pod

Terminal window
koevn@k8s-master-01:~$ cat > busybox.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: harbor.koevn.com/library/busybox:1.37.0 # I pull from a private registry here
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

Then deploy the Pod

Terminal window
koevn@k8s-master-01:~$ sudo kubectl apply -f busybox.yaml

Check the Pod

Terminal window
koevn@k8s-master-01:~$ sudo kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          36s

8.2.0 Resolve Services in other namespaces from the Pod

View the Services

Terminal window
koevn@k8s-master-01:~$ sudo kubectl get svc -A
NAMESPACE           NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
cilium-monitoring   grafana          NodePort    10.90.138.192   <none>        3000:31240/TCP   1d
cilium-monitoring   prometheus       NodePort    10.90.145.89    <none>        9090:32044/TCP   1d
default             kubernetes       ClusterIP   10.90.0.1       <none>        443/TCP          1d
kube-system         cilium-envoy     ClusterIP   None            <none>        9964/TCP         1d
kube-system         coredns          ClusterIP   10.90.0.10      <none>        53/UDP           1d
kube-system         hubble-metrics   ClusterIP   None            <none>        9965/TCP         1d
kube-system         hubble-peer      ClusterIP   10.90.63.177    <none>        443/TCP          1d
kube-system         hubble-relay     ClusterIP   10.90.157.4     <none>        80/TCP           1d
kube-system         hubble-ui        NodePort    10.90.13.110    <none>        80:32166/TCP     1d
kube-system         metrics-server   ClusterIP   10.90.61.226    <none>        443/TCP          1d

Test resolution

Terminal window
koevn@k8s-master-01:~$ sudo kubectl exec busybox -n default \
-- nslookup hubble-ui.kube-system.svc.cluster.local
Server: 10.90.0.10
Address: 10.90.0.10:53
Name: hubble-ui.kube-system.svc.cluster.local
Address: 10.90.13.110
koevn@k8s-master-01:~$ sudo kubectl exec busybox -n default \
-- nslookup grafana.cilium-monitoring.svc.cluster.local
Server: 10.90.0.10
Address: 10.90.0.10:53
Name: grafana.cilium-monitoring.svc.cluster.local
Address: 10.90.138.192
koevn@k8s-master-01:~$ sudo kubectl exec busybox -n default \
-- nslookup kubernetes.default.svc.cluster.local
Server: 10.90.0.10
Address: 10.90.0.10:53
Name: kubernetes.default.svc.cluster.local
Address: 10.90.0.1

8.3.0 Test the kubernetes 443 and coredns 53 ports from every node

Terminal window
koevn@k8s-master-01:~$ sudo telnet 10.90.0.1 443
Trying 10.90.0.1...
Connected to 10.90.0.1.
Escape character is '^]'
koevn@k8s-master-01:~$ sudo nc -zvu 10.90.0.10 53
10.90.0.10: inverse host lookup failed: Unknown host
(UNKNOWN) [10.90.0.10] 53 (domain) open
koevn@k8s-worker-03:~$ sudo telnet 10.90.0.1 443
Trying 10.90.0.1...
Connected to 10.90.0.1.
Escape character is '^]'
koevn@k8s-worker-03:~$ sudo nc -zvu 10.90.0.10 53
10.90.0.10: inverse host lookup failed: Unknown host
(UNKNOWN) [10.90.0.10] 53 (domain) open

8.4.0 Test Pod-to-Pod connectivity

View the Pod IPs

Terminal window
koevn@k8s-master-01:~$ sudo kubectl get pods -A -o \
custom-columns="NAMESPACE:.metadata.namespace,STATUS:.status.phase,NAME:.metadata.name,IP:.status.podIP"
NAMESPACE           STATUS    NAME                               IP
cilium-monitoring   Running   grafana-679cd8bff-w9xm9            10.100.4.249
cilium-monitoring   Running   prometheus-87d4f66f7-6lmrz         10.100.3.238
default             Running   busybox                            10.100.4.232
kube-system         Running   cilium-envoy-4sjq5                 10.88.12.63
kube-system         Running   cilium-envoy-64694                 10.88.12.60
kube-system         Running   cilium-envoy-fzjjw                 10.88.12.62
kube-system         Running   cilium-envoy-twtw6                 10.88.12.61
kube-system         Running   cilium-envoy-vwstr                 10.88.12.65
kube-system         Running   cilium-envoy-whck6                 10.88.12.64
kube-system         Running   cilium-fkjcm                       10.88.12.63
kube-system         Running   cilium-h75vq                       10.88.12.64
kube-system         Running   cilium-hcx4q                       10.88.12.61
kube-system         Running   cilium-jz44w                       10.88.12.65
kube-system         Running   cilium-operator-58d8755c44-hnwmd   10.88.12.60
kube-system         Running   cilium-operator-58d8755c44-xmg9f   10.88.12.65
kube-system         Running   cilium-qx5mn                       10.88.12.62
kube-system         Running   cilium-wqmzc                       10.88.12.60
kube-system         Running   coredns-6f44546d75-qnl9d           10.100.0.187
kube-system         Running   hubble-relay-7cd9d88674-2tdcc      10.100.3.64
kube-system         Running   hubble-ui-9f5cdb9bd-8fwsx          10.100.2.214
kube-system         Running   metrics-server-76cb66cbf9-xfbzq    10.100.5.187

Exec into the busybox Pod and test the network

Terminal window
koevn@k8s-master-01:~$ sudo kubectl exec -ti busybox -- sh
/ # ping 10.100.4.249
PING 10.100.4.249 (10.100.4.249): 56 data bytes
64 bytes from 10.100.4.249: seq=0 ttl=63 time=0.472 ms
64 bytes from 10.100.4.249: seq=1 ttl=63 time=0.063 ms
64 bytes from 10.100.4.249: seq=2 ttl=63 time=0.064 ms
64 bytes from 10.100.4.249: seq=3 ttl=63 time=0.062 ms
/ # ping 10.88.12.63
PING 10.88.12.63 (10.88.12.63): 56 data bytes
64 bytes from 10.88.12.63: seq=0 ttl=62 time=0.352 ms
64 bytes from 10.88.12.63: seq=1 ttl=62 time=0.358 ms
64 bytes from 10.88.12.63: seq=2 ttl=62 time=0.405 ms
64 bytes from 10.88.12.63: seq=3 ttl=62 time=0.314 ms

9. Install the dashboard

9.1.0 Pull the dashboard chart

Terminal window
koevn@k8s-master-01:~$ sudo helm repo add kubernetes-dashboard \
https://kubernetes.github.io/dashboard/
koevn@k8s-master-01:~$ sudo helm search repo \
kubernetes-dashboard/kubernetes-dashboard --versions # list the available versions
NAME                                        CHART VERSION   APP VERSION   DESCRIPTION
kubernetes-dashboard/kubernetes-dashboard   7.0.0                         General-purpose web UI for Kubernetes clusters
kubernetes-dashboard/kubernetes-dashboard   6.0.8           v2.7.0        General-purpose web UI for Kubernetes clusters
kubernetes-dashboard/kubernetes-dashboard   6.0.7           v2.7.0        General-purpose web UI for Kubernetes clusters
koevn@k8s-master-01:~$ sudo helm pull kubernetes-dashboard/kubernetes-dashboard --version 6.0.8 # pin the version
koevn@k8s-master-01:~$ sudo tar xvf kubernetes-dashboard-*.tgz
koevn@k8s-master-01:~$ cd kubernetes-dashboard

Edit values.yaml

Terminal window
image:
  repository: harbor.koevn.com/library/kubernetesui/dashboard # I pull from a private registry here
  tag: "v2.7.0" # empty by default; pin a version yourself

9.2.0 Deploy the dashboard

Terminal window
koevn@k8s-master-01:~/kubernetes-dashboard$ cd ..
koevn@k8s-master-01:~$ sudo helm install dashboard kubernetes-dashboard/kubernetes-dashboard \
--version 6.0.8 \
--namespace kubernetes-dashboard \
--create-namespace

9.3.0 Temporary token

Terminal window
koevn@k8s-master-01:~$ cat > dashboard-user.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF

Apply it

Terminal window
koevn@k8s-master-01:~$ sudo kubectl apply -f dashboard-user.yaml

Create a temporary token

Terminal window
koevn@k8s-master-01:~$ sudo kubectl -n kube-system create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6IjRkV3lkZTU1dF9mazczemwwaUdxUElPWDZhejFyaDh2ZzRhZmN5RlN3T0EifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzUwOTExMzc1LCJpYXQiOjE3NTA5MDc3NzUsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiMmQwZmViMzItNDJlNy00ODE0LWJmYjUtOGU5MTFhNWZhZDM2Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiZmEwZGFhNjAtNWEzNC00NTFjLWFiYmUtNTQ2Y2EwOWVkNWQyIn19LCJuYmYiOjE3NTA5MDc3NzUsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbi11c2VyIn0.XbF8s70UTzrOnf8TrkyC7-3vdxLKQU1aQiCRGgRWyzh4e-A0uFgjVfQ17IrFsZEOVpE8H9ydNc3dZbP81apnGegeFZ42J7KmUkSUJnh5UbiKjmfWwK9ysoP-bba5nnq1uWB_iFR6r4vr6Q_B4-YyAn1DVy70VNaHrfyakyvpJ69L-5eH2jHXn68uizXdi4brf2YEAwDlmBWufeQqtPx7pdnF5HNMyt56oxQb2S2gNrgwLvb8WV2cIKE3DvjQYfcQeaufWK3gn0y-2h5-3z3r4084vHrXAYJRkPmOKy7Fh-DZ8t1g7icNfDRg4geI48WrMH2vOk3E_cpjRxS7dC5P9A

9.4.0 Create a long-lived token

Terminal window
koevn@k8s-master-01:~$ cat > dashboard-user-token.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: admin-user
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: "admin-user"
type: kubernetes.io/service-account-token
EOF

Apply it

Terminal window
koevn@k8s-master-01:~$ sudo kubectl apply -f dashboard-user-token.yaml

Retrieve the token

Terminal window
koevn@k8s-master-01:~$ sudo kubectl get secret admin-user -n kube-system -o jsonpath='{.data.token}' | base64 -d
eyJhbGciOiJSUzI1NiIsImtpZCI6IjRkV3lkZTU1dF9mazczemwwaUdxUElPWDZhejFyaDh2ZzRhZmN5RlN3T0EifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzUwOTExNzM5LCJpYXQiOjE3NTA5MDgxMzksImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiMmZmNTE5NGEtOTlkMC00MDJmLTljNWUtMGQxOTMyZDkxNjgwIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiZmEwZGFhNjAtNWEzNC00NTFjLWFiYmUtNTQ2Y2EwOWVkNWQyIn19LCJuYmYiOjE3NTA5MDgxMzksInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbi11c2VyIn0.LJa37eblpk_OBWGqRgX2f_aCMiCrrpjY57dW8bskKUWu7ldLhgR6JIbICI0NVnvX4by3RX9v_FGnAPwU821VDp05oYT1KcTDXV1BC57G4QGL4kS9tBOrmRyXY0jxB8ETRmGx8ECiCJqNfrVdT99dm8oaFqJx1zq6jut70UwhxQCIh7C-QVqg6Gybbb3a9x25M2YvVHWStduN_swMOQxyQDBRtA0ARAyClu73o36pDCs_a56GizGspA4bvHpHPT-_y1i3EkeVjMsEl6JQ0PeJNQiM4fBvtJ2I_0kqoEHNMRYzZXEQNETXF9vqjkiEg7XBlKe1L2Ke1-xwK5ZBKFnPOg

9.5.0 Log in to the dashboard

Change the dashboard Service type

Terminal window
koevn@k8s-master-01:~$ sudo kubectl edit svc -n kubernetes-dashboard kubernetes-dashboard
  type: NodePort   # change the type field to NodePort

Check the mapped port

Terminal window
koevn@k8s-master-01:~$ sudo kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.90.211.135   <none>        443:32744/TCP   5d

Browse to https://10.88.12.61:32744 (any node IP plus the mapped NodePort) and sign in with the token.