Setting Up a Highly Available Kubernetes Cluster
Setting up a highly available Kubernetes cluster mostly follows a few standard patterns. This article reproduces the procedure from a reference article; several problems came up along the way and were worked through by debugging. The tricky parts are networking and Kubernetes authorization.
Network Planning
All nodes are in a single low-latency subnet:
| Hostname | IP | Description |
| --- | --- | --- |
| k8s-lb-01 | 192.168.0.151, 192.168.0.170 (VIP) | HAProxy + Keepalived (master) |
| k8s-lb-02 | 192.168.0.161, 192.168.0.170 (VIP) | HAProxy + Keepalived (backup) |
| k8s-master-01 | 192.168.0.152 | Master node 1 |
| k8s-master-02 | 192.168.0.153 | Master node 2 |
| k8s-master-03 | 192.168.0.162 | Master node 3 |
| k8s-worker-01 | 192.168.0.154 | Worker node 1 |
Notes:
- An external HAProxy is used as the cluster load balancer.
- Two HAProxy + Keepalived nodes provide high availability for the load balancer; the Keepalived VIP is 192.168.0.170, which serves as the single entry address of the whole cluster.
- Etcd is deployed alongside each master node (stacked etcd topology). Etcd high availability needs at least 3 nodes (an odd number), so there are at least 3 master nodes.
- One worker node is enough, since this is only a test cluster.
- All servers run CentOS 7 x86_64.
- All machines are in the 192.168.0.0/24 subnet and can reach each other.
- Docker 19.03.11
- Kubernetes v1.19.3
- Calico as the network plugin
- HAProxy for load balancing
Cluster control-plane endpoint:
192.168.0.170:6443
Cluster API servers:
192.168.0.152:6443
192.168.0.153:6443
192.168.0.162:6443
Prepare a Base Image for Cloning
Clone the k8s-deploy project
# Install Git
yum install git -y

# Clone the k8s-deploy project
mkdir -p ~/k8s
cd ~/k8s
git clone https://github.com/cookcodeblog/k8s-deploy.git

# Make the scripts executable
cd k8s-deploy/kubeadm_v1.19.3
find . -name '*.sh' -exec chmod u+x {} \;
Install Kubernetes
# Pre-installation checks and configuration
bash 01_pre_check_and_configure.sh

# Install Docker
bash 02_install_docker.sh

# Install kubeadm, kubectl and kubelet
bash 03_install_kubernetes.sh

# Pull the images needed by the Kubernetes cluster
bash 04_pull_kubernetes_images_from_aliyun.sh

# Pull the Calico images
bash 04_pull_calico_images.sh

iptables -P FORWARD ACCEPT
Clone the Server as a Base Image
Shut down the machine and clone this image to create k8s-master-02, k8s-master-03, k8s-worker-01, k8s-lb-01 and k8s-lb-02. Then change every machine's IP address to its assigned static address.
vi /etc/sysconfig/network-scripts/ifcfg-ens33

BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.0.153
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
DNS1=192.168.0.1
DNS2=8.8.8.8
# Restart the network service
systemctl restart network

Deploy the Load Balancer Nodes (k8s-lb-01, k8s-lb-02)
Install and Configure HAProxy

Install HAProxy:
yum install haproxy -y
Edit /etc/haproxy/haproxy.cfg: remove the default proxy settings and add a reverse-proxy section that load-balances to the Kubernetes master nodes:
#---------------------------------------------------------------------
# apiserver frontend which proxys to the masters
#---------------------------------------------------------------------
frontend apiserver
    bind *:6443
    mode tcp
    option tcplog
    default_backend apiserver

#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance roundrobin
    server k8s-master-01 192.168.0.152:6443 check
    server k8s-master-02 192.168.0.153:6443 check
    server k8s-master-03 192.168.0.162:6443 check
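Before starting the service, HAProxy can validate the configuration syntax; a quick sanity check (the -c flag only parses the config and exits):

haproxy -c -f /etc/haproxy/haproxy.cfg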
Start the HAProxy service:
systemctl daemon-reload
systemctl enable haproxy
systemctl start haproxy
systemctl status haproxy -l
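To confirm HAProxy is actually listening on port 6443 (at this stage the backend health checks will fail because no master node is up yet, which is expected):

ss -lntp | grep 6443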
Install and Configure Keepalived

Install Keepalived:
yum install keepalived -y
Configure the Linux kernel parameters:
cat <<EOF | sudo tee /etc/sysctl.d/keepalived.conf
net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind = 1
EOF

# ip_forward enables IP forwarding
# ip_nonlocal_bind allows binding to a non-local (floating) IP, i.e. the VIP

sudo sysctl --system
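A quick check that both kernel parameters took effect:

sysctl net.ipv4.ip_forward net.ipv4.ip_nonlocal_bind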
Create check_apiserver.sh under /etc/keepalived, which Keepalived will use as its health check:
#!/bin/sh

APISERVER_VIP=$1
APISERVER_DEST_PORT=$2

errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

curl --silent --max-time 2 --insecure https://localhost:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://localhost:${APISERVER_DEST_PORT}/"
if ip addr | grep -q ${APISERVER_VIP}; then
    curl --silent --max-time 2 --insecure https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/"
fi
chmod +x /etc/keepalived/check_apiserver.sh
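The script can also be exercised manually before Keepalived uses it; with no API server reachable yet it is expected to fail with a non-zero exit code:

/etc/keepalived/check_apiserver.sh 192.168.0.170 6443; echo "exit code: $?"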
On the Keepalived master node, the configuration in /etc/keepalived/keepalived.conf is:

! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh 192.168.0.170 6443"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 101
    authentication {
        auth_type PASS
        auth_pass Keep@lived
    }
    virtual_ipaddress {
        192.168.0.170
    }
    track_script {
        check_apiserver
    }
}
On the Keepalived backup node, the configuration in /etc/keepalived/keepalived.conf is:

! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh 192.168.0.170 6443"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 100
    authentication {
        auth_type PASS
        auth_pass Keep@lived
    }
    virtual_ipaddress {
        192.168.0.170
    }
    track_script {
        check_apiserver
    }
}
Make HAProxy use the Keepalived VIP: modify the HAProxy configuration so that the frontend binds to the VIP:
frontend apiserver
    bind 192.168.0.170:6443
Start Keepalived:
systemctl daemon-reload
systemctl enable keepalived
systemctl start keepalived
systemctl status keepalived -l

systemctl daemon-reload
systemctl restart haproxy
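To verify that the VIP landed on the master LB node and that failover works, a rough check (run on k8s-lb-01 and k8s-lb-02 respectively; the interface name ens33 comes from the Keepalived configuration above):

# On k8s-lb-01: the VIP should be attached to ens33
ip addr show ens33 | grep 192.168.0.170
# Simulate a failure on k8s-lb-01
systemctl stop keepalived
# On k8s-lb-02: the VIP should appear within a few seconds
ip addr show ens33 | grep 192.168.0.170
# Restore k8s-lb-01 afterwards
systemctl start keepalived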
Initialize the Cluster (on k8s-master-02)
kubeadm init
cd ~/k8s
cd k8s-deploy/kubeadm_v1.19.3

# 05_kubeadm_init.sh
bash 05_kubeadm_init.sh 192.168.0.170:6443
The service-cidr and pod-network-cidr in 05_kubeadm_init.sh may need to be changed; they must not overlap with the CIDR of the host network. The pod-network-cidr is needed later when installing the Calico network plugin.
kubeadm init \
    --kubernetes-version=v1.19.3 \
    --control-plane-endpoint=${CONTROL_PLANE_ENDPOINT} \
    --service-cidr=10.96.0.0/16 \
    --pod-network-cidr=10.244.0.0/16 \
    --image-repository=${IMAGE_REPOSITORY} \
    --upload-certs
Record the kubeadm join commands from the script output; they are needed later.
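On this first master, kubeadm init also prints the standard steps to make kubectl usable for the current user (these are the stock kubeadm instructions; the repo's enable_kubectl_master.sh used later presumably wraps something similar):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config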
Install the Calico Network Plugin
First modify the following values in calico/calico-ens33.xml:
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
- name: KUBERNETES_SERVICE_HOST
  value: "192.168.0.170"
- name: KUBERNETES_SERVICE_PORT
  value: "6443"
- name: KUBERNETES_SERVICE_PORT_HTTPS
  value: "6443"
# supports eth* and ens* network interfaces
bash 06_install_calico.sh ens33
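To verify that Calico came up (assuming the manifest installs into kube-system like the stock calico.yaml; pod names will differ), the calico pods should reach Running and the node should turn Ready:

kubectl get pods -n kube-system -o wide | grep calico
kubectl get nodes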
Deploy the Worker Node
Run the command for joining a worker node that kubeadm init printed in its output. Example:
kubeadm join 192.168.0.170:6443 --token r7w69v.3e1nweyk81h5zj6y \
    --discovery-token-ca-cert-hash sha256:1234a2317d27f0a4c6bcf5f284416a2fb3e8f3bd61aa88bc279a4f6ef18e09a1
Make the kubectl command work on the worker node:

bash enable_kubectl_worker.sh
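After the join completes, the worker should appear in the node list (it may stay NotReady for a short while until its Calico pod is running):

kubectl get nodes -o wide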
Deploy the Other Master Nodes
Run the command for joining a control-plane node that kubeadm init printed in its output. Example:
kubeadm join 192.168.0.170:6443 --token r7w69v.3e1nweyk81h5zj6y \
    --discovery-token-ca-cert-hash sha256:1234a2317d27f0a4c6bcf5f284416a2fb3e8f3bd61aa88bc279a4f6ef18e09a1 \
    --control-plane --certificate-key 0e48107fbcd11cda60a5c2b76ae488b4ebf57223a4001acac799996740a6044e
If you have lost the kubeadm join command, run kubeadm token create --print-join-command to regenerate it, and run kubeadm init phase upload-certs --upload-certs to obtain a new certificate-key.
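For reference, the two commands mentioned above:

# Regenerate the join command (token + CA cert hash)
kubeadm token create --print-join-command
# Re-upload the control-plane certificates and print a new certificate-key
kubeadm init phase upload-certs --upload-certs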
Make the kubectl command work on the master node:

bash enable_kubectl_master.sh
Check the Cluster Deployment
# Display cluster info
kubectl cluster-info

# Nodes
kubectl get nodes

# Display pods
kubectl get pods --all-namespaces -o wide

# Check pod status in case any pods are not in Running state
kubectl get pods --all-namespaces | grep -v Running
(Optional) Install the Dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
Create a file named dashboard-adminuser.yaml and apply it to add an admin user:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
Then run the following commands and copy the token from the output to log in to the Dashboard:
kubectl apply -f dashboard-adminuser.yaml
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Use the following command to expose the Dashboard outside the cluster:
kubectl -n kubernetes-dashboard port-forward kubernetes-dashboard-7cb9fd9999-gtjn7 --address 0.0.0.0 8443
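The pod name above is specific to one deployment and changes on every rollout; forwarding to the Service avoids hard-coding it (a sketch, assuming the default Service kubernetes-dashboard exposing port 443 as in recommended.yaml):

kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard --address 0.0.0.0 8443:443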
The Dashboard can now be accessed at https://k8s-master-01:8443
Most of the content is based on 在CentOS7上用kubeadm HAProxy Keepalived 安装多Master节点的高可用Kubernetes集群 (Installing a multi-master highly available Kubernetes cluster on CentOS 7 with kubeadm, HAProxy and Keepalived).
This work is licensed under a Creative Commons Attribution 4.0 International License.