Using EKS on AWS


Hosting Kubernetes on AWS EKS is fairly involved. Following the steps below, you can create an EKS setup that covers most common use cases.

1 Create an IAM user (root user)

Create an IAM user in AWS; grant it only the permissions it actually needs.

In the AWS Management Console, click "Add users":

Leave the remaining pages at their defaults. Finally, keep the downloaded CSV file: it contains the Access Key and Secret Access Key that the AWS CLI will need later.

2 Create policies and roles (root user)
2.1 Create the EKS cluster role

Create a role for the EKS cluster named "testEKSClusterRole" with a single attached policy: AmazonEKSClusterPolicy.
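
If you prefer the CLI over the console, the same role can be created with a short sketch like the one below (role and policy names are the ones used in this article; the trust policy is the standard one that lets eks.amazonaws.com assume the role):

# Trust policy allowing the EKS service to assume the role
cat > eks-cluster-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Principal": { "Service": "eks.amazonaws.com" }, "Action": "sts:AssumeRole" }
  ]
}
EOF
aws iam create-role --role-name testEKSClusterRole \
    --assume-role-policy-document file://eks-cluster-trust.json
aws iam attach-role-policy --role-name testEKSClusterRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy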

2.2 Create the node group role

Create a role named "testEKSNodeRole" with the following policies (a CLI sketch follows the list):
AmazonEKSWorkerNodePolicy
AmazonEC2ContainerRegistryReadOnly
AmazonEKS_CNI_Policy
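
If you prefer the CLI, a similar sketch works for the node role (the trust policy here lets ec2.amazonaws.com assume the role, since node group instances run on EC2):

# Trust policy allowing EC2 worker nodes to assume the role
cat > eks-node-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" }
  ]
}
EOF
aws iam create-role --role-name testEKSNodeRole \
    --assume-role-policy-document file://eks-node-trust.json
# Attach the three managed policies listed above
for policy in AmazonEKSWorkerNodePolicy AmazonEC2ContainerRegistryReadOnly AmazonEKS_CNI_Policy; do
  aws iam attach-role-policy --role-name testEKSNodeRole \
      --policy-arn "arn:aws:iam::aws:policy/$policy"
done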

2.3 Grant permissions to the IAM user

The user needs the following four permission sets. Alternatively, create a user group, grant the permissions to the group, and add the user to it. (A CLI sketch for creating and attaching these policies follows the second policy document below.)

Attach the managed policies "AmazonEC2FullAccess" and "AmazonVPCReadOnlyAccess".

Add a custom policy named "TestEKSPolicy" with the following content:

(replace the account ID 675892200046 with your own)

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "eks:*",
            "Resource": "*"
        },
        {
            "Action": [
                "ssm:GetParameter",
                "ssm:GetParameters"
            ],
            "Resource": [
                "arn:aws:ssm:*:675892200046:parameter/aws/*",
                "arn:aws:ssm:*::parameter/aws/*"
            ],
            "Effect": "Allow"
        },
        {
            "Action": [
                "kms:CreateGrant",
                "kms:DescribeKey"
            ],
            "Resource": "*",
            "Effect": "Allow"
        },
        {
            "Action": [
                "logs:PutRetentionPolicy"
            ],
            "Resource": "*",
            "Effect": "Allow"
        }
    ]
}

Add a custom policy named "IamLimitedAccess" with the following content:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "iam:CreateServiceLinkedRole",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "iam:AWSServiceName": [
                        "eks.amazonaws.com",
                        "eks-nodegroup.amazonaws.com",
                        "eks-fargate.amazonaws.com"
                    ]
                }
            }
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "iam:CreateInstanceProfile",
                "iam:TagRole",
                "iam:RemoveRoleFromInstanceProfile",
                "iam:DeletePolicy",
                "iam:CreateRole",
                "iam:AttachRolePolicy",
                "iam:PutRolePolicy",
                "iam:AddRoleToInstanceProfile",
                "iam:ListInstanceProfilesForRole",
                "iam:PassRole",
                "iam:DetachRolePolicy",
                "iam:DeleteRolePolicy",
                "iam:ListAttachedRolePolicies",
                "iam:DeleteOpenIDConnectProvider",
                "iam:DeleteInstanceProfile",
                "iam:GetRole",
                "iam:GetInstanceProfile",
                "iam:GetPolicy",
                "iam:DeleteRole",
                "iam:ListInstanceProfiles",
                "iam:CreateOpenIDConnectProvider",
                "iam:CreatePolicy",
                "iam:ListPolicyVersions",
                "iam:GetOpenIDConnectProvider",
                "iam:TagOpenIDConnectProvider",
                "iam:GetRolePolicy"
            ],
            "Resource": [
                "arn:aws:iam::675892200046:role/testEKSNodeRole",
                "arn:aws:iam::675892200046:role/testEKSClusterRole",
                "arn:aws:iam::675892200046:role/aws-service-role/eks-nodegroup.amazonaws.com/AWSServiceRoleForAmazonEKSNodegroup",
                "arn:aws:iam::675892200046:instance-profile/*",
                "arn:aws:iam::675892200046:policy/*",
                "arn:aws:iam::675892200046:oidc-provider/*"
            ]
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": "iam:GetRole",
            "Resource": "arn:aws:iam::675892200046:role/*"
        },
        {
            "Sid": "VisualEditor3",
            "Effect": "Allow",
            "Action": "iam:ListRoles",
            "Resource": "*"
        }
    ]
}
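
If you are scripting this step, the two custom policies can be created from the JSON above (saved as local files) and attached to the user together with the managed policies. This is only a sketch; the file names are arbitrary and TestEKSUser is the user created in step 1:

ACCOUNT_ID=675892200046   # replace with your account ID
aws iam create-policy --policy-name TestEKSPolicy --policy-document file://TestEKSPolicy.json
aws iam create-policy --policy-name IamLimitedAccess --policy-document file://IamLimitedAccess.json

# Attach the two managed policies and the two custom policies to the IAM user
for policy_arn in \
    arn:aws:iam::aws:policy/AmazonEC2FullAccess \
    arn:aws:iam::aws:policy/AmazonVPCReadOnlyAccess \
    "arn:aws:iam::${ACCOUNT_ID}:policy/TestEKSPolicy" \
    "arn:aws:iam::${ACCOUNT_ID}:policy/IamLimitedAccess"; do
  aws iam attach-user-policy --user-name TestEKSUser --policy-arn "$policy_arn"
done
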
3 Create the EKS cluster (IAM user)
3.1 Create the EKS cluster control plane

On the EKS product page, click "Create cluster".

If you do not see the role in the "Cluster service role" drop-down list, go back and check step 2.

For "Subnets", three subnets are sufficient.
For "Cluster endpoint access", "Public" is fine for testing; choose "Private" for production.
For "Networking add-ons", the defaults are fine.
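
The console steps above can also be done with the AWS CLI. A rough sketch (the subnet IDs are placeholders for illustration; the role ARN is the cluster role from step 2.1):

aws eks create-cluster --name TestEKSCluster --region us-east-1 \
    --role-arn arn:aws:iam::675892200046:role/testEKSClusterRole \
    --resources-vpc-config subnetIds=subnet-aaa,subnet-bbb,subnet-ccc,endpointPublicAccess=true
# Wait until the cluster status becomes ACTIVE
aws eks wait cluster-active --name TestEKSCluster --region us-east-1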



3.2 Add worker nodes to the cluster

Once the cluster status is "Active", click "Add node group" on the Compute tab to create the worker nodes.


You can optionally configure "SSH login" access to the worker nodes.
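
The node group can likewise be created from the CLI. A sketch (the node group name, subnet IDs, instance type, and scaling numbers are placeholders; the node role is the one from step 2.2):

aws eks create-nodegroup --cluster-name TestEKSCluster --region us-east-1 \
    --nodegroup-name TestEKSNodeGroup \
    --node-role arn:aws:iam::675892200046:role/testEKSNodeRole \
    --subnets subnet-aaa subnet-bbb subnet-ccc \
    --instance-types t3.medium \
    --scaling-config minSize=1,maxSize=3,desiredSize=1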

4 Set up the AWS CLI and kubectl (IAM user)
4.1 Configure the AWS CLI

After installing the AWS CLI, run "aws configure" and enter the credentials of the IAM user created in step 1:

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws configure
[awscli@bogon ~]$ aws sts get-caller-identity
{
    "UserId": "AIDAZ2XSQQJXKNKFI4YDF",
    "Account": "675892200046",
    "Arn": "arn:aws:iam::675892200046:user/TestEKSUser"
}
4.2 Configure kubectl
[awscli@bogon ~]$ aws eks --region us-east-1 update-kubeconfig --name TestEKSCluster
Updated context arn:aws:eks:us-east-1:675892200046:cluster/TestEKSCluster in /home/awscli/.kube/config
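
A quick sanity check that the kubeconfig works and the worker nodes have registered:

kubectl get nodes -o wide
kubectl get pods --all-namespaces
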
5 Set up EFS storage for EKS
5.1 Create the policy for accessing EFS (root user)

Create a custom policy named "TestEKSAccessEFSPolicy":

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "elasticfilesystem:DescribeAccessPoints",
                "elasticfilesystem:DescribeFileSystems"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "elasticfilesystem:CreateAccessPoint"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "aws:RequestTag/efs.csi.aws.com/cluster": "true"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": "elasticfilesystem:DeleteAccessPoint",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/efs.csi.aws.com/cluster": "true"
                }
            }
        }
    ]
}
5.2 Create the role for accessing EFS (root user)

Create a role named "TestEKSAccessEFSRole" and attach the policy "TestEKSAccessEFSPolicy".

Under "Trust relationships", set the content as follows.
Replace "oidc.eks.us-east-1.amazonaws.com/id/98F61019E9B399FA9B7A43A19B56DF14" with your cluster's "OpenID Connect provider URL".

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::675892200046:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/98F61019E9B399FA9B7A43A19B56DF14"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "oidc.eks.us-east-1.amazonaws.com/id/98F61019E9B399FA9B7A43A19B56DF14:sub": "system:serviceaccount:kube-system:efs-csi-controller-sa"
                }
            }
        }
    ]
}

5.3 Create an identity provider for OpenID Connect (root user)

Fill in the provider URL and the audience "sts.amazonaws.com", click "Get thumbprint", and then click "Add provider".

5.4 Create the service account in EKS (IAM user)

Create a file "efs-service-account.yaml" with the following content, then run "kubectl apply -f efs-service-account.yaml" to create the service account. Remember to replace the account ID.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: efs-csi-controller-sa
  namespace: kube-system
  labels:
    app.kubernetes.io/name: aws-efs-csi-driver
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::675892200046:role/TestEKSAccessEFSRole
5.5 Install the EFS CSI driver (IAM user)

Run the following command to generate the EFS CSI driver installation manifest, driver.yaml:

kubectl kustomize "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/ecr?ref=release-1.3" > driver.yaml

Since the service account was already created above, the "efs-csi-controller-sa" section can be removed from driver.yaml.

Then run "kubectl apply -f driver.yaml" to install the CSI driver.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: efs-csi-controller-sa
  namespace: kube-system
  labels:
    app.kubernetes.io/name: aws-efs-csi-driver
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::675892200046:role/TestEKSAccessEFSRole
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/name: aws-efs-csi-driver
  name: efs-csi-node-sa
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/name: aws-efs-csi-driver
  name: efs-csi-external-provisioner-role
rules:
- apiGroups:
  - ""
  resources:
  - persistentvolumes
  verbs:
  - get
  - list
  - watch
  - create
  - delete
- apiGroups:
  - ""
  resources:
  - persistentvolumeclaims
  verbs:
  - get
  - list
  - watch
  - update
- apiGroups:
  - storage.k8s.io
  resources:
  - storageclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - list
  - watch
  - create
  - patch
- apiGroups:
  - storage.k8s.io
  resources:
  - csinodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - get
  - watch
  - list
  - delete
  - update
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/name: aws-efs-csi-driver
  name: efs-csi-provisioner-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: efs-csi-external-provisioner-role
subjects:
- kind: ServiceAccount
  name: efs-csi-controller-sa
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: aws-efs-csi-driver
  name: efs-csi-controller
  namespace: kube-system
spec:
  replicas: 2
  selector:
    matchLabels:
      app: efs-csi-controller
      app.kubernetes.io/instance: kustomize
      app.kubernetes.io/name: aws-efs-csi-driver
  template:
    metadata:
      labels:
        app: efs-csi-controller
        app.kubernetes.io/instance: kustomize
        app.kubernetes.io/name: aws-efs-csi-driver
    spec:
      containers:
      - args:
        - --endpoint=$(CSI_ENDPOINT)
        - --logtostderr
        - --v=2
        - --delete-access-point-root-dir=false
        env:
        - name: CSI_ENDPOINT
          value: unix:///var/lib/csi/sockets/pluginproxy/csi.sock
        image: 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/aws-efs-csi-driver:v1.3.8
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz
            port: healthz
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 3
        name: efs-plugin
        ports:
        - containerPort: 9909
          name: healthz
          protocol: TCP
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /var/lib/csi/sockets/pluginproxy/
          name: socket-dir
      - args:
        - --csi-address=$(ADDRESS)
        - --v=2
        - --feature-gates=Topology=true
        - --extra-create-metadata
        - --leader-election
        env:
        - name: ADDRESS
          value: /var/lib/csi/sockets/pluginproxy/csi.sock
        image: 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/csi-provisioner:v2.1.1
        imagePullPolicy: IfNotPresent
        name: csi-provisioner
        volumeMounts:
        - mountPath: /var/lib/csi/sockets/pluginproxy/
          name: socket-dir
      - args:
        - --csi-address=/csi/csi.sock
        - --health-port=9909
        image: 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/livenessprobe:v2.2.0
        imagePullPolicy: IfNotPresent
        name: liveness-probe
        volumeMounts:
        - mountPath: /csi
          name: socket-dir
      hostNetwork: true
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: efs-csi-controller-sa
      volumes:
      - emptyDir: {}
        name: socket-dir
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app.kubernetes.io/name: aws-efs-csi-driver
  name: efs-csi-node
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: efs-csi-node
      app.kubernetes.io/instance: kustomize
      app.kubernetes.io/name: aws-efs-csi-driver
  template:
    metadata:
      labels:
        app: efs-csi-node
        app.kubernetes.io/instance: kustomize
        app.kubernetes.io/name: aws-efs-csi-driver
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: eks.amazonaws.com/compute-type
                operator: NotIn
                values:
                - fargate
      containers:
      - args:
        - --endpoint=$(CSI_ENDPOINT)
        - --logtostderr
        - --v=2
        env:
        - name: CSI_ENDPOINT
          value: unix:/csi/csi.sock
        image: 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/aws-efs-csi-driver:v1.3.8
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz
            port: healthz
          initialDelaySeconds: 10
          periodSeconds: 2
          timeoutSeconds: 3
        name: efs-plugin
        ports:
        - containerPort: 9809
          name: healthz
          protocol: TCP
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /var/lib/kubelet
          mountPropagation: Bidirectional
          name: kubelet-dir
        - mountPath: /csi
          name: plugin-dir
        - mountPath: /var/run/efs
          name: efs-state-dir
        - mountPath: /var/amazon/efs
          name: efs-utils-config
        - mountPath: /etc/amazon/efs-legacy
          name: efs-utils-config-legacy
      - args:
        - --csi-address=$(ADDRESS)
        - --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)
        - --v=2
        env:
        - name: ADDRESS
          value: /csi/csi.sock
        - name: DRIVER_REG_SOCK_PATH
          value: /var/lib/kubelet/plugins/efs.csi.aws.com/csi.sock
        - name: KUBE_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        image: 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/csi-node-driver-registrar:v2.1.0
        imagePullPolicy: IfNotPresent
        name: csi-driver-registrar
        volumeMounts:
        - mountPath: /csi
          name: plugin-dir
        - mountPath: /registration
          name: registration-dir
      - args:
        - --csi-address=/csi/csi.sock
        - --health-port=9809
        - --v=2
        image: 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/livenessprobe:v2.2.0
        imagePullPolicy: IfNotPresent
        name: liveness-probe
        volumeMounts:
        - mountPath: /csi
          name: plugin-dir
      dnsPolicy: ClusterFirst
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/os: linux
      priorityClassName: system-node-critical
      serviceAccountName: efs-csi-node-sa
      tolerations:
      - operator: Exists
      volumes:
      - hostPath:
          path: /var/lib/kubelet
          type: Directory
        name: kubelet-dir
      - hostPath:
          path: /var/lib/kubelet/plugins/efs.csi.aws.com/
          type: DirectoryOrCreate
        name: plugin-dir
      - hostPath:
          path: /var/lib/kubelet/plugins_registry/
          type: Directory
        name: registration-dir
      - hostPath:
          path: /var/run/efs
          type: DirectoryOrCreate
        name: efs-state-dir
      - hostPath:
          path: /var/amazon/efs
          type: DirectoryOrCreate
        name: efs-utils-config
      - hostPath:
          path: /etc/amazon/efs
          type: DirectoryOrCreate
        name: efs-utils-config-legacy
---
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  annotations:
    helm.sh/hook: pre-install, pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
    helm.sh/resource-policy: keep
  name: efs.csi.aws.com
spec:
  attachRequired: false

After a short wait, the "efs-csi-controller*" pods should be ready.

5.6 Create the EFS file system (root user)

On the Amazon EFS product page, click "Create file system":

Choose "Standard" as the storage class so that nodes in every Availability Zone can access it.

After creation, wait for "Network" to become available, then click the "Manage" button to add the cluster security group.

5.7 Create the storage class in Kubernetes (IAM user)

Create "storageclass.yaml" with the following content and run "kubectl apply -f storageclass.yaml" to create it.

Remember to change "fileSystemId" to your own file system ID, which can be looked up as shown in the figure below. (A quick PVC test follows the manifest.)

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-04470c1ed1eab275c
  directoryPerms: "700"
  gidRangeStart: "1000" # optional
  gidRangeEnd: "2000" # optional
  basePath: "/dynamic_provisioning" # optional
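
Before deploying anything on top of it, you can verify that dynamic provisioning works with a small test claim (a sketch; the claim name and size are arbitrary, and the PVC should reach the Bound state):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim-test
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
EOF
kubectl get pvc efs-claim-test
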
6 Deploy Jenkins as a test (IAM user)
6.1 Deploy Jenkins

Note that the storage class is set to efs-sc.

helm repo add jenkinsci https://charts.jenkins.io/
helm install my-jenkins jenkinsci/jenkins --version 4.1.17 --set persistence.storageClass=efs-sc

6.2 Verify the result

Once Jenkins is up, port forwarding can be used for temporary access.

[awscli@bogon ~]$ kubectl port-forward svc/my-jenkins --address=0.0.0.0 8081:8080
Forwarding from 0.0.0.0:8081 -> 8080
Handling connection for 8081

7 Cluster autoscaling
7.1 Create an autoscaling policy for EKS

First, create a policy named "TestEKSClusterAutoScalePolicy":

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeLaunchConfigurations",
                "autoscaling:DescribeTags",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup",
                "ec2:DescribeLaunchTemplateVersions",
                "ec2:DescribeInstanceTypes"
            ],
            "Resource": "*"
        }
    ]
}
7.2 Create an autoscaling role for EKS

Next, create a role named "TestEKSClusterAutoScaleRole", attach the policy "TestEKSClusterAutoScalePolicy" created above, and set the "trusted entities" as follows.
Note that the OpenID Connect provider ID must be taken from your EKS cluster's details page.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::675892200046:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/BEDCA5446D2676BB0A51B7BECFB36773"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/BEDCA5446D2676BB0A51B7BECFB36773:sub": "system:serviceaccount:kube-system:cluster-autoscaler"
        }
      }
    }
  ]
}
7.3 Deploy the cluster autoscaler

Fetch the deployment manifest with the following command.

wget https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml

Then edit "cluster-autoscaler-autodiscover.yaml":

1 Add the annotation "eks.amazonaws.com/role-arn" to the ServiceAccount "cluster-autoscaler"; see the code below.

2 In the Deployment "cluster-autoscaler", replace the cluster name placeholder in the --node-group-auto-discovery flag with your own cluster name, e.g. "TestEKSCluster".

3 Add two flags (--balance-similar-node-groups and --skip-nodes-with-system-pods=false) on the lines immediately after the flag edited in step 2.

4 Add the annotation "cluster-autoscaler.kubernetes.io/safe-to-evict: 'false'" below the line "prometheus.io/port: '8085'".

5 Find the matching image version at https://github.com/kubernetes/autoscaler/releases; it must match your EKS Kubernetes version.

6 Finally, run "kubectl apply -f cluster-autoscaler-autodiscover.yaml" to deploy it.

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::675892200046:role/TestEKSClusterAutoScaleRole
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
rules:
  - apiGroups: [""]
    resources: ["events", "endpoints"]
    verbs: ["create", "patch"]
  - apiGroups: [""]
    resources: ["pods/eviction"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["pods/status"]
    verbs: ["update"]
  - apiGroups: [""]
    resources: ["endpoints"]
    resourceNames: ["cluster-autoscaler"]
    verbs: ["get", "update"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["watch", "list", "get", "update"]
  - apiGroups: [""]
    resources:
      - "namespaces"
      - "pods"
      - "services"
      - "replicationcontrollers"
      - "persistentvolumeclaims"
      - "persistentvolumes"
    verbs: ["watch", "list", "get"]
  - apiGroups: ["extensions"]
    resources: ["replicasets", "daemonsets"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["policy"]
    resources: ["poddisruptionbudgets"]
    verbs: ["watch", "list"]
  - apiGroups: ["apps"]
    resources: ["statefulsets", "replicasets", "daemonsets"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses", "csinodes", "csidrivers", "csistoragecapacities"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["batch", "extensions"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch", "patch"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["create"]
  - apiGroups: ["coordination.k8s.io"]
    resourceNames: ["cluster-autoscaler"]
    resources: ["leases"]
    verbs: ["get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create","list","watch"]
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["cluster-autoscaler-status", "cluster-autoscaler-priority-expander"]
    verbs: ["delete", "get", "update", "watch"]
 
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-autoscaler
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler
    namespace: kube-system
 
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cluster-autoscaler
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler
    namespace: kube-system
 
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '8085'
        cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
    spec:
      priorityClassName: system-cluster-critical
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534
        fsGroup: 65534
      serviceAccountName: cluster-autoscaler
      containers:
        - image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.23.0
          name: cluster-autoscaler
          resources:
            limits:
              cpu: 100m
              memory: 600Mi
            requests:
              cpu: 100m
              memory: 600Mi
          command:
            - ./cluster-autoscaler
            - --v=4
            - --stderrthreshold=info
            - --cloud-provider=aws
            - --skip-nodes-with-local-storage=false
            - --expander=least-waste
            - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/TestEKSCluster
            - --balance-similar-node-groups
            - --skip-nodes-with-system-pods=false
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-certificates.crt #/etc/ssl/certs/ca-bundle.crt for Amazon Linux Worker Nodes
              readOnly: true
          imagePullPolicy: "Always"
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/ssl/certs/ca-bundle.crt"
7.4 Deploy the metrics server

With the metrics server we can collect pod metrics, which is the foundation for the Horizontal Pod Autoscaler (HPA); a small HPA example follows the manifest below.

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        image: k8s.gcr.io/metrics-server/metrics-server:v0.5.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
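
As a quick illustration of what the metrics server enables, an HPA can be attached to a deployment (a sketch using the nginx-deployment from the next section; the CPU target and replica bounds are arbitrary, and resource requests must be set on the deployment for the CPU percentage to be meaningful):

kubectl autoscale deployment nginx-deployment --cpu-percent=50 --min=1 --max=10
kubectl get hpa
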
7.5 Test cluster scaling

Deploy an nginx workload to test:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

Currently there is only one node.

[awscli@bogon ~]$ kubectl top node
NAME                            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
ip-172-31-17-148.ec2.internal   52m          2%     635Mi           19%
[awscli@bogon ~]$ kubectl top pods --all-namespaces
NAMESPACE     NAME                                  CPU(cores)   MEMORY(bytes)
default       nginx-deployment-9456bbbf9-qlpcb      0m           2Mi
kube-system   aws-node-m6xjs                        3m           34Mi
kube-system   cluster-autoscaler-5c4d9b6d4c-k2csm   2m           22Mi
kube-system   coredns-d5b9bfc4-4bvnn                1m           12Mi
kube-system   coredns-d5b9bfc4-z2ppq                1m           12Mi
kube-system   kube-proxy-x55c8                      1m           10Mi
kube-system   metrics-server-84cd7b5645-prh6c       3m           16Mi
 
Now scale the test deployment created above to 30 replicas. Because the current node does not have enough capacity, after a short while a new node (ip-172-31-91-231.ec2.internal) is launched and joins the cluster.
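
One way to do the scaling (30 matches the replica count mentioned above):

kubectl scale deployment nginx-deployment --replicas=30
kubectl get pods -o wide -w   # watch the pending pods land on the new node
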
[awscli@bogon ~]$ kubectl top node
NAME                            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
ip-172-31-17-148.ec2.internal   66m          3%     726Mi           21%
ip-172-31-91-231.ec2.internal   774m         40%    569Mi           17%
[awscli@bogon ~]$ kubectl top pods --all-namespaces
NAMESPACE     NAME                                  CPU(cores)   MEMORY(bytes)
default       nginx-deployment-9456bbbf9-2tgpl      0m           2Mi
default       nginx-deployment-9456bbbf9-5jdsm      0m           2Mi
default       nginx-deployment-9456bbbf9-5vt9l      2m           2Mi
default       nginx-deployment-9456bbbf9-8ldm7      0m           2Mi
default       nginx-deployment-9456bbbf9-9m499      0m           2Mi
default       nginx-deployment-9456bbbf9-cpmqs      0m           2Mi
default       nginx-deployment-9456bbbf9-d6p4k      2m           2Mi
default       nginx-deployment-9456bbbf9-f2z87      2m           2Mi
default       nginx-deployment-9456bbbf9-f8w2f      0m           2Mi
default       nginx-deployment-9456bbbf9-fwjg4      0m           2Mi
default       nginx-deployment-9456bbbf9-kfmv8      0m           2Mi
default       nginx-deployment-9456bbbf9-knn2t      0m           2Mi
default       nginx-deployment-9456bbbf9-mq5sv      0m           2Mi
default       nginx-deployment-9456bbbf9-plh7h      0m           2Mi
default       nginx-deployment-9456bbbf9-qlpcb      0m           2Mi
default       nginx-deployment-9456bbbf9-tz22s      0m           2Mi
default       nginx-deployment-9456bbbf9-v6ccx      0m           2Mi
default       nginx-deployment-9456bbbf9-v9rc8      0m           2Mi
default       nginx-deployment-9456bbbf9-vwsfr      0m           2Mi
default       nginx-deployment-9456bbbf9-x2jnb      0m           2Mi
default       nginx-deployment-9456bbbf9-xhllv      0m           2Mi
default       nginx-deployment-9456bbbf9-z7hhr      0m           2Mi
default       nginx-deployment-9456bbbf9-zj7qc      0m           2Mi
default       nginx-deployment-9456bbbf9-zqptw      0m           2Mi
kube-system   aws-node-f4kf4                        2m           35Mi
kube-system   aws-node-m6xjs                        3m           35Mi
kube-system   cluster-autoscaler-5c4d9b6d4c-k2csm   3m           26Mi
kube-system   coredns-d5b9bfc4-4bvnn                1m           12Mi
kube-system   coredns-d5b9bfc4-z2ppq                1m           12Mi
kube-system   kube-proxy-qqrw9                      1m           10Mi
kube-system   kube-proxy-x55c8                      1m           10Mi
kube-system   metrics-server-84cd7b5645-prh6c       4m           16Mi
8 Accessing ECR from EKS

Because we cannot modify the Docker configuration on nodes in an EKS managed node group, we cannot use our own Harbor registry unless we have proper certificates. Using AWS ECR avoids this trouble altogether.

8.1 Create a repository

First create an inline policy named "TestEKSonECRPolicy"; it is required for creating Docker repositories, obtaining a login token, and pushing images.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ecr:CreateRepository",
                "ecr:GetDownloadUrlForLayer",
                "ecr:DescribeRegistry",
                "ecr:GetAuthorizationToken",
                "ecr:UploadLayerPart",
                "ecr:ListImages",
                "ecr:DeleteRepository",
                "ecr:PutImage",
                "ecr:UntagResource",
                "ecr:BatchGetImage",
                "ecr:CompleteLayerUpload",
                "ecr:DescribeImages",
                "ecr:TagResource",
                "ecr:DescribeRepositories",
                "ecr:InitiateLayerUpload",
                "ecr:BatchCheckLayerAvailability"
            ],
            "Resource": "*"
        }
    ]
}

Create the repository on the ECR product page (or with the CLI, as sketched below):
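
A CLI sketch of the same step (the repository name test-app is a placeholder for illustration):

aws ecr create-repository --repository-name test-app --region us-east-1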

8.2 Pull images in EKS

Make sure your EKS node role has the managed policy AmazonEC2ContainerRegistryReadOnly attached.

8.3 Push images to ECR

Run the following command to obtain a login token (replace the ECR endpoint with your own):

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 675892200046.dkr.ecr.us-east-1.amazonaws.com
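
After logging in, tagging and pushing works as usual with Docker. A sketch (the image and repository names are placeholders; the registry endpoint is the one used above):

docker tag test-app:latest 675892200046.dkr.ecr.us-east-1.amazonaws.com/test-app:latest
docker push 675892200046.dkr.ecr.us-east-1.amazonaws.com/test-app:latest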

Create a secret in the Jenkins namespace so that pipelines in Jenkins can use Docker to push images to ECR:

kubectl create secret generic awsecr --from-file=.dockerconfigjson=config.json  --type=kubernetes.io/dockerconfigjson -n jenkins


Setting up EFK (Elasticsearch, Fluentd, Kibana) on Kubernetes


Last time we finished setting up a single-node Kubernetes environment. Now we build an EFK (Elasticsearch, Fluentd, Kibana) log collection stack on top of it.

Installing Elasticsearch

Elasticsearch needs HTTPS enabled at install time.

kubectl create namespace efk
 
cat <<EOF > es_extracfg.yaml
  xpack:
    security:
      enabled: "true"
      authc:
        api_key:
          enabled: "true"
EOF
 
 
helm upgrade --install my-elasticsearch bitnami/elasticsearch -n efk --set security.enabled=true --set security.elasticPassword=YourPassword --set security.tls.autoGenerated=true --set-file extraConfig=es_extracfg.yaml

The StatefulSets "my-elasticsearch-coordinating-only" and "my-elasticsearch-master" need to be modified as follows, otherwise data transfer fails:

          resources:
            requests:
              cpu: 25m
              memory: 512Mi
Installing Kibana

When installing Kibana, specify the address of the Elasticsearch server; the password is the Elasticsearch password. The connection here must use the HTTPS endpoint.

helm upgrade --install my-kibana bitnami/kibana -n efk --set elasticsearch.hosts[0]=my-elasticsearch-coordinating-only --set elasticsearch.port=9200 --set elasticsearch.security.auth.enabled=true --set elasticsearch.security.auth.kibanaPassword=YourPassword --set elasticsearch.security.tls.enabled=true --set elasticsearch.security.tls.verificationMode=none
Installing Fluentd

Manually add the following configuration so that logs are forwarded to the Elasticsearch server through the @type elasticsearch output plugin.

kind: ConfigMap
apiVersion: v1
metadata:
  name: elasticsearch-output
  namespace: efk
data:
  fluentd.conf: |
    # Prometheus Exporter Plugin
    # input plugin that exports metrics
    <source>
      @type prometheus
      port 24231
    </source>
 
    # input plugin that collects metrics from MonitorAgent
    <source>
      @type prometheus_monitor
      <labels>
        host ${hostname}
      </labels>
    </source>
 
    # input plugin that collects metrics for output plugin
    <source>
      @type prometheus_output_monitor
      <labels>
        host ${hostname}
      </labels>
    </source>
 
    # Ignore fluentd own events
    <match fluent.**>
      @type null
    </match>
 
    # TCP input to receive logs from the forwarders
    <source>
      @type forward
      bind 0.0.0.0
      port 24224
    </source>
 
    # HTTP input for the liveness and readiness probes
    <source>
      @type http
      bind 0.0.0.0
      port 9880
    </source>
 
    # Throw the healthcheck to the standard output instead of forwarding it
    <match fluentd.healthcheck>
      @type stdout
    </match>
 
    # Send the logs to the standard output
    <match **>
      @type elasticsearch
      include_tag_key true
      scheme https
      host my-elasticsearch-coordinating-only
      port 9200
      user elastic
      password YourPassword
      ssl_verify false
      logstash_format true
      logstash_prefix k8s 
      request_timeout 30s
 
      <buffer>
        @type file
        path /opt/bitnami/fluentd/logs/buffers/logs.buffer
        flush_thread_count 2
        flush_interval 5s
      </buffer>
    </match>
helm upgrade --install my-fluentd bitnami/fluentd -n efk --set aggregator.configMap=elasticsearch-output
The Kibana UI

After adding an index pattern, the logs can be viewed.
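
Without an ingress, a quick way in is port forwarding (a sketch; the service name is assumed to follow the Helm release name used above, and 5601 is Kibana's default port):

kubectl port-forward svc/my-kibana -n efk 5601:5601
# then open http://localhost:5601 and log in as "elastic" with the password set earlier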



Setting up GitLab CI/CD and the Istio service mesh on Kubernetes


Last time we finished setting up a single-node Kubernetes environment. Now we build a GitLab-based CI/CD pipeline on top of it and demonstrate it with a demo application running on the Istio service mesh.

Installing the Istio service mesh

Install Istio following https://istio.io/latest/docs/setup/getting-started/:

export https_proxy=http://192.168.0.105:8070
export http_proxy=http://192.168.0.105:8070
curl -L https://istio.io/downloadIstio | sh -
export https_proxy=
export http_proxy=
cd istio-1.12.1
export PATH=$PWD/bin:$PATH
istioctl install --set profile=demo -y
 
#✔ Istio core installed                                                                                                                                                                                         
#✔ Istiod installed                                                                                                                                                                                             
#✔ Egress gateways installed                                                                                                                                                                                    
#✔ Ingress gateways installed                                                                                                                                                                                   
#✔ Installation complete
#Making this installation the default for injection and validation.
#
#Thank you for installing Istio 1.12.  Please take a few minutes to tell us about your install/upgrade experience!  https://forms.gle/FegQbc9UvePd4Z9z7

After the installation completes, it looks like the figure below:

Install the addons as follows:

kubectl apply -f samples/addons
kubectl rollout status deployment/kiali -n istio-system

Change the kiali service to type LoadBalancer (or patch it, as shown after the manifest):

kind: Service
apiVersion: v1
metadata:
  name: kiali
  namespace: istio-system
spec:
  type: LoadBalancer
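
Instead of editing the Service by hand, a one-line patch achieves the same:

kubectl patch svc kiali -n istio-system -p '{"spec": {"type": "LoadBalancer"}}'
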
Installing an NFS server

GitLab needs persistent volumes, so provide the Kubernetes cluster with an NFS server as the storage backend.
Install the NFS server on the server side:

sudo yum install nfs-utils -y
sudo systemctl start nfs-server.service
sudo systemctl enable nfs-server.service
sudo systemctl status nfs-server.service
sudo cat /proc/fs/nfsd/versions
 
sudo mkdir /data
chmod +w /data
sudo mkdir -p /srv/nfs4/data
sudo mount --bind /data /srv/nfs4/data
 
sudo cp -p /etc/fstab /etc/fstab.bak$(date '+%Y%m%d%H%M%S')
sudo echo "/data    /srv/nfs4/data    none    bind    0    0" >> /etc/fstab
sudo mount -a
 
sudo echo "/srv/nfs4    192.168.0.0/24(rw,sync,no_subtree_check,crossmnt,fsid=0)" >> /etc/exports
sudo echo "/srv/nfs4/data    192.168.0.0/24(rw,sync,no_subtree_check,no_root_squash)" >> /etc/exports
sudo exportfs -ra
sudo exportfs -v
sudo systemctl restart nfs-server.service

Test the NFS service from a client:

sudo yum install nfs-utils -y
sudo mkdir /root/data
sudo mount -t nfs -o vers=4 192.168.0.180:/data /root/data
cd /root/data
echo "test nfs write" >> test.txt

Install Helm and the NFS subdir external provisioner in the cluster:

export https_proxy=http://192.168.0.105:8070
export http_proxy=http://192.168.0.105:8070
wget https://get.helm.sh/helm-v3.7.2-linux-amd64.tar.gz
export https_proxy=
export http_proxy=
 
tar -xvf helm-v3.7.2-linux-amd64.tar.gz
cd /root/linux-amd64
export PATH=$PWD:$PATH
 
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
export https_proxy=http://192.168.0.105:8070
export http_proxy=http://192.168.0.105:8070
helm fetch nfs-subdir-external-provisioner/nfs-subdir-external-provisioner
export https_proxy=
export http_proxy=
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner-4.0.14.tgz \
    --set nfs.server=192.168.0.180 \
    --set nfs.path=/data

Edit the StorageClass nfs-client and add storageclass.kubernetes.io/is-default-class: 'true' to make it the default storage provisioner (a patch alternative follows the manifest):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs-client
  annotations:
    storageclass.kubernetes.io/is-default-class: 'true'
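
Equivalently, the annotation can be added with a patch instead of editing the StorageClass:

kubectl patch storageclass nfs-client \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
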
Building the CI/CD environment
Installing GitLab

Install GitLab into the Kubernetes environment.
Note that configuring the Docker proxy as described in the previous article is essential.

helm repo add gitlab http://charts.gitlab.io/
helm repo update
kubectl create namespace mygitlab
helm upgrade --install my-gitlab gitlab/gitlab --version 5.6.0 --namespace mygitlab --set global.hosts.https=false --set global.ingress.tls.enabled=false --set global.ingress.configureCertmanager=false --set global.kas.enabled=true --set global.edition=ce

After the installation completes, it looks like the figure below:

Configuring local domain names

Look up the external address of the ingress, e.g. 192.168.0.192:

[root@k8s-master data]# kubectl get ingress --all-namespaces
NAMESPACE   NAME                           CLASS             HOSTS                  ADDRESS         PORTS   AGE
mygitlab    my-gitlab-kas                  my-gitlab-nginx   kas.example.com        192.168.0.192   80      44m
mygitlab    my-gitlab-minio                my-gitlab-nginx   minio.example.com      192.168.0.192   80      44m
mygitlab    my-gitlab-registry             my-gitlab-nginx   registry.example.com   192.168.0.192   80      44m
mygitlab    my-gitlab-webservice-default   my-gitlab-nginx   gitlab.example.com     192.168.0.192   80      44m

Add the custom hostnames to /etc/hosts:

vi /etc/hosts
192.168.0.192       minio.example.com
192.168.0.192       registry.example.com
192.168.0.192       gitlab.example.com
192.168.0.192       kas.example.com

Next, modify the coredns ConfigMap so that DNS inside the cluster can resolve GitLab. Note that the IP address 192.168.0.192 must be changed to your own GitLab LoadBalancer address.
Otherwise, pods such as my-gitlab-gitlab-runner-*-* cannot reach gitlab.example.com.

kind: ConfigMap
apiVersion: v1
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        ......
        hosts {
          192.168.0.192  minio.example.com
          192.168.0.192  registry.example.com
          192.168.0.192  gitlab.example.com
          fallthrough
        }
        ......
    }
Fix the Docker proxy settings, on both the master and worker nodes

Append -H tcp://0.0.0.0:2375 to the Docker daemon so that my-gitlab-gitlab-runner-*-* can call Docker:

vi /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H tcp://0.0.0.0:2375

Make sure http-proxy.conf includes example.com in NO_PROXY:

vi /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.0.105:8070"
Environment="HTTPS_PROXY=http://192.168.0.105:8070"
Environment="NO_PROXY=localhost,127.0.0.1,example.com"

Edit daemon.json and add "insecure-registries": ["registry.example.com"]:

vi /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "insecure-registries": ["registry.example.com"]
}
 
systemctl daemon-reload
systemctl restart docker
Allow gitlab-runner to access the non-HTTPS object storage (MinIO)

Next, edit the my-gitlab-gitlab-runner ConfigMap and change Insecure = false to Insecure = true.

kind: ConfigMap
apiVersion: v1
metadata:
  name: my-gitlab-gitlab-runner
  namespace: mygitlab
data:
  config.template.toml: |
    [[runners]]
      [runners.cache]
        [runners.cache.s3]
          ServerAddress = "minio.example.com"
          BucketName = "runner-cache"
          BucketLocation = "us-east-1"
          Insecure = true
Make sure gitlab-runner works correctly

Restart my-gitlab-gitlab-runner-*-* and check its logs; once you see "Registering runner... succeeded" it is working:

ERROR: Registering runner... failed                 runner=CiOHA0SP status=couldn't execute POST against http://gitlab.example.com/api/v4/runners: Post http://gitlab.example.com/api/v4/runners: dial tcp: lookup gitlab.example.com on 10.96.0.10:53: no such host
PANIC: Failed to register the runner. You may be having network problems. 
Registration attempt 6 of 30
Runtime platform                                    arch=amd64 os=linux pid=82 revision=5316d4ac version=14.6.0
WARNING: Running in user-mode.                     
WARNING: The user-mode requires you to manually start builds processing: 
WARNING: $ gitlab-runner run                       
WARNING: Use sudo for system-mode:                 
WARNING: $ sudo gitlab-runner...                   
 
Registering runner... succeeded                     runner=CiOHA0SP
Log in to GitLab, change the password, and upload an SSH public key

Get the GitLab root user's password with the following command, then log in at http://gitlab.example.com/users/sign_in:

kubectl get secret my-gitlab-gitlab-initial-root-password -n mygitlab  -o jsonpath='{.data.password}' | base64 --decode

Generate an SSH key for the GitLab root user with the following commands, then add the contents of id_rsa.pub at http://gitlab.example.com/-/profile/keys so that git can be used:

[root@k8s-master k8s]# ssh-keygen -t rsa -b 2048 -C "mygitlab"
 
[root@k8s-master k8s]# cat /root/.ssh/id_rsa.pub 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC+tr8cgRitUKHzoIReyPYYsoywtCvn8TLFMC2BjyI3kKWia4zajWkOFQpJwe9eaSlwO3GkqVdpfZ34O+y0caUWfwaw1+inZIlRvx7X6yGmMha27VSmfzj6dfd6TzH2B5KaBUg21nFBYaXaYwLAT0jX8BQ+/QXl8gi33NmH06ctIdVPl9dBkNBvr9rzRMYQnoFtJppKHnN8S/9XnhEJFN3lEvajka+j5VgeOuzLNUs7NvWd9+cbSWNakJulOSK/WSUdzT2oWpY6YP+amAByOIa5Nl2XSRpZ2/oVWG0KsXBHSgwhIlu6WK5GzTVSxRRdQNjSyqNTeuPmsh6WC1alWPGl mygitlab
Increase the maximum allowed upload size for Jars

Deploying the service mesh demo application
Create the namespace and enable Istio injection

Create the namespace bookstore-servicemesh in Kubernetes:

kubectl create namespace bookstore-servicemesh
kubectl label namespace bookstore-servicemesh istio-injection=enabled

Bind the ServiceAccount so it has cluster-admin permissions; Helm will use it later:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: mygitlab-admin-role-default
subjects:
  - kind: ServiceAccount
    name: default
    namespace: mygitlab
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
The CI/CD process

Download the bookstore source code (adapted from https://github.com/fenixsoft/servicemesh_arch_istio.git), extract it, and push it directly to git@gitlab.example.com:root/bookstore.git.

Wait for the GitLab CI/CD pipeline; it will automatically build and deploy into the bookstore-servicemesh namespace.

Give book-admin permission to pull from the private registry

In the bookstore-servicemesh namespace, create a pull secret for the private Docker registry registry.example.com:

kubectl create secret docker-registry docker-gitlab --docker-server=registry.example.com --docker-username=root --docker-password=yougitlabpassword -n bookstore-servicemesh

Associate this secret with the ServiceAccount book-admin.

kind: ServiceAccount
apiVersion: v1
metadata:
  name: book-admin
  namespace: bookstore-servicemesh
......
imagePullSecrets:
  - name: docker-gitlab

After a successful deployment it looks like this:

Testing

The Istio ingress address can be found as shown below:

Access it as follows:

The network topology can be viewed in Kiali:


Setting up a single-node Kubernetes environment


This time we set up a single-node Kubernetes environment on CentOS 8 for day-to-day development. Compared with building a highly available Kubernetes cluster, the differences are: the OS is upgraded to CentOS 8, the control plane is a single node, and there is only one worker node.

System layout

OS: CentOS-8.4.2105-x86_64
Network: master node 192.168.0.180; worker node 192.168.0.181
Kubernetes: 1.23.1
kubeadm: 1.23.1
Docker: 20.10.9

Preparation

Most of the software used later is hard to download directly from mainland China, so you can rent a pay-as-you-go Linux host in Alibaba Cloud's Hong Kong region, say with public IP 47.52.220.100.
Then start a proxy server with pproxy:
pip3 install pproxy
pproxy -l http://0.0.0.0:8070 -r ssh://47.52.220.100/#root:password -v
The proxy server is now listening on port 8070.

Installing the base server

Install all the required software on it; later servers can simply be cloned from this one.

Install CentOS 8 in a virtual machine

The network adapter must be in bridged mode.
After installation, set the IP address manually; do not use DHCP.

Pre-checks and configuration

1 Turn off the firewall; otherwise configuring firewall rules is too much hassle.
2 Disable SELinux.
3 Make sure the MAC address and product_uuid are unique on every node.
4 Disable swap; the kubelet requires swap to be off to work properly.
5 Enable IP forwarding.

#!/bin/bash
 
echo "###############################################"
echo "Please ensure your OS is CentOS8 64 bits"
echo "Please ensure your machine has full network connection and internet access"
echo "Please ensure run this script with root user"
 
# Check hostname, Mac addr and product_uuid
echo "###############################################"
echo "Please check hostname as below:"
uname -a
# Set hostname if want
#hostnamectl set-hostname k8s-master
 
echo "###############################################"
echo "Please check Mac addr and product_uuid as below:"
ip link
cat /sys/class/dmi/id/product_uuid
 
echo "###############################################"
echo "Please check default route:"
ip route show
 
# Stop firewalld
echo "###############################################"
echo "Stop firewalld"
sudo systemctl stop firewalld
sudo systemctl disable firewalld
 
# Disable SELinux
echo "###############################################"
echo "Disable SELinux"
sudo getenforce
 
sudo setenforce 0
sudo cp -p /etc/selinux/config /etc/selinux/config.bak$(date '+%Y%m%d%H%M%S')
sudo sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
 
sudo getenforce
 
# Turn off Swap
echo "###############################################"
echo "Turn off Swap"
free -m
sudo cat /proc/swaps
 
sudo swapoff -a
 
sudo cp -p /etc/fstab /etc/fstab.bak$(date '+%Y%m%d%H%M%S')
sudo sed -i "s/\/dev\/mapper\/rhel-swap/\#\/dev\/mapper\/rhel-swap/g" /etc/fstab
sudo sed -i "s/\/dev\/mapper\/centos-swap/\#\/dev\/mapper\/centos-swap/g" /etc/fstab
sudo sed -i "s/\/dev\/mapper\/cl-swap/\#\/dev\/mapper\/cl-swap/g" /etc/fstab
sudo mount -a
 
free -m
sudo cat /proc/swaps
 
# Setup iptables (routing)
echo "###############################################"
echo "Setup iptables (routing)"
sudo cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
net.ipv4.ip_forward = 1
EOF
 
sudo sysctl --system
iptables -P FORWARD ACCEPT
 
# Check ports
echo "###############################################"
echo "Check API server port(s)"
netstat -nlp | grep "8080\|6443"
 
echo "Check ETCD port(s)"
netstat -nlp | grep "2379\|2380"
 
echo "Check port(s): kublet, kube-scheduler, kube-controller-manager"
netstat -nlp | grep "10250\|10251\|10252"
Installing Docker

Uninstall any old Docker version and install the version we need.

#!/bin/bash
 
set -e
 
# Uninstall installed docker
sudo yum remove -y docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-selinux \
                  docker-engine-selinux \
                  docker-engine \
                  runc
# If you need to set a proxy, append one line
#vi /etc/yum.conf
#proxy=http://192.168.0.105:8070
 
 
# Set up repository
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
 
# Use Aliyun Docker
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
 
# Install a validated docker version
sudo yum install -y docker-ce-20.10.9 docker-ce-cli-20.10.9 containerd.io-1.4.12
 
# Setup Docker daemon https://kubernetes.io/zh/docs/setup/production-environment/container-runtimes/#docker
mkdir -p /etc/docker
 
sudo cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
 
sudo mkdir -p /etc/systemd/system/docker.service.d
 
# Run Docker as systemd service
sudo systemctl daemon-reload
sudo systemctl enable docker
sudo systemctl start docker
 
# Check Docker version
docker version
Installing Kubernetes
#!/bin/bash
 
set -e
 
sudo cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
 
yum clean all
yum makecache -y
yum repolist all
 
setenforce 0
 
sudo yum install -y kubelet-1.23.1 kubeadm-1.23.1 kubectl-1.23.1 --disableexcludes=kubernetes
 
# Check installed Kubernetes packages
sudo yum list installed | grep kube
 
sudo systemctl daemon-reload
sudo systemctl enable kubelet
sudo systemctl start kubelet
Pre-pull the Docker images

This includes the Kubernetes images and the Calico network plugin images.

mkdir -p /etc/systemd/system/docker.service.d
vi /etc/systemd/system/docker.service.d/http-proxy.conf
#add the following configuration
[Service]
Environment="HTTP_PROXY=http://192.168.0.105:8070" "HTTPS_PROXY=http://192.168.0.105:8070" "NO_PROXY=localhost,127.0.0.1,registry.example.com"
 
#reload the configuration and restart the Docker service
systemctl daemon-reload
systemctl restart docker
#!/bin/bash
 
# Run `kubeadm config images list` to check required images
# Check version in https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/
# Search "Running kubeadm without an internet connection"
# For running kubeadm without an internet connection you have to pre-pull the required master images for the version of choice:
KUBE_VERSION=v1.23.1
KUBE_PAUSE_VERSION=3.6
ETCD_VERSION=3.5.1-0
CORE_DNS_VERSION=1.8.6
 
# In Kubernetes 1.12 and later, the k8s.gcr.io/kube-*, k8s.gcr.io/etcd and k8s.gcr.io/pause images don’t require an -${ARCH} suffix
images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION})
 
for imageName in ${images[@]} ; do
  docker pull k8s.gcr.io/$imageName
done
docker pull coredns/coredns:${CORE_DNS_VERSION}
 
docker images
 
docker pull calico/cni:v3.21.2
docker pull calico/pod2daemon-flexvol:v3.21.2
docker pull calico/node:v3.21.2
docker pull calico/kube-controllers:v3.21.2
 
docker images | grep calico
Configuring the master node

Clone the base image above, name it k8s-master, and change the hostname accordingly.

echo "192.168.0.180    k8s-master" >> /etc/hosts
echo "192.168.0.181    k8s-worker" >> /etc/hosts
Initialize the cluster
#!/bin/bash
 
set -e
 
# Reset firstly if ran kubeadm init before
kubeadm reset -f
 
# kubeadm init with calico network
CONTROL_PLANE_ENDPOINT="192.168.0.180:6443"
 
kubeadm init \
  --kubernetes-version=v1.23.1 \
  --control-plane-endpoint=${CONTROL_PLANE_ENDPOINT} \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.244.0.0/16 \
  --upload-certs
 
# Make kubectl works
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
cp -p $HOME/.bash_profile $HOME/.bash_profile.bak$(date '+%Y%m%d%H%M%S')
echo "export KUBECONFIG=$HOME/.kube/config" >> $HOME/.bash_profile
source $HOME/.bash_profile
 
# Get cluster information
kubectl cluster-info

Record the kubeadm join command printed by the script above; it will be needed later.
If you forget the kubeadm join command, run kubeadm token create --print-join-command to get it again, and run kubeadm init phase upload-certs --upload-certs to get a new certificate key.

Of course, the cluster is not working yet: coredns will stay Pending because no CNI plugin is installed.

[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-64897985d-p46b8              0/1     Pending   0          4m12s
kube-system   coredns-64897985d-tbdxl              0/1     Pending   0          4m12s
kube-system   etcd-k8s-master                      1/1     Running   1          4m27s
kube-system   kube-apiserver-k8s-master            1/1     Running   1          4m26s
kube-system   kube-controller-manager-k8s-master   1/1     Running   1          4m27s
kube-system   kube-proxy-dwj6v                     1/1     Running   0          52s
kube-system   kube-proxy-nszmz                     1/1     Running   0          4m13s
kube-system   kube-scheduler-k8s-master            1/1     Running   1          4m26s
Installing the network plugin
#!/bin/bash
 
set -e
 
wget -O calico.yaml https://docs.projectcalico.org/v3.21/manifests/calico.yaml
 
kubectl apply -f calico.yaml
 
# Wait a while to let network takes effect
sleep 30
 
# Check daemonset
kubectl get ds -n kube-system -l k8s-app=calico-node
 
# Check pod status and ready
kubectl get pods -n kube-system -l k8s-app=calico-node
 
# Check apiservice status
kubectl get apiservice v1.crd.projectcalico.org -o yaml

Now all pods are in a healthy state.

NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE         NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-647d84984b-gmlhv   1/1     Running   0          67s     10.244.254.131   k8s-worker   <none>           <none>
kube-system   calico-node-bj8nn                          1/1     Running   0          67s     192.168.0.181    k8s-worker   <none>           <none>
kube-system   calico-node-m77mk                          1/1     Running   0          67s     192.168.0.180    k8s-master   <none>           <none>
kube-system   coredns-64897985d-p46b8                    1/1     Running   0          12m     10.244.254.130   k8s-worker   <none>           <none>
kube-system   coredns-64897985d-tbdxl                    1/1     Running   0          12m     10.244.254.129   k8s-worker   <none>           <none>
kube-system   etcd-k8s-master                            1/1     Running   1          12m     192.168.0.180    k8s-master   <none>           <none>
kube-system   kube-apiserver-k8s-master                  1/1     Running   1          12m     192.168.0.180    k8s-master   <none>           <none>
kube-system   kube-controller-manager-k8s-master         1/1     Running   1          12m     192.168.0.180    k8s-master   <none>           <none>
kube-system   kube-proxy-dwj6v                           1/1     Running   0          8m49s   192.168.0.181    k8s-worker   <none>           <none>
kube-system   kube-proxy-nszmz                           1/1     Running   0          12m     192.168.0.180    k8s-master   <none>           <none>
kube-system   kube-scheduler-k8s-master                  1/1     Running   1          12m     192.168.0.180    k8s-master   <none>           <none>
Install MetalLB as the cluster load balancer provider

https://metallb.universe.tf/installation/

Edit the kube-proxy ConfigMap and set strictARP to true:
kubectl edit configmap -n kube-system kube-proxy

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true
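
If you prefer not to edit the ConfigMap interactively, the same change can be applied with a pipeline along these lines (a sketch following the MetalLB installation docs):

# Read the kube-proxy ConfigMap, flip strictARP, and apply it back
kubectl get configmap kube-proxy -n kube-system -o yaml | \
  sed -e "s/strictARP: false/strictARP: true/" | \
  kubectl apply -f - -n kube-system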

Download and install MetalLB (the proxy exports are only needed if your environment requires an HTTP proxy):

export https_proxy=http://192.168.0.105:8070
export http_proxy=http://192.168.0.105:8070
wget https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/namespace.yaml
wget https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/metallb.yaml
 
export https_proxy=
export http_proxy=
kubectl apply -f namespace.yaml
kubectl apply -f metallb.yaml

Create a file lb.yaml with the following address pool configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.0.190-192.168.0.250

Then apply it so the MetalLB address pool takes effect:
kubectl apply -f lb.yaml
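
To confirm MetalLB itself is healthy before relying on it, check its Pods (the metallb-system namespace comes from namespace.yaml above):

kubectl get pods -n metallb-system -o wide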

Configure the Worker node

Clone the base image again, name it k8s-worker, change its hostname, and remember to update the MAC address and IP address.

echo "192.168.0.180    k8s-master" >> /etc/hosts
echo "192.168.0.181    k8s-worker" >> /etc/hosts
Join the Worker node to the cluster

Run the "worker node" join command printed in the kubeadm init output.
If you lose the kubeadm join command, run kubeadm token create --print-join-command to regenerate it, and kubeadm init phase upload-certs --upload-certs to obtain a new certificate key (see the example above).

kubeadm join 192.168.0.180:6443 --token srmce8.eonpa2amiwek1x0n \
	--discovery-token-ca-cert-hash sha256:048c067f64ded80547d5c6acf2f9feda45d62c2fb02c7ab6da29d52b28eee1bb
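
Back on the master, the new node should appear shortly (it may stay NotReady until Calico is running on it):

kubectl get nodes -o wide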
Install the Dashboard
export https_proxy=http://192.168.0.105:8070
export http_proxy=http://192.168.0.105:8070
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
 
export https_proxy=
export http_proxy=
kubectl apply -f recommended.yaml

Create a file dashboard-adminuser.yaml with the following content and apply it to add an admin user:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
 
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Then run the following commands and copy the printed token to log in to the Dashboard:

kubectl apply -f dashboard-adminuser.yaml
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

Use the following command to change the service type from ClusterIP to LoadBalancer:

kubectl edit service -n kubernetes-dashboard kubernetes-dashboard
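
A non-interactive alternative is to patch the service type directly, for example:

kubectl -n kubernetes-dashboard patch service kubernetes-dashboard -p '{"spec": {"type": "LoadBalancer"}}'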

List the services; the kubernetes-dashboard service now has an external IP assigned by MetalLB:

[root@k8s-master ~]# kubectl get service --all-namespaces
NAMESPACE              NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                  AGE
default                kubernetes                  ClusterIP      10.96.0.1       <none>          443/TCP                  45m
kube-system            kube-dns                    ClusterIP      10.96.0.10      <none>          53/UDP,53/TCP,9153/TCP   45m
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP      10.96.46.101    <none>          8000/TCP                 11m
kubernetes-dashboard   kubernetes-dashboard        LoadBalancer   10.96.155.101   192.168.0.190   443:31019/TCP            11m

The Dashboard is now reachable at https://192.168.0.190

Install the Metrics Server using the following manifest:
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        image: k8s.gcr.io/metrics-server/metrics-server:v0.5.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
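
Assuming the manifest above is saved as metrics-server.yaml, apply it and verify that the metrics API responds:

kubectl apply -f metrics-server.yaml

# After a minute or so the metrics API should answer
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl top nodes
kubectl top pods --all-namespaces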

This work is licensed under a Creative Commons Attribution 4.0 International License.

Building a Highly Available Kubernetes Cluster

Original article. When reposting, please credit: reposted from 慢慢的回味

Permalink: Building a Highly Available Kubernetes Cluster

Building a highly available Kubernetes cluster largely follows a well-known pattern. This article records one run-through based on other people's write-ups; a few problems came up along the way and were debugged, the trickiest being networking and Kubernetes authorization issues.

Network planning

All nodes sit in a single low-latency subnet:

Hostname        IP                                  Description
k8s-lb-01       192.168.0.151 (VIP 192.168.0.170)   HAProxy + Keepalived (master)
k8s-lb-02       192.168.0.161 (VIP 192.168.0.170)   HAProxy + Keepalived (backup)
k8s-master-01   192.168.0.152                       Master node 1
k8s-master-02   192.168.0.153                       Master node 2
k8s-master-03   192.168.0.162                       Master node 3
k8s-worker-01   192.168.0.154                       Worker node 1

Notes:

  • An external HAProxy serves as the cluster load balancer.
  • Two HAProxy + Keepalived instances keep the load balancer highly available; the Keepalived VIP 192.168.0.170 is the entry address of the whole cluster.
  • Etcd is deployed alongside each Master node (stacked etcd topology). Etcd high availability requires at least 3 nodes (an odd number), so there are at least 3 Master nodes.
  • 1 Worker node; this is only a test cluster, so one is enough.
  • All servers are CentOS7 x86_64.
  • All machines are in the 192.168.0.0/24 subnet and can reach each other.
  • Docker 19.03.11
  • Kubernetes v1.19.3
  • Calico network component
  • HAProxy load balancing
  • Cluster Control-Plane Endpoint: 192.168.0.170:6443
  • Cluster API Server(s):
    • 192.168.0.152:6443
    • 192.168.0.153:6443
    • 192.168.0.162:6443
    Prepare a base image for cloning
    Pull the k8s-deploy project
    # Install Git
    yum install git -y
     
    # Clone the k8s-deploy project
    mkdir -p ~/k8s
    cd ~/k8s
    git clone https://github.com/cookcodeblog/k8s-deploy.git
     
    # Make all scripts executable
    cd k8s-deploy/kubeadm_v1.19.3
    find . -name '*.sh' -exec chmod u+x {} \;
    Install Kubernetes
    # Pre-installation checks and configuration
    bash 01_pre_check_and_configure.sh
     
    # Install Docker
    bash 02_install_docker.sh
     
    # Install kubeadm, kubectl and kubelet
    bash 03_install_kubernetes.sh
     
    # Pull the images needed by the Kubernetes cluster
    bash 04_pull_kubernetes_images_from_aliyun.sh
     
    # Pull the Calico network component images
    bash 04_pull_calico_images.sh
     
    iptables -P FORWARD ACCEPT
    Clone servers from the base image

    Shut the machine down and clone this image to create k8s-master-02, k8s-master-03, k8s-worker-01, k8s-lb-01 and k8s-lb-02, then change each machine's IP address to its static address:
    vi /etc/sysconfig/network-scripts/ifcfg-ens33

    BOOTPROTO=static
    ONBOOT=yes
    IPADDR=192.168.0.153
    NETMASK=255.255.255.0
    GATEWAY=192.168.0.1
    DNS1=192.168.0.1
    DNS2=8.8.8.8

    # Restart the network service
    systemctl restart network

    Deploy the load balancer servers (k8s-lb-01, k8s-lb-02)
    Install and configure HAProxy

    Install HAProxy:

    yum install haproxy -y

    Edit /etc/haproxy/haproxy.cfg, remove the default proxy sections, and add the settings that reverse-proxy and load-balance traffic to the Kubernetes Master nodes:

    #---------------------------------------------------------------------
    # apiserver frontend which proxies to the masters
    #---------------------------------------------------------------------
    frontend apiserver
        bind *:6443
        mode tcp
        option tcplog
        default_backend apiserver
     
    #---------------------------------------------------------------------
    # round robin balancing for apiserver
    #---------------------------------------------------------------------
    backend apiserver
        option httpchk GET /healthz
        http-check expect status 200
        mode tcp
        option ssl-hello-chk
        balance     roundrobin
            server k8s-master-01 192.168.0.152:6443 check
            server k8s-master-02 192.168.0.153:6443 check
            server k8s-master-03 192.168.0.162:6443 check

    Enable and start the HAProxy service:

    systemctl daemon-reload
    systemctl enable haproxy
    systemctl start haproxy
    systemctl status haproxy -l
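
    HAProxy should now be listening on port 6443 (the backend health checks will keep failing until the first Master is up); this can be confirmed with, for example:

    ss -lntp | grep 6443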
    Install and configure Keepalived

    Install Keepalived:

    yum install keepalived -y

    Configure the Linux kernel parameters:

    sudo cat << EOF > /etc/sysctl.d/keepalived.conf
    net.ipv4.ip_forward = 1
    net.ipv4.ip_nonlocal_bind = 1
    EOF
     
    # ip_forward enables IP forwarding
    # ip_nonlocal_bind allows binding to a non-local (floating) IP, i.e. the VIP
    sudo sysctl --system

    Create check_apiserver.sh under /etc/keepalived for Keepalived to use as its health check:

    #!/bin/sh
     
     
    APISERVER_VIP=$1
    APISERVER_DEST_PORT=$2
     
    errorExit() {
        echo "*** $*" 1&gt;&amp;2
        exit 1
    }
     
    curl --silent --max-time 2 --insecure https://localhost:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://localhost:${APISERVER_DEST_PORT}/"
    if ip addr | grep -q ${APISERVER_VIP}; then
        curl --silent --max-time 2 --insecure https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/"
    fi
    chmod +x /etc/keepalived/check_apiserver.sh

    On the Keepalived master node, /etc/keepalived/keepalived.conf is configured as follows:

    ! /etc/keepalived/keepalived.conf
    ! Configuration File for keepalived
    global_defs {
        router_id LVS_DEVEL
    }
    vrrp_script check_apiserver {
      script "/etc/keepalived/check_apiserver.sh 192.168.0.170 6443"
      interval 3
      weight -2
      fall 10
      rise 2
    }
     
    vrrp_instance VI_1 {
        state MASTER
        interface ens33
        virtual_router_id 51
        priority 101
        authentication {
            auth_type PASS
            auth_pass Keep@lived
        }
        virtual_ipaddress {
            192.168.0.170
        }
        track_script {
            check_apiserver
        }
    }

    On the Keepalived backup node, /etc/keepalived/keepalived.conf is configured as follows:

    ! /etc/keepalived/keepalived.conf
    ! Configuration File for keepalived
    global_defs {
        router_id LVS_DEVEL
    }
    vrrp_script check_apiserver {
      script "/etc/keepalived/check_apiserver.sh 192.168.0.170 6443"
      interval 3
      weight -2
      fall 10
      rise 2
    }
     
    vrrp_instance VI_1 {
        state BACKUP
        interface ens33
        virtual_router_id 51
        priority 100
        authentication {
            auth_type PASS
            auth_pass Keep@lived
        }
        virtual_ipaddress {
            192.168.0.170
        }
        track_script {
            check_apiserver
        }
    }

    To make HAProxy use the Keepalived VIP, edit the HAProxy configuration again and bind the frontend to the VIP:

    frontend apiserver
        bind 192.168.0.170:6443

    Run Keepalived, then restart HAProxy so it binds to the VIP:

    systemctl daemon-reload
    systemctl enable keepalived
    systemctl start keepalived
    systemctl status keepalived -l
     
    systemctl daemon-reload
    systemctl restart haproxy
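
    The VIP should now be attached to the network interface of the active load balancer; a quick check (interface ens33 as configured above):

    ip addr show ens33 | grep 192.168.0.170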
    Initialize the cluster (on k8s-master-02)
    kubeadm init
    cd ~/k8s
    cd k8s-deploy/kubeadm_v1.19.3
     
    # 05_kubeadm_init.sh <control-plane-endpoint>
    bash 05_kubeadm_init.sh 192.168.0.170:6443

    The service-cidr and pod-network-cidr in 05_kubeadm_init.sh may need to be changed and must not overlap with the CIDR of the host network. The pod-network-cidr value is needed again later when installing the Calico network plugin.

    kubeadm init \
      --kubernetes-version=v1.19.3 \
      --control-plane-endpoint=${CONTROL_PLANE_ENDPOINT} \
      --service-cidr=10.96.0.0/16 \
      --pod-network-cidr=10.244.0.0/16 \
      --image-repository=${IMAGE_REPOSITORY} \
      --upload-certs

    Save the kubeadm join commands printed by the script above; they will be needed later.

    Install the Calico network component

    First adjust the following values in calico/calico-ens33.xml:

                - name: CALICO_IPV4POOL_CIDR
                  value: "10.244.0.0/16"
     
                - name: KUBERNETES_SERVICE_HOST
                  value: "192.168.0.170"
                - name: KUBERNETES_SERVICE_PORT
                  value: "6443"
                - name: KUBERNETES_SERVICE_PORT_HTTPS
                  value: "6443"
    # support eth* and ens* network interfaces
    bash 06_install_calico.sh ens33
    Deploy the Worker node

    Run the "worker node" join command printed in the kubeadm init output.

    Example:

    kubeadm join 192.168.0.170:6443 --token r7w69v.3e1nweyk81h5zj6y \
        --discovery-token-ca-cert-hash sha256:1234a2317d27f0a4c6bcf5f284416a2fb3e8f3bd61aa88bc279a4f6ef18e09a1

    Make kubectl work on the Worker node:

    bash enable_kubectl_worker.sh
    Deploy the other Master nodes

    Run the "control-plane node" join command printed in the kubeadm init output.

    Example:

    kubeadm join 192.168.0.170:6443 --token r7w69v.3e1nweyk81h5zj6y \
        --discovery-token-ca-cert-hash sha256:1234a2317d27f0a4c6bcf5f284416a2fb3e8f3bd61aa88bc279a4f6ef18e09a1 \
        --control-plane --certificate-key 0e48107fbcd11cda60a5c2b76ae488b4ebf57223a4001acac799996740a6044e

    If you lose the kubeadm join command, run kubeadm token create --print-join-command to regenerate it, and kubeadm init phase upload-certs --upload-certs to obtain a new certificate key.

    Make kubectl work on the Master nodes:

    bash enable_kubectl_master.sh
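
    enable_kubectl_master.sh comes from the k8s-deploy repository; it presumably does roughly what the earlier single-master setup did, along these lines (a sketch, not the script's actual contents):

    mkdir -p $HOME/.kube
    sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    export KUBECONFIG=$HOME/.kube/config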
    Check the cluster deployment
    # Display cluster info
    kubectl cluster-info
     
    # Nodes
    kubectl get nodes
     
    # Display pods
    kubectl get pods --all-namespaces -o wide
     
    # Check pod status in case any pods are not in Running status
    kubectl get pods --all-namespaces | grep -v Running
    (Optional) Install the Dashboard
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml

    Create a file dashboard-adminuser.yaml with the following content and apply it to add an admin user:

    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: admin-user
      namespace: kubernetes-dashboard
     
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: admin-user
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: admin-user
      namespace: kubernetes-dashboard

    Then run the following commands and copy the printed token to log in to the Dashboard:

    kubectl apply -f dashboard-adminuser.yaml
     
    kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

    Use the following command to expose the Dashboard outside the cluster:

    kubectl -n kubernetes-dashboard  port-forward kubernetes-dashboard-7cb9fd9999-gtjn7 --address 0.0.0.0 8443
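
    The Pod name above is specific to this installation; your own can be looked up first (recommended.yaml labels the Dashboard Pod with k8s-app=kubernetes-dashboard):

    kubectl -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard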

    The Dashboard is now reachable at https://k8s-master-01:8443

    Most of this content is based on: Installing a highly available multi-Master Kubernetes cluster on CentOS7 with kubeadm, HAProxy and Keepalived

This work is licensed under a Creative Commons Attribution 4.0 International License.