Building a k8s Cluster from Scratch, Step by Step (Part 3): High Availability (Disaster Recovery) Failover

1. High Availability Test

Check the cluster status:

[root@k8s-master01 ~]# kubectl cluster-info
Kubernetes master is running at https://lb.kubesphere.local:6443
coredns is running at https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/services/coredns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Shut down the k8s-master02 server and test whether the whole cluster keeps working:

[root@k8s-master01 ~]# kubectl get nodes

Error from server: etcdserver: request timed out
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# systemctl start etcd
[root@k8s-master01 ~]# kubectl get nodes

As you can see, after the k8s-master02 master is stopped, node information can no longer be queried from master01; the command fails with the following error:

Error from server: etcdserver: request timed out
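
When this error appears, it helps to look at etcd itself on the surviving master. This cluster runs etcd as a systemd unit (which is why the systemctl start etcd command above works at all), so a quick check could look like this:

# Inspect the etcd service and its most recent logs on the surviving master
systemctl status etcd
journalctl -u etcd -n 50 --no-pager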

etcd times out because it is a distributed key-value store that requires a quorum to operate: every decision is voted on by the cluster members, and the majority wins. The quorum for n members is floor(n/2) + 1, so with 3 nodes you can always lose 1, because the remaining 2 still form a majority.

The problem with 2 nodes is that when 1 goes down, the last remaining etcd member waits for a majority vote before deciding anything, and that vote can never happen: 1 out of 2 is not a majority.

This is why you should always run an odd number of master nodes in a Kubernetes cluster.

An answer on Stack Overflow describes the same situation: "I have the same setup (stacked etcd, but with keepalived and HAProxy instead of nginx) and I had the same problem.

You need at least 3 (!) control-plane nodes. Only then can you shut down one of the three control-plane nodes without losing functionality."

The original Stack Overflow thread is "Kubernetes HA cluster using kubeadm with nginx LB not working when 1 master node down - Error from server: etcdserver: request timed out":

https://stackoverflow.com/questions/64424416/kubernetes-ha-cluster-using-kubeadm-with-nginx-lb-not-working-when-1-master-node

There, the asker had set up a Kubernetes HA cluster (stacked etcd) using kubeadm; when one master node was deliberately shut down, the whole cluster went down with exactly this error:

[vagrant@k8s-master01 ~]$ kubectl get nodes
Error from server: etcdserver: request timed out

To sum up: a disaster-recovery drill needs three masters so that a majority can still be formed and a leader elected when one fails. With only two masters, the single surviving etcd member cannot hold an election on its own, which is why the error above appears.
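
To watch quorum directly, you can query etcd health from a surviving master. The sketch below assumes a typical KubeKey binary-etcd layout; the endpoint list and certificate paths are assumptions and must be adjusted to your environment:

# Ask each etcd endpoint whether it is healthy (certificate paths are assumptions)
export ETCDCTL_API=3
etcdctl \
  --endpoints=https://11.0.1.10:2379,https://11.0.1.11:2379 \
  --cacert=/etc/ssl/etcd/ssl/ca.pem \
  --cert=/etc/ssl/etcd/ssl/admin-k8s-master01.pem \
  --key=/etc/ssl/etcd/ssl/admin-k8s-master01-key.pem \
  endpoint health

With one of two members down, the surviving endpoint reports unhealthy, because no quorum can be formed.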

2. Adding a Node

To verify high availability we therefore need a third master. Add a new k8s-master03 node to reach three masters, then repeat the failover test.
Official documentation: adding new nodes with KubeKey

Cluster plan:

OS          IP Address   Role           CPU   Memory   Disk   Hostname
CentOS7.9   11.0.1.10    master         2C    4G       40G    k8s-master01
CentOS7.9   11.0.1.11    master         2C    4G       40G    k8s-master02
CentOS7.9   11.0.1.12    master (new)   2C    4G       40G    k8s-master03
CentOS7.9   11.0.1.20    worker         2C    3G       40G    k8s-node01
CentOS7.9   11.0.1.21    worker         2C    4G       40G    k8s-node02

Note: each master must have at least 4 GB of memory; otherwise the KubeSphere installation fails and its containers never come up.
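
A quick pre-flight memory check across the masters might look like this (a sketch; it assumes root SSH access to the IPs from the plan above):

# Print memory on each master before installing
for ip in 11.0.1.10 11.0.1.11 11.0.1.12; do
  echo "== $ip =="
  ssh root@"$ip" free -h
done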

Before adding the node:

[root@k8s-master01 softwares]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   13h   v1.19.9
k8s-master02   Ready    master   13h   v1.19.9
k8s-node01     Ready    worker   13h   v1.19.9
k8s-node02     Ready    worker   13h   v1.19.9
[root@k8s-master01 softwares]# 

Add master nodes for high availability
The steps are largely the same as for adding worker nodes, except that you must configure a load balancer for the cluster. You can use any cloud load balancer or hardware load balancer (for example, F5). Keepalived with HAProxy, or Nginx, are also viable alternatives for creating a high-availability cluster.

Open the file; some fields are already pre-filled. Add the information for the new node and for the load balancer. The following example is for reference:

To add the k8s-master03 node, copy the original config-sample.yaml to a new configuration file, config-sample-add-node.yaml, and extend it there.
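
For example:

# Copy the existing config and edit the copy
cp config-sample.yaml config-sample-add-node.yaml
vi config-sample-add-node.yaml    # add the k8s-master03 entries shown below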

[root@k8s-master01 softwares]# cat config-sample-add-node.yaml 

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: k8s-master01, address: 11.0.1.10, internalAddress: 11.0.1.10, user: root, password: "123456"}
  - {name: k8s-master02, address: 11.0.1.11, internalAddress: 11.0.1.11, user: root, password: "123456"}
  - {name: k8s-master03, address: 11.0.1.12, internalAddress: 11.0.1.12, user: root, password: "123456"}
  - {name: k8s-node01, address: 11.0.1.20, internalAddress: 11.0.1.20, user: root, password: "123456"}
  - {name: k8s-node02, address: 11.0.1.21, internalAddress: 11.0.1.21, user: root, password: "123456"}
  roleGroups:
    etcd:
    - k8s-master01
    - k8s-master02
    - k8s-master03
    control-plane: 
    - k8s-master01
    - k8s-master02
    - k8s-master03
    worker:
    - k8s-node01
    - k8s-node02
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers 
    internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.19.9
    clusterName: cluster.local
    autoRenewCerts: true
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.1.1
spec:
  persistence:
    storageClass: ""       
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""        
  etcd:
    monitoring: false      
    endpointIps: localhost  
    port: 2379             
    tlsEnable: true
  common:
    redis:
      enabled: false
    redisVolumSize: 2Gi 
    openldap:
      enabled: false
    openldapVolumeSize: 2Gi  
    minioVolumeSize: 20Gi
    monitoring:
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
    es:  
      elasticsearchMasterVolumeSize: 4Gi   
      elasticsearchDataVolumeSize: 20Gi   
      logMaxAge: 7          
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""  
  console:
    enableMultiLogin: true 
    port: 30880
  alerting:       
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:    
    enabled: false
  devops:           
    enabled: true
    jenkinsMemoryLim: 2Gi     
    jenkinsMemoryReq: 1500Mi 
    jenkinsVolumeSize: 8Gi   
    jenkinsJavaOpts_Xms: 512m  
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:          
    enabled: false
    ruler:
      enabled: true
      replicas: 2
  logging:         
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:             
    enabled: false
  monitoring:
    storageClass: ""
    prometheusMemoryRequest: 400Mi  
    prometheusVolumeSize: 20Gi  
  multicluster:
    clusterRole: none 
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:    
    enabled: false  
  kubeedge:
    enabled: false
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress: 
          - ""           
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []

[root@k8s-master01 softwares]# 
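
Before applying the file, a quick diff against the original confirms that only the k8s-master03 entries were added:

diff config-sample.yaml config-sample-add-node.yaml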


Save the file and run the following command to apply the configuration:

./kk add nodes -f config-sample-add-node.yaml 

The run prints:

[root@k8s-master01 softwares]# ./kk add nodes -f config-sample-add-node.yaml

 _   __      _          _   __           
| | / /     | |        | | / /           
| |/ / _   _| |__   ___| |/ /  ___ _   _ 
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

23:56:44 PDT [GreetingsModule] Greetings
23:56:44 PDT message: [k8s-node02]
Greetings, KubeKey!
23:56:45 PDT message: [k8s-master02]
Greetings, KubeKey!
23:56:45 PDT message: [k8s-master01]
Greetings, KubeKey!
23:56:45 PDT message: [k8s-master03]
Greetings, KubeKey!
23:56:45 PDT message: [k8s-node01]
Greetings, KubeKey!
23:56:45 PDT success: [k8s-node02]
23:56:45 PDT success: [k8s-master02]
23:56:45 PDT success: [k8s-master01]
23:56:45 PDT success: [k8s-master03]
23:56:45 PDT success: [k8s-node01]
23:56:45 PDT [NodePreCheckModule] A pre-check on nodes
23:56:52 PDT success: [k8s-master03]
23:56:52 PDT success: [k8s-node02]
23:56:52 PDT success: [k8s-node01]
23:56:52 PDT success: [k8s-master02]
23:56:52 PDT success: [k8s-master01]
23:56:52 PDT [ConfirmModule] Display confirmation form
+--------------+------+------+---------+----------+-------+-------+---------+-----------+--------+---------+------------+------------+-------------+------------------+--------------+
| name         | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker  | containerd | nfs client | ceph client | glusterfs client | time         |
+--------------+------+------+---------+----------+-------+-------+---------+-----------+--------+---------+------------+------------+-------------+------------------+--------------+
| k8s-master01 | y    | y    | y       | y        | y     | y     | y       | y         | y      | 20.10.8 | v1.4.9     | y          |             | y                | PDT 23:56:52 |
| k8s-master02 | y    | y    | y       | y        | y     | y     | y       | y         | y      | 20.10.8 | v1.4.9     | y          |             | y                | PDT 23:56:51 |
| k8s-master03 | y    | y    | y       | y        | y     | y     | y       | y         | y      | 20.10.8 | v1.4.9     | y          |             | y                | PDT 23:56:47 |
| k8s-node01   | y    | y    | y       | y        | y     | y     | y       | y         | y      | 20.10.8 | v1.4.9     | y          |             | y                | PDT 23:56:50 |
| k8s-node02   | y    | y    | y       | y        | y     | y     | y       | y         | y      | 20.10.8 | v1.4.9     | y          |             | y                | PDT 23:56:49 |
+--------------+------+------+---------+----------+-------+-------+---------+-----------+--------+---------+------------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes

00:05:52 PDT [InternalLoadbalancerModule] Update kube-proxy configmap
00:05:53 PDT skipped: [k8s-master01]
00:05:53 PDT [InternalLoadbalancerModule] Update /etc/hosts
00:05:53 PDT success: [k8s-master03]
00:05:53 PDT success: [k8s-master02]
00:05:53 PDT success: [k8s-node02]
00:05:53 PDT success: [k8s-node01]
00:05:53 PDT success: [k8s-master01]
00:05:53 PDT [ConfigureKubernetesModule] Configure kubernetes
00:05:53 PDT success: [k8s-node02]
00:05:53 PDT success: [k8s-master01]
00:05:53 PDT success: [k8s-master02]
00:05:53 PDT success: [k8s-master03]
00:05:53 PDT success: [k8s-node01]
00:05:53 PDT [ChownModule] Chown user $HOME/.kube dir
00:05:53 PDT success: [k8s-master03]
00:05:53 PDT success: [k8s-node02]
00:05:53 PDT success: [k8s-node01]
00:05:53 PDT success: [k8s-master02]
00:05:53 PDT success: [k8s-master01]
00:05:53 PDT [AutoRenewCertsModule] Generate k8s certs renew script
00:05:54 PDT success: [k8s-master03]
00:05:54 PDT success: [k8s-master02]
00:05:54 PDT success: [k8s-master01]
00:05:54 PDT [AutoRenewCertsModule] Generate k8s certs renew service
00:05:55 PDT success: [k8s-master03]
00:05:55 PDT success: [k8s-master01]
00:05:55 PDT success: [k8s-master02]
00:05:55 PDT [AutoRenewCertsModule] Generate k8s certs renew timer
00:05:56 PDT success: [k8s-master03]
00:05:56 PDT success: [k8s-master02]
00:05:56 PDT success: [k8s-master01]
00:05:56 PDT [AutoRenewCertsModule] Enable k8s certs renew service
00:05:58 PDT success: [k8s-master03]
00:05:58 PDT success: [k8s-master02]
00:05:58 PDT success: [k8s-master01]
00:05:58 PDT Pipeline[AddNodesPipeline] execute successful
[root@k8s-master01 softwares]# 

As shown below, the new k8s-master03 node has been added successfully:

[root@k8s-master01 softwares]# kubectl get nodes
NAME           STATUS   ROLES    AGE     VERSION
k8s-master01   Ready    master   13h     v1.19.9
k8s-master02   Ready    master   13h     v1.19.9
k8s-master03   Ready    master   2m45s   v1.19.9
k8s-node01     Ready    worker   13h     v1.19.9
k8s-node02     Ready    worker   13h     v1.19.9
[root@k8s-master01 softwares]# 
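
Because k8s-master03 is also listed in the etcd role group, it should now be the third etcd member. This can be verified with etcdctl, using the same assumed certificate paths as in the earlier sketch:

# List etcd members; three entries are expected after the add (paths are assumptions)
export ETCDCTL_API=3
etcdctl \
  --endpoints=https://11.0.1.10:2379 \
  --cacert=/etc/ssl/etcd/ssl/ca.pem \
  --cert=/etc/ssl/etcd/ssl/admin-k8s-master01.pem \
  --key=/etc/ssl/etcd/ssl/admin-k8s-master01-key.pem \
  member list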

Check the pods:

[root@k8s-master01 softwares]# kubectl get pods -A
NAMESPACE                      NAME                                               READY   STATUS                  RESTARTS   AGE
kube-system                    calico-kube-controllers-7fc49b8c4-89gzg            1/1     Running                 16         13h
kube-system                    calico-node-6l4qp                                  1/1     Running                 2          13h
kube-system                    calico-node-85qfv                                  1/1     Running                 2          13h
kube-system                    calico-node-st6k8                                  1/1     Running                 0          6m17s
kube-system                    calico-node-t7zmq                                  1/1     Running                 1          13h
kube-system                    calico-node-ttd98                                  1/1     Running                 1          13h
kube-system                    coredns-86cfc99d74-5cw5r                           1/1     Running                 3          13h
kube-system                    coredns-86cfc99d74-mk6wp                           1/1     Running                 3          13h
kube-system                    haproxy-k8s-node01                                 1/1     Running                 0          5m25s
kube-system                    haproxy-k8s-node02                                 1/1     Running                 0          5m27s
kube-system                    kube-apiserver-k8s-master01                        1/1     Running                 31         13h
kube-system                    kube-apiserver-k8s-master02                        1/1     Running                 10         13h
kube-system                    kube-apiserver-k8s-master03                        1/1     Running                 0          3m38s
kube-system                    kube-controller-manager-k8s-master01               1/1     Running                 15         13h
kube-system                    kube-controller-manager-k8s-master02               1/1     Running                 17         13h
kube-system                    kube-controller-manager-k8s-master03               1/1     Running                 0          4m39s
kube-system                    kube-proxy-2rs56                                   1/1     Running                 1          13h
kube-system                    kube-proxy-9272j                                   1/1     Running                 2          13h
kube-system                    kube-proxy-9xsqh                                   1/1     Running                 0          6m6s
kube-system                    kube-proxy-bd9sx                                   1/1     Running                 1          13h
kube-system                    kube-proxy-msxzn                                   1/1     Running                 2          13h
kube-system                    kube-scheduler-k8s-master01                        1/1     Running                 17         13h
kube-system                    kube-scheduler-k8s-master02                        1/1     Running                 18         13h
kube-system                    kube-scheduler-k8s-master03                        1/1     Running                 0          4m39s
kube-system                    nodelocaldns-cwj2s                                 1/1     Running                 3          13h
kube-system                    nodelocaldns-ddvv9                                 1/1     Running                 2          13h
kube-system                    nodelocaldns-f6vpl                                 1/1     Running                 1          13h
kube-system                    nodelocaldns-rx8l6                                 1/1     Running                 1          13h
kube-system                    nodelocaldns-vr5tj                                 1/1     Running                 0          6m6s
kube-system                    openebs-localpv-provisioner-64fb84d4cc-whqvm       1/1     Running                 3          54m
kube-system                    snapshot-controller-0                              1/1     Running                 1          5h51m
kubesphere-controls-system     default-http-backend-76d9fb4bb7-5ftcf              1/1     Running                 0          54m
kubesphere-controls-system     kubectl-admin-69b8ff6d54-g5swp                     1/1     Running                 0          54m
kubesphere-devops-system       ks-jenkins-65db765f86-7gkdx                        1/1     Running                 0          54m
kubesphere-devops-system       s2ioperator-0                                      1/1     Running                 0          28m
kubesphere-monitoring-system   alertmanager-main-0                                1/2     CrashLoopBackOff        9          28m
kubesphere-monitoring-system   alertmanager-main-1                                2/2     Running                 0          4h17m
kubesphere-monitoring-system   alertmanager-main-2                                1/2     CrashLoopBackOff        9          28m
kubesphere-monitoring-system   kube-state-metrics-67588479db-978hf                3/3     Running                 0          4h18m
kubesphere-monitoring-system   node-exporter-57sk7                                2/2     Running                 2          4h18m
kubesphere-monitoring-system   node-exporter-dbqhj                                2/2     Running                 0          6m17s
kubesphere-monitoring-system   node-exporter-ds5jq                                2/2     Running                 0          4h18m
kubesphere-monitoring-system   node-exporter-mjnr7                                2/2     Running                 0          4h18m
kubesphere-monitoring-system   node-exporter-qntqg                                2/2     Running                 0          4h18m
kubesphere-monitoring-system   notification-manager-deployment-7bd887ffb4-k4mp6   1/1     Running                 0          54m
kubesphere-monitoring-system   notification-manager-deployment-7bd887ffb4-k7xln   1/1     Running                 0          4h16m
kubesphere-monitoring-system   notification-manager-operator-78595d8666-xhc4b     2/2     Running                 2          54m
kubesphere-monitoring-system   prometheus-k8s-0                                   3/3     Running                 1          4h18m
kubesphere-monitoring-system   prometheus-k8s-1                                   3/3     Running                 1          4h18m
kubesphere-monitoring-system   prometheus-operator-d7fdfccbf-2q85k                2/2     Running                 0          4h19m
kubesphere-system              ks-apiserver-655998d448-j2xdg                      0/1     CrashLoopBackOff        15         58m
kubesphere-system              ks-apiserver-655998d448-ptrqs                      1/1     Running                 1          74m
kubesphere-system              ks-console-d6446bd77-ckz7c                         1/1     Running                 0          74m
kubesphere-system              ks-console-d6446bd77-s4l8v                         1/1     Running                 1          5h47m
kubesphere-system              ks-controller-manager-5489dc9dd4-8t6vf             1/1     Running                 2          4h2m
kubesphere-system              ks-controller-manager-5489dc9dd4-jsxnz             0/1     CrashLoopBackOff        27         4h3m
kubesphere-system              ks-installer-66cb7455bb-b66td                      1/1     Running                 1          5h52m
kubesphere-system              minio-f69748945-758dm                              1/1     Running                 1          5h49m
kubesphere-system              openldap-0                                         1/1     Running                 1          4h8m
kubesphere-system              openldap-1                                         1/1     Running                 2          4h5m
kubesphere-system              redis-ha-haproxy-75575dcdd7-d8l59                  1/1     Running                 12         5h51m
kubesphere-system              redis-ha-haproxy-75575dcdd7-f7s75                  0/1     Init:CrashLoopBackOff   11         58m
kubesphere-system              redis-ha-haproxy-75575dcdd7-fxwjr                  0/1     Init:CrashLoopBackOff   13         74m
kubesphere-system              redis-ha-server-0                                  2/2     Running                 11         5h51m
kubesphere-system              redis-ha-server-1                                  2/2     Running                 0          4h8m
kubesphere-system              redis-ha-server-2                                  0/2     Init:CrashLoopBackOff   9          28m
[root@k8s-master01 softwares]# kubectl get sc
NAME              PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local (default)   openebs.io/local   Delete          WaitForFirstConsumer   false                  13h
[root@k8s-master01 softwares]# kubectl delete pod  ks-apiserver-655998d448-j2xdg -n kubesphere-system
pod "ks-apiserver-655998d448-j2xdg" deleted
[root@k8s-master01 softwares]# kubectl delete pod  ks-controller-manager-5489dc9dd4-jsxnz -n kubesphere-system
pod "ks-controller-manager-5489dc9dd4-jsxnz" deleted
[root@k8s-master01 softwares]# kubectl delete pod  redis-ha-haproxy-75575dcdd7-f7s75 -n kubesphere-system
pod "redis-ha-haproxy-75575dcdd7-f7s75" deleted
[root@k8s-master01 softwares]# kubectl delete pod  redis-ha-haproxy-75575dcdd7-fxwjr -n kubesphere-system
pod "redis-ha-haproxy-75575dcdd7-fxwjr" deleted
[root@k8s-master01 softwares]# kubectl delete pod  redis-ha-server-2 -n kubesphere-system
pod "redis-ha-server-2" deleted
[root@k8s-master01 softwares]# 
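
Instead of deleting the crashed pods one by one, a small loop can remove every pod stuck in CrashLoopBackOff (a sketch; it simply matches on the STATUS column of kubectl get pods, so it also catches Init:CrashLoopBackOff):

# Delete all CrashLoopBackOff pods in the kubesphere-system namespace
kubectl get pods -n kubesphere-system --no-headers \
  | awk '/CrashLoopBackOff/ {print $1}' \
  | xargs -r kubectl delete pod -n kubesphere-system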

The KubeSphere pods stuck in CrashLoopBackOff were deleted above so that their controllers can recreate them. Check again:

[root@k8s-master01 softwares]# kubectl get pods -A  -o wide
NAMESPACE                      NAME                                               READY   STATUS             RESTARTS   AGE     IP              NODE           NOMINATED NODE   READINESS GATES
kube-system                    calico-kube-controllers-7fc49b8c4-89gzg            1/1     Running            16         13h     10.233.113.18   k8s-master02   <none>           <none>
kube-system                    calico-node-6l4qp                                  1/1     Running            2          13h     11.0.1.10       k8s-master01   <none>           <none>
kube-system                    calico-node-85qfv                                  1/1     Running            2          13h     11.0.1.11       k8s-master02   <none>           <none>
kube-system                    calico-node-st6k8                                  1/1     Running            0          11m     11.0.1.12       k8s-master03   <none>           <none>
kube-system                    calico-node-t7zmq                                  1/1     Running            1          13h     11.0.1.20       k8s-node01     <none>           <none>
kube-system                    calico-node-ttd98                                  1/1     Running            1          13h     11.0.1.21       k8s-node02     <none>           <none>
kube-system                    coredns-86cfc99d74-5cw5r                           1/1     Running            3          13h     10.233.113.14   k8s-master02   <none>           <none>
kube-system                    coredns-86cfc99d74-mk6wp                           1/1     Running            3          13h     10.233.113.15   k8s-master02   <none>           <none>
kube-system                    haproxy-k8s-node01                                 1/1     Running            0          11m     11.0.1.20       k8s-node01     <none>           <none>
kube-system                    haproxy-k8s-node02                                 1/1     Running            0          11m     11.0.1.21       k8s-node02     <none>           <none>
kube-system                    kube-apiserver-k8s-master01                        1/1     Running            31         13h     11.0.1.10       k8s-master01   <none>           <none>
kube-system                    kube-apiserver-k8s-master02                        1/1     Running            10         13h     11.0.1.11       k8s-master02   <none>           <none>
kube-system                    kube-apiserver-k8s-master03                        1/1     Running            0          9m19s   11.0.1.12       k8s-master03   <none>           <none>
kube-system                    kube-controller-manager-k8s-master01               1/1     Running            15         13h     11.0.1.10       k8s-master01   <none>           <none>
kube-system                    kube-controller-manager-k8s-master02               1/1     Running            17         13h     11.0.1.11       k8s-master02   <none>           <none>
kube-system                    kube-controller-manager-k8s-master03               1/1     Running            0          10m     11.0.1.12       k8s-master03   <none>           <none>
kube-system                    kube-proxy-2rs56                                   1/1     Running            1          13h     11.0.1.20       k8s-node01     <none>           <none>
kube-system                    kube-proxy-9272j                                   1/1     Running            2          13h     11.0.1.10       k8s-master01   <none>           <none>
kube-system                    kube-proxy-9xsqh                                   1/1     Running            0          11m     11.0.1.12       k8s-master03   <none>           <none>
kube-system                    kube-proxy-bd9sx                                   1/1     Running            1          13h     11.0.1.21       k8s-node02     <none>           <none>
kube-system                    kube-proxy-msxzn                                   1/1     Running            2          13h     11.0.1.11       k8s-master02   <none>           <none>
kube-system                    kube-scheduler-k8s-master01                        1/1     Running            17         13h     11.0.1.10       k8s-master01   <none>           <none>
kube-system                    kube-scheduler-k8s-master02                        1/1     Running            18         13h     11.0.1.11       k8s-master02   <none>           <none>
kube-system                    kube-scheduler-k8s-master03                        1/1     Running            0          10m     11.0.1.12       k8s-master03   <none>           <none>
kube-system                    nodelocaldns-cwj2s                                 1/1     Running            3          13h     11.0.1.11       k8s-master02   <none>           <none>
kube-system                    nodelocaldns-ddvv9                                 1/1     Running            2          13h     11.0.1.10       k8s-master01   <none>           <none>
kube-system                    nodelocaldns-f6vpl                                 1/1     Running            1          13h     11.0.1.20       k8s-node01     <none>           <none>
kube-system                    nodelocaldns-rx8l6                                 1/1     Running            1          13h     11.0.1.21       k8s-node02     <none>           <none>
kube-system                    nodelocaldns-vr5tj                                 1/1     Running            0          11m     11.0.1.12       k8s-master03   <none>           <none>
kube-system                    openebs-localpv-provisioner-64fb84d4cc-whqvm       1/1     Running            4          59m     10.233.67.21    k8s-node01     <none>           <none>
kube-system                    snapshot-controller-0                              1/1     Running            1          5h57m   10.233.67.8     k8s-node01     <none>           <none>
kubesphere-controls-system     default-http-backend-76d9fb4bb7-5ftcf              1/1     Running            0          59m     10.233.67.25    k8s-node01     <none>           <none>
kubesphere-controls-system     kubectl-admin-69b8ff6d54-g5swp                     1/1     Running            0          59m     10.233.67.24    k8s-node01     <none>           <none>
kubesphere-devops-system       ks-jenkins-65db765f86-7gkdx                        1/1     Running            0          59m     10.233.123.31   k8s-node02     <none>           <none>
kubesphere-devops-system       s2ioperator-0                                      1/1     Running            0          34m     10.233.123.30   k8s-node02     <none>           <none>
kubesphere-monitoring-system   alertmanager-main-0                                1/2     CrashLoopBackOff   11         34m     10.233.123.29   k8s-node02     <none>           <none>
kubesphere-monitoring-system   alertmanager-main-1                                2/2     Running            0          4h23m   10.233.67.16    k8s-node01     <none>           <none>
kubesphere-monitoring-system   alertmanager-main-2                                1/2     CrashLoopBackOff   11         34m     10.233.123.28   k8s-node02     <none>           <none>
kubesphere-monitoring-system   kube-state-metrics-67588479db-978hf                3/3     Running            0          4h24m   10.233.67.11    k8s-node01     <none>           <none>
kubesphere-monitoring-system   node-exporter-57sk7                                2/2     Running            2          4h24m   11.0.1.10       k8s-master01   <none>           <none>
kubesphere-monitoring-system   node-exporter-dbqhj                                2/2     Running            0          11m     11.0.1.12       k8s-master03   <none>           <none>
kubesphere-monitoring-system   node-exporter-ds5jq                                2/2     Running            0          4h24m   11.0.1.20       k8s-node01     <none>           <none>
kubesphere-monitoring-system   node-exporter-mjnr7                                2/2     Running            0          4h24m   11.0.1.11       k8s-master02   <none>           <none>
kubesphere-monitoring-system   node-exporter-qntqg                                2/2     Running            0          4h24m   11.0.1.21       k8s-node02     <none>           <none>
kubesphere-monitoring-system   notification-manager-deployment-7bd887ffb4-k4mp6   1/1     Running            0          59m     10.233.67.22    k8s-node01     <none>           <none>
kubesphere-monitoring-system   notification-manager-deployment-7bd887ffb4-k7xln   1/1     Running            0          4h21m   10.233.67.17    k8s-node01     <none>           <none>
kubesphere-monitoring-system   notification-manager-operator-78595d8666-xhc4b     2/2     Running            3          59m     10.233.67.23    k8s-node01     <none>           <none>
kubesphere-monitoring-system   prometheus-k8s-0                                   3/3     Running            1          4h23m   10.233.67.14    k8s-node01     <none>           <none>
kubesphere-monitoring-system   prometheus-k8s-1                                   3/3     Running            1          4h23m   10.233.67.15    k8s-node01     <none>           <none>
kubesphere-monitoring-system   prometheus-operator-d7fdfccbf-2q85k                2/2     Running            0          4h24m   10.233.67.10    k8s-node01     <none>           <none>
kubesphere-system              ks-apiserver-655998d448-7s48f                      0/1     CrashLoopBackOff   3          3m56s   10.233.76.1     k8s-master03   <none>           <none>
kubesphere-system              ks-apiserver-655998d448-ptrqs                      1/1     Running            1          80m     10.233.113.27   k8s-master02   <none>           <none>
kubesphere-system              ks-console-d6446bd77-ckz7c                         1/1     Running            0          80m     10.233.67.20    k8s-node01     <none>           <none>
kubesphere-system              ks-console-d6446bd77-s4l8v                         1/1     Running            1          5h52m   10.233.113.16   k8s-master02   <none>           <none>
kubesphere-system              ks-controller-manager-5489dc9dd4-8t6vf             1/1     Running            2          4h8m    10.233.113.26   k8s-master02   <none>           <none>
kubesphere-system              ks-controller-manager-5489dc9dd4-wpssm             1/1     Running            2          3m24s   10.233.76.2     k8s-master03   <none>           <none>
kubesphere-system              ks-installer-66cb7455bb-b66td                      1/1     Running            1          5h58m   10.233.67.6     k8s-node01     <none>           <none>
kubesphere-system              minio-f69748945-758dm                              1/1     Running            1          5h55m   10.233.67.7     k8s-node01     <none>           <none>
kubesphere-system              openldap-0                                         1/1     Running            1          4h14m   10.233.113.25   k8s-master02   <none>           <none>
kubesphere-system              openldap-1                                         1/1     Running            2          4h10m   10.233.66.18    k8s-master01   <none>           <none>
kubesphere-system              redis-ha-haproxy-75575dcdd7-5bgkp                  0/1     Init:0/1           0          2m37s   10.233.76.3     k8s-master03   <none>           <none>
kubesphere-system              redis-ha-haproxy-75575dcdd7-7zpnt                  0/1     Init:0/1           0          90s     10.233.66.24    k8s-master01   <none>           <none>
kubesphere-system              redis-ha-haproxy-75575dcdd7-d8l59                  1/1     Running            13         5h56m   10.233.113.17   k8s-master02   <none>           <none>
kubesphere-system              redis-ha-server-0                                  2/2     Running            11         5h56m   10.233.66.21    k8s-master01   <none>           <none>
kubesphere-system              redis-ha-server-1                                  2/2     Running            0          4h13m   10.233.113.24   k8s-master02   <none>           <none>
kubesphere-system              redis-ha-server-2                                  0/2     Init:0/1           0          44s     10.233.123.32   k8s-node02     <none>           <none>
[root@k8s-master01 softwares]# 

Check the cluster again:

[root@k8s-master01 softwares]# kubectl cluster-info
Kubernetes master is running at https://lb.kubesphere.local:6443
coredns is running at https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/services/coredns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@k8s-master01 softwares]# 

Now shut down the k8s-master02 node (for example, by powering off the VM) and compare the node list before and after:

[root@k8s-master01 softwares]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   14h   v1.19.9
k8s-master02   Ready    master   14h   v1.19.9
k8s-master03   Ready    master   22m   v1.19.9
k8s-node01     Ready    worker   14h   v1.19.9
k8s-node02     Ready    worker   14h   v1.19.9
[root@k8s-master01 softwares]# kubectl get nodes
NAME           STATUS     ROLES    AGE   VERSION
k8s-master01   Ready      master   14h   v1.19.9
k8s-master02   NotReady   master   14h   v1.19.9
k8s-master03   Ready      master   24m   v1.19.9
k8s-node01     Ready      worker   14h   v1.19.9
k8s-node02     Ready      worker   14h   v1.19.9


Even with the k8s-master02 master down, the whole cluster remains fully usable:

[root@k8s-master01 softwares]# kubectl get pods -A -o wide
NAMESPACE                      NAME                                               READY   STATUS                  RESTARTS   AGE     IP              NODE           NOMINATED NODE   READINESS GATES
kube-system                    calico-kube-controllers-7fc49b8c4-89gzg            1/1     Running                 16         14h     10.233.113.18   k8s-master02   <none>           <none>
kube-system                    calico-node-6l4qp                                  1/1     Running                 2          14h     11.0.1.10       k8s-master01   <none>           <none>
kube-system                    calico-node-85qfv                                  1/1     Running                 2          14h     11.0.1.11       k8s-master02   <none>           <none>
kube-system                    calico-node-st6k8                                  1/1     Running                 0          31m     11.0.1.12       k8s-master03   <none>           <none>
kube-system                    calico-node-t7zmq                                  1/1     Running                 1          14h     11.0.1.20       k8s-node01     <none>           <none>
kube-system                    calico-node-ttd98                                  1/1     Running                 1          14h     11.0.1.21       k8s-node02     <none>           <none>
kube-system                    coredns-86cfc99d74-5cw5r                           1/1     Running                 3          14h     10.233.113.14   k8s-master02   <none>           <none>
kube-system                    coredns-86cfc99d74-mk6wp                           1/1     Running                 3          14h     10.233.113.15   k8s-master02   <none>           <none>
kube-system                    haproxy-k8s-node01                                 1/1     Running                 0          31m     11.0.1.20       k8s-node01     <none>           <none>
kube-system                    haproxy-k8s-node02                                 1/1     Running                 0          31m     11.0.1.21       k8s-node02     <none>           <none>
kube-system                    kube-apiserver-k8s-master01                        1/1     Running                 31         14h     11.0.1.10       k8s-master01   <none>           <none>
kube-system                    kube-apiserver-k8s-master02                        1/1     Running                 10         14h     11.0.1.11       k8s-master02   <none>           <none>
kube-system                    kube-apiserver-k8s-master03                        1/1     Running                 0          29m     11.0.1.12       k8s-master03   <none>           <none>
kube-system                    kube-controller-manager-k8s-master01               1/1     Running                 16         14h     11.0.1.10       k8s-master01   <none>           <none>
kube-system                    kube-controller-manager-k8s-master02               1/1     Running                 17         14h     11.0.1.11       k8s-master02   <none>           <none>
kube-system                    kube-controller-manager-k8s-master03               1/1     Running                 0          30m     11.0.1.12       k8s-master03   <none>           <none>
kube-system                    kube-proxy-2rs56                                   1/1     Running                 1          14h     11.0.1.20       k8s-node01     <none>           <none>
kube-system                    kube-proxy-9272j                                   1/1     Running                 2          14h     11.0.1.10       k8s-master01   <none>           <none>
kube-system                    kube-proxy-9xsqh                                   1/1     Running                 0          31m     11.0.1.12       k8s-master03   <none>           <none>
kube-system                    kube-proxy-bd9sx                                   1/1     Running                 1          14h     11.0.1.21       k8s-node02     <none>           <none>
kube-system                    kube-proxy-msxzn                                   1/1     Running                 2          14h     11.0.1.11       k8s-master02   <none>           <none>
kube-system                    kube-scheduler-k8s-master01                        1/1     Running                 18         14h     11.0.1.10       k8s-master01   <none>           <none>
kube-system                    kube-scheduler-k8s-master02                        1/1     Running                 18         14h     11.0.1.11       k8s-master02   <none>           <none>
kube-system                    kube-scheduler-k8s-master03                        1/1     Running                 0          30m     11.0.1.12       k8s-master03   <none>           <none>
kube-system                    nodelocaldns-cwj2s                                 1/1     Running                 3          14h     11.0.1.11       k8s-master02   <none>           <none>
kube-system                    nodelocaldns-ddvv9                                 1/1     Running                 2          14h     11.0.1.10       k8s-master01   <none>           <none>
kube-system                    nodelocaldns-f6vpl                                 1/1     Running                 1          14h     11.0.1.20       k8s-node01     <none>           <none>
kube-system                    nodelocaldns-rx8l6                                 1/1     Running                 1          14h     11.0.1.21       k8s-node02     <none>           <none>
kube-system                    nodelocaldns-vr5tj                                 1/1     Running                 0          31m     11.0.1.12       k8s-master03   <none>           <none>
kube-system                    openebs-localpv-provisioner-64fb84d4cc-whqvm       1/1     Running                 5          79m     10.233.67.21    k8s-node01     <none>           <none>
kube-system                    snapshot-controller-0                              1/1     Running                 1          6h17m   10.233.67.8     k8s-node01     <none>           <none>
kubesphere-controls-system     default-http-backend-76d9fb4bb7-5ftcf              1/1     Running                 0          79m     10.233.67.25    k8s-node01     <none>           <none>
kubesphere-controls-system     kubectl-admin-69b8ff6d54-g5swp                     1/1     Running                 0          79m     10.233.67.24    k8s-node01     <none>           <none>
kubesphere-devops-system       ks-jenkins-65db765f86-7gkdx                        1/1     Running                 0          79m     10.233.123.31   k8s-node02     <none>           <none>
kubesphere-devops-system       s2ioperator-0                                      1/1     Running                 0          54m     10.233.123.30   k8s-node02     <none>           <none>
kubesphere-monitoring-system   alertmanager-main-0                                2/2     Running                 14         54m     10.233.123.29   k8s-node02     <none>           <none>
kubesphere-monitoring-system   alertmanager-main-1                                2/2     Running                 0          4h43m   10.233.67.16    k8s-node01     <none>           <none>
kubesphere-monitoring-system   alertmanager-main-2                                2/2     Running                 14         54m     10.233.123.28   k8s-node02     <none>           <none>
kubesphere-monitoring-system   kube-state-metrics-67588479db-978hf                3/3     Running                 0          4h44m   10.233.67.11    k8s-node01     <none>           <none>
kubesphere-monitoring-system   node-exporter-57sk7                                2/2     Running                 2          4h44m   11.0.1.10       k8s-master01   <none>           <none>
kubesphere-monitoring-system   node-exporter-dbqhj                                2/2     Running                 0          31m     11.0.1.12       k8s-master03   <none>           <none>
kubesphere-monitoring-system   node-exporter-ds5jq                                2/2     Running                 0          4h44m   11.0.1.20       k8s-node01     <none>           <none>
kubesphere-monitoring-system   node-exporter-mjnr7                                2/2     Running                 0          4h44m   11.0.1.11       k8s-master02   <none>           <none>
kubesphere-monitoring-system   node-exporter-qntqg                                2/2     Running                 0          4h44m   11.0.1.21       k8s-node02     <none>           <none>
kubesphere-monitoring-system   notification-manager-deployment-7bd887ffb4-k4mp6   1/1     Running                 0          79m     10.233.67.22    k8s-node01     <none>           <none>
kubesphere-monitoring-system   notification-manager-deployment-7bd887ffb4-k7xln   1/1     Running                 0          4h41m   10.233.67.17    k8s-node01     <none>           <none>
kubesphere-monitoring-system   notification-manager-operator-78595d8666-xhc4b     2/2     Running                 4          79m     10.233.67.23    k8s-node01     <none>           <none>
kubesphere-monitoring-system   prometheus-k8s-0                                   3/3     Running                 1          4h43m   10.233.67.14    k8s-node01     <none>           <none>
kubesphere-monitoring-system   prometheus-k8s-1                                   3/3     Running                 1          4h43m   10.233.67.15    k8s-node01     <none>           <none>
kubesphere-monitoring-system   prometheus-operator-d7fdfccbf-2q85k                2/2     Running                 0          4h44m   10.233.67.10    k8s-node01     <none>           <none>
kubesphere-system              ks-apiserver-655998d448-vc5rr                      0/1     CrashLoopBackOff        8          17m     10.233.76.4     k8s-master03   <none>           <none>
kubesphere-system              ks-apiserver-655998d448-wf7dk                      0/1     CrashLoopBackOff        6          7m18s   10.233.66.25    k8s-master01   <none>           <none>
kubesphere-system              ks-console-d6446bd77-ckz7c                         1/1     Running                 0          100m    10.233.67.20    k8s-node01     <none>           <none>
kubesphere-system              ks-console-d6446bd77-hctmt                         1/1     Running                 0          7m18s   10.233.76.6     k8s-master03   <none>           <none>
kubesphere-system              ks-controller-manager-5489dc9dd4-2bh6w             0/1     CrashLoopBackOff        6          7m18s   10.233.66.26    k8s-master01   <none>           <none>
kubesphere-system              ks-controller-manager-5489dc9dd4-wpssm             0/1     CrashLoopBackOff        8          23m     10.233.76.2     k8s-master03   <none>           <none>
kubesphere-system              ks-installer-66cb7455bb-b66td                      1/1     Running                 1          6h18m   10.233.67.6     k8s-node01     <none>           <none>
kubesphere-system              minio-f69748945-758dm                              1/1     Running                 1          6h15m   10.233.67.7     k8s-node01     <none>           <none>
kubesphere-system              openldap-0                                         1/1     Running                 1          4h34m   10.233.113.25   k8s-master02   <none>           <none>
kubesphere-system              openldap-1                                         1/1     Running                 2          4h30m   10.233.66.18    k8s-master01   <none>           <none>
kubesphere-system              redis-ha-haproxy-75575dcdd7-7zpnt                  0/1     Init:CrashLoopBackOff   6          21m     10.233.66.24    k8s-master01   <none>           <none>
kubesphere-system              redis-ha-haproxy-75575dcdd7-c4xzx                  0/1     Init:0/1                3          7m18s   10.233.123.35   k8s-node02     <none>           <none>
kubesphere-system              redis-ha-haproxy-75575dcdd7-md4k8                  0/1     Init:0/1                4          11m     10.233.76.5     k8s-master03   <none>           <none>
kubesphere-system              redis-ha-server-0                                  2/2     Running                 11         6h16m   10.233.66.21    k8s-master01   <none>           <none>
kubesphere-system              redis-ha-server-1                                  2/2     Running                 0          4h33m   10.233.113.24   k8s-master02   <none>           <none>
kubesphere-system              redis-ha-server-2                                  0/2     Init:CrashLoopBackOff   6          12m     10.233.123.34   k8s-node02     <none>           <none>
[root@k8s-master01 softwares]# 
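
As a final check that the control plane can still schedule new workloads with one master down, run a quick smoke test (the deployment name and image are arbitrary):

# Create a throwaway deployment, wait for it to roll out, then clean it up
kubectl create deployment ha-smoke-test --image=nginx
kubectl rollout status deployment/ha-smoke-test --timeout=120s
kubectl delete deployment ha-smoke-test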

Related articles:
Kubernetes HA cluster using kubeadm with nginx LB not working when 1 master node down - Error from server: etcdserver: request timed out
Zhihu | Disaster-recovery failover | Kubernetes core architecture and highly available clusters explained (including a deployment plan that succeeds 100% of the time)

Those who keep doing succeed; those who keep walking arrive.