K8s Cluster Setup - 01 - Binary Installation - Base Environment Preparation

This article deploys a highly available Kubernetes cluster manually from the binaries. Note: all commands are executed as the root user.

1. Base Environment Preparation


Reference: Installing Kubernetes with deployment tools (official documentation)

2. System Settings

2.1 Node Requirements

Number of nodes: >= 3
CPU: >= 2 cores
Memory: >= 2 GB
Security group: disabled (allow traffic between nodes on any port, as well as IPIP tunnel traffic)

2.2 Environment Description

We use three CentOS 7.5 virtual machines, detailed in the table below:

OS          IP Address      Node Role       CPU   Memory   Hostname   Alias
centos-7.5  192.168.1.123   master          >=2   >=2G     node-1     hombd03
centos-7.5  192.168.1.124   master, worker  >=2   >=2G     node-2     hombd04
centos-7.5  192.168.1.125   worker          >=2   >=2G     node-3     hombd05

As the table shows, node-2 provides high availability for the master: it serves as both a master and a worker node.
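The distribution scripts later in this article address nodes by hostname (node-1, node-2, node-3), so each machine must be able to resolve those names. A minimal sketch, assuming the IP addresses and aliases from the table above:

```shell
# Run on every node: map the hostnames/aliases used in this article
# to the node IPs so ssh/scp by name works (IPs from the table above)
cat >> /etc/hosts <<EOF
192.168.1.123 node-1 hombd03
192.168.1.124 node-2 hombd04
192.168.1.125 node-3 hombd05
EOF
```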

2.3 Configure the Linux Environment (run on all three nodes)

Disable the firewall, SELinux, and swap, and reset iptables.

# Disable SELinux
$ setenforce 0
$ sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config

# Disable the firewall
$ systemctl stop firewalld && systemctl disable firewalld

# Reset iptables rules
$ iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT

# Disable swap
$ swapoff -a && free -h

# Stop dnsmasq (otherwise containers may fail to resolve domain names)
$ service dnsmasq stop && systemctl disable dnsmasq
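Note that `swapoff -a` only disables swap until the next reboot. A common follow-up (a sketch, not part of the original steps) is to comment out the swap entry in /etc/fstab so it stays off permanently:

```shell
# Comment out any active swap entry so swap stays disabled after reboot
sed -ri 's/^([^#].*\bswap\b.*)$/#\1/' /etc/fstab

# Verify: this should print nothing once swap is fully disabled
swapon --show
```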

2.4 Kubernetes Kernel Parameters (run on all three nodes)

# Create the configuration file
$ cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
vm.overcommit_memory = 1
EOF

# Apply the configuration
$ sysctl -p /etc/sysctl.d/kubernetes.conf


On a fresh node, the sysctl command may report the following error:

[root@hombd05 ~]# cat > /etc/sysctl.d/kubernetes.conf <<EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_nonlocal_bind = 1
> net.ipv4.ip_forward = 1
> vm.swappiness = 0
> vm.overcommit_memory = 1
> EOF
[root@hombd05 ~]# sysctl -p /etc/sysctl.d/kubernetes.conf
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
vm.overcommit_memory = 1
[root@hombd05 ~]# 

This happens because the br_netfilter kernel module is not loaded. The following command resolves it:

$ modprobe br_netfilter

Applying the fix:

[root@hombd03 ~]# modprobe  br_netfilter
[root@hombd03 ~]# sysctl -p /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
vm.overcommit_memory = 1
[root@hombd03 ~]# 
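`modprobe` only loads the module for the current boot. To have br_netfilter loaded automatically on every boot, one option (assumed here, using the systemd modules-load mechanism) is:

```shell
# Have systemd-modules-load load br_netfilter automatically at boot
cat > /etc/modules-load.d/kubernetes.conf <<EOF
br_netfilter
EOF

# Confirm the module is currently loaded
lsmod | grep br_netfilter
```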

2.5 Configure Passwordless SSH Login

To make copying files easier, pick a staging node (any node, inside or outside the cluster) and configure passwordless SSH from it to all the other nodes.

# Check whether an RSA public key already exists (here on node-1)
$ cat ~/.ssh/id_rsa.pub

# If it does not exist, generate a new key pair
$ ssh-keygen -t rsa

# Print id_rsa.pub so its contents can be copied to the other machines
$ cat ~/.ssh/id_rsa.pub

# On every other node (including the worker nodes), run:
# $ echo "<file_content>" >> ~/.ssh/authorized_keys

# Run this command on all three nodes:
 echo "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDwnvTZaqXmOUrPbvuNayCOZSGT9/OFBQDuxQ7uv9r4B9S57WclhO3B6Pp3nTt9uWXQviMV557PKsSnbY2JP9F1JTesrj0GxfFTKb5efG79qLTwZ3LVVdRrsBIYBUsKt+y7rM8k+nHsRxG+fbuvy2MsxpDw4Wz9I68HogIYUPOrAIjHz9+UQUURQ3urmIiTeM9pTknzoLTHgdxrlSwvlZsfu1Rmz0jOQN9+im/aNPB6Dd3wwQ4d58N/urE59cSK9z/q/7Bo+4Li0ZdEmcRm6HpZbC2rzjOVJ5LYUjVpr5mvdZLxSzR+Jqmxf+hem16mhhkbE0bqOT5FD0kjfrMxNdC/ root@localhost.localdomain" >> ~/.ssh/authorized_keys
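As an alternative to pasting the key by hand, `ssh-copy-id` performs the same append into authorized_keys for you (node IPs assumed from the table above); it prompts once for each node's root password:

```shell
# Copy the local public key into each node's ~/.ssh/authorized_keys
for host in 192.168.1.123 192.168.1.124 192.168.1.125; do
  ssh-copy-id root@${host}
done
```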

Then test the passwordless login:

[root@hombd03 ~]# ssh root@192.168.1.125
Last login: Fri Jun  3 11:34:14 2022 from 192.168.1.25
[root@hombd05 ~]# ssh root@192.168.1.124
Last login: Fri Jun  3 12:12:29 2022 from 192.168.1.25
[root@hombd04 ~]# 

3. Prepare the Kubernetes Packages

3.1 Download the Packages

Download the archives on any one node, then copy them to all the other nodes.

Master components: kube-apiserver, kube-controller-manager, kube-scheduler, kubectl
Worker components: kubelet, kube-proxy

Download from the netdisk here (extraction code: 9527)

# Set the version number
$ export VERSION=v1.20.2

# Download the master components
$ wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kube-apiserver
$ wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kube-controller-manager
$ wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kube-scheduler
$ wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kubectl

# Download the worker components
$ wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kube-proxy
$ wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kubelet

# Download and unpack etcd
$ wget https://github.com/etcd-io/etcd/releases/download/v3.4.10/etcd-v3.4.10-linux-amd64.tar.gz
$ tar -xvf etcd-v3.4.10-linux-amd64.tar.gz
$ mv etcd-v3.4.10-linux-amd64/etcd* .
$ rm -fr etcd-v3.4.10-linux-amd64*

# Make all the kube* binaries executable
$ chmod +x kube*
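Before distributing the binaries it is worth verifying them. The Kubernetes release bucket publishes a checksum file next to each binary (the `.sha256` URL pattern is assumed here to mirror the download URLs above):

```shell
# Verify each downloaded binary against its published SHA-256 checksum
export VERSION=v1.20.2
for bin in kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet kube-proxy; do
  wget -q "https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/${bin}.sha256"
  # The .sha256 file contains only the hash; build the "hash  filename" line
  echo "$(cat ${bin}.sha256)  ${bin}" | sha256sum --check -
done
```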

The archive has already been downloaded locally from the Baidu netdisk; extract it:

[root@hombd03 softwards]# tar -xvf kubernetes-v1.20.2.tar.gz
kubernetes-v1.20.2/
kubernetes-v1.20.2/kube-apiserver
kubernetes-v1.20.2/kube-controller-manager
kubernetes-v1.20.2/kube-scheduler
kubernetes-v1.20.2/kubectl
kubernetes-v1.20.2/kube-proxy
kubernetes-v1.20.2/kubelet
kubernetes-v1.20.2/etcd-v3.4.10-linux-amd64.tar.gz
[root@hombd03 softwards]# 

3.2 Distribute the Packages

After the download completes, scp the files each node needs to it.

# Distribute the master components to the master nodes
$ MASTERS=(node-1 node-2)
for instance in ${MASTERS[@]}; do
  scp kube-apiserver kube-controller-manager kube-scheduler kubectl root@${instance}:/usr/local/bin/
done

# Distribute the worker components to the worker nodes
$ WORKERS=(node-2 node-3)
for instance in ${WORKERS[@]}; do
  scp kubelet kube-proxy root@${instance}:/usr/local/bin/
done

# Distribute the etcd components to the etcd nodes; etcd runs highly available, so we deploy it on all three machines
$ ETCDS=(node-1 node-2 node-3)
for instance in ${ETCDS[@]}; do
  scp etcd etcdctl root@${instance}:/usr/local/bin/
done

The commands as actually executed, with the real hostnames:

# Distribute the master components to the master nodes
$ MASTERS=(homaybd03 homaybd04)
for instance in ${MASTERS[@]}; do
  scp kube-apiserver kube-controller-manager kube-scheduler kubectl root@${instance}:/usr/local/bin/
done

# Distribute the worker components to the worker nodes
$ WORKERS=(homaybd04 homaybd05)
for instance in ${WORKERS[@]}; do
  scp kubelet kube-proxy root@${instance}:/usr/local/bin/
done

# Distribute the etcd components to the etcd nodes; etcd runs highly available, so we deploy it on all three machines
$ ETCDS=(homaybd03 homaybd04 homaybd05)
for instance in ${ETCDS[@]}; do
  scp etcd etcdctl root@${instance}:/usr/local/bin/
done

Execution log:

[root@hombd03 softwards]# cd kubernetes-v1.20.2
[root@hombd03 kubernetes-v1.20.2]# ls -l
total 473048
-rw-r--r--. 1 root root  17370166 Jul 17  2020 etcd-v3.4.10-linux-amd64.tar.gz
-rwxr-xr-x. 1 root root 118132736 Jan 14  2021 kube-apiserver
-rwxr-xr-x. 1 root root 112316416 Jan 14  2021 kube-controller-manager
-rwxr-xr-x. 1 root root  40230912 Jan 14  2021 kubectl
-rwxr-xr-x. 1 root root 114015176 Jan 14  2021 kubelet
-rwxr-xr-x. 1 root root  39485440 Jan 14  2021 kube-proxy
-rwxr-xr-x. 1 root root  42848256 Jan 14  2021 kube-scheduler
[root@homaybd03 kubernetes-v1.20.2]# MASTERS=(homaybd03 homaybd04)
[root@homaybd03 kubernetes-v1.20.2]# for instance in ${MASTERS[@]}; do
>   scp kube-apiserver kube-controller-manager kube-scheduler kubectl root@${instance}:/usr/local/bin/
> done
kube-apiserver                                                           100%  113MB 140.0MB/s   00:00    
kube-controller-manager                                                  100%  107MB 142.6MB/s   00:00    
kube-scheduler                                                           100%   41MB 141.2MB/s   00:00    
kubectl                                                                  100%   38MB 129.9MB/s   00:00    
kube-apiserver                                                           100%  113MB 112.7MB/s   00:01    
kube-controller-manager                                                  100%  107MB 107.1MB/s   00:01    
kube-scheduler                                                           100%   41MB  77.6MB/s   00:00    
kubectl                                                                  100%   38MB  82.2MB/s   00:00    
[root@hombd03 kubernetes-v1.20.2]# 

Extract the etcd archive:

[root@homaybd03 kubernetes-v1.20.2]# ls -l
total 473048
-rw-r--r--. 1 root root  17370166 Jul 17  2020 etcd-v3.4.10-linux-amd64.tar.gz
-rwxr-xr-x. 1 root root 118132736 Jan 14  2021 kube-apiserver
-rwxr-xr-x. 1 root root 112316416 Jan 14  2021 kube-controller-manager
-rwxr-xr-x. 1 root root  40230912 Jan 14  2021 kubectl
-rwxr-xr-x. 1 root root 114015176 Jan 14  2021 kubelet
-rwxr-xr-x. 1 root root  39485440 Jan 14  2021 kube-proxy
-rwxr-xr-x. 1 root root  42848256 Jan 14  2021 kube-scheduler
[root@hombd03 kubernetes-v1.20.2]# tar -xvf etcd-v3.4.10-linux-amd64.tar.gz 

[root@hombd03 kubernetes-v1.20.2]# cd etcd-v3.4.10-linux-amd64
[root@hombd03 etcd-v3.4.10-linux-amd64]# ls -l
total 40564
drwxr-xr-x. 14 630384594 600260513     4096 Jul 17  2020 Documentation
-rwxr-xr-x.  1 630384594 600260513 23843808 Jul 17  2020 etcd
-rwxr-xr-x.  1 630384594 600260513 17620576 Jul 17  2020 etcdctl
-rw-r--r--.  1 630384594 600260513    43094 Jul 17  2020 README-etcdctl.md
-rw-r--r--.  1 630384594 600260513     8431 Jul 17  2020 README.md
-rw-r--r--.  1 630384594 600260513     7855 Jul 17  2020 READMEv2-etcdctl.md
[root@hombd03 etcd-v3.4.10-linux-amd64]# pwd
/opt/softwards/kubernetes-v1.20.2/etcd-v3.4.10-linux-amd64
[root@hombd03 etcd-v3.4.10-linux-amd64]# 

Then distribute the etcd binaries:

$ cd /opt/softwards/kubernetes-v1.20.2/etcd-v3.4.10-linux-amd64
# Distribute the etcd components to the etcd nodes; etcd runs highly available, so we deploy it on all three machines
$ ETCDS=(homaybd03 homaybd04 homaybd05)
for instance in ${ETCDS[@]}; do
  scp etcd etcdctl root@${instance}:/usr/local/bin/
done

Check the files under /usr/local/bin (hombd03 is a master-only node, so kubelet and kube-proxy are not expected here):

[root@hombd03 bin]# ls -l
total 346676
-rwxr-xr-x. 1 root root  23843808 Jun  3 23:54 etcd
-rwxr-xr-x. 1 root root  17620576 Jun  3 23:54 etcdctl
-rwxr-xr-x. 1 root root 118132736 Jun  3 23:44 kube-apiserver
-rwxr-xr-x. 1 root root 112316416 Jun  3 23:44 kube-controller-manager
-rwxr-xr-x. 1 root root  40230912 Jun  3 23:44 kubectl
-rwxr-xr-x. 1 root root  42848256 Jun  3 23:44 kube-scheduler
[root@hombd03 bin]# 
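To confirm the distribution succeeded everywhere, a quick check (a sketch using this article's hostnames) is to list the binaries on every node over SSH:

```shell
# Each node should show the binaries its role requires:
#   masters: kube-apiserver, kube-controller-manager, kube-scheduler, kubectl
#   workers: kubelet, kube-proxy
#   all:     etcd, etcdctl
for host in homaybd03 homaybd04 homaybd05; do
  echo "== ${host} =="
  ssh root@${host} 'ls /usr/local/bin/ | grep -E "^(kube|etcd)"'
done
```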

Related articles:
Cluster-82: K8s Cluster Setup

Those who act often succeed; those who walk often arrive.