Deploying etcd

Download and install the etcd binaries (All nodes)

Since we are deploying etcd on all 3 nodes to form an etcd cluster, the steps below have to be carried out on each of the three hosts to create the etcd configuration. The configuration is essentially the same on every node; only the bound IP address differs.

  1. First make sure vim, wget, and curl are installed on the CentOS 8 virtual machine:
sudo dnf -y install curl vim wget
  2. Download the etcd binary release:
export RELEASE="3.3.13"
wget https://github.com/etcd-io/etcd/releases/download/v${RELEASE}/etcd-v${RELEASE}-linux-amd64.tar.gz
  3. Unpack the archive:
tar xvf etcd-v${RELEASE}-linux-amd64.tar.gz
cd etcd-v${RELEASE}-linux-amd64
  4. Move the etcd and etcdctl binaries into /usr/local/bin:
sudo mv etcd etcdctl /usr/local/bin
  5. Confirm the etcd version:
$ etcd --version
etcd Version: 3.3.13
Git SHA: 98d3084
Go Version: go1.10.8
Go OS/Arch: linux/amd64
$ etcdctl --version
etcdctl version: 3.3.13
API version: 2

Create the etcd directories and user (All nodes)

We store the etcd configuration files in /etc/etcd and the data in /var/lib/etcd. The user and group used to run the service are both named etcd.

  1. Create the etcd user and group:
sudo groupadd --system etcd
sudo useradd -s /sbin/nologin --system -g etcd etcd
  2. Create the etcd configuration directory and data directory:
sudo mkdir -p /var/lib/etcd/
sudo mkdir /etc/etcd
sudo chown -R etcd:etcd /var/lib/etcd/

Configure etcd (All nodes)

We need to create the systemd service unit file on all three servers. First, however, a few environment variables are required.

  1. On each server, set these variables by running the following commands:
INT_NAME="ens32"
ETCD_HOST_IP=$(ip addr show $INT_NAME | grep "inet\b" | awk '{print $2}' | cut -d/ -f1)
ETCD_NAME=$(hostname -s)

Where:

  • INT_NAME is the name of the network interface used for cluster communication. Change it to match your server's configuration.
  • ETCD_HOST_IP is the internal IP address of that network interface. It is used to serve client requests and to communicate with the other etcd cluster peers.
  • ETCD_NAME – every etcd member must have a unique name within the cluster. The command above sets the etcd name to the hostname of the current machine.
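
A quick sanity check (a minimal sketch; ens32 is simply the interface name used in this walkthrough) is to echo the captured values before continuing:

echo "ETCD_NAME=${ETCD_NAME}  ETCD_HOST_IP=${ETCD_HOST_IP}"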
  2. With all the variables set, create the etcd.service systemd unit file.
    First, check the contents of /etc/hosts:
$ cat /etc/hosts
192.168.23.120 k8smaster
192.168.23.121 k8snode1
192.168.23.122 k8snode2

So, for the Master node, the configuration looks like this:

cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd service
Documentation=https://github.com/etcd-io/etcd

[Service]
Type=notify
User=etcd
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --data-dir=/var/lib/etcd \\
  --initial-advertise-peer-urls http://${ETCD_HOST_IP}:2380 \\
  --listen-peer-urls http://${ETCD_HOST_IP}:2380 \\
  --listen-client-urls http://${ETCD_HOST_IP}:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls http://${ETCD_HOST_IP}:2379 \\
  --initial-cluster-token etcd-cluster \\
  --initial-cluster k8smaster=http://192.168.23.120:2380,k8snode1=http://192.168.23.121:2380,k8snode2=http://192.168.23.122:2380 \\
  --initial-cluster-state new

[Install]
WantedBy=multi-user.target
EOF
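
The same heredoc is run on k8snode1 and k8snode2. Because ETCD_NAME and ETCD_HOST_IP were derived on each host, only the rendered values differ; for example, on k8snode1 the generated unit should contain lines such as the following (IP taken from the /etc/hosts listing above):

  --name k8snode1 \
  --initial-advertise-peer-urls http://192.168.23.121:2380 \
  --listen-peer-urls http://192.168.23.121:2380 \
  --advertise-client-urls http://192.168.23.121:2379 \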
  3. Start the etcd service:
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd

Note: start the etcd service on the two worker nodes first, and only then on the master node. The master's unit file /etc/systemd/system/etcd.service sets --initial-cluster-state to new and lists all three members' IP:PORT pairs in --initial-cluster, so when etcd starts on the master it immediately tries to connect to the etcd members on k8snode1 and k8snode2; if those services are not yet running, the master's etcd cannot reach them. Therefore start etcd on the two worker nodes first, and then on the master.
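
In other words, the commands from the previous step are run in this order (a sketch using the hostnames from /etc/hosts above):

# 1. first on k8snode1 and k8snode2
sudo systemctl daemon-reload && sudo systemctl enable etcd && sudo systemctl start etcd
# 2. then on k8smaster
sudo systemctl daemon-reload && sudo systemctl enable etcd && sudo systemctl start etcd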

Check the etcd service on the three nodes:

[root@k8smaster ~]# etcdctl cluster-health
member 2c2f9cff61deb56a is healthy: got healthy result from http://192.168.23.121:2379
member 6cbf2aac5ae784d9 is healthy: got healthy result from http://192.168.23.120:2379
member f05603112accb0c4 is healthy: got healthy result from http://192.168.23.122:2379
cluster is healthy

Testing the etcd cluster installation

(1) Test your setup by listing the etcd cluster members:

[root@k8smaster ~]# etcdctl member list
2c2f9cff61deb56a: name=k8snode1 peerURLs=http://192.168.23.121:2380 clientURLs=http://192.168.23.121:2379 isLeader=false
6cbf2aac5ae784d9: name=k8smaster peerURLs=http://192.168.23.120:2380 clientURLs=http://192.168.23.120:2379 isLeader=false
f05603112accb0c4: name=k8snode2 peerURLs=http://192.168.23.122:2380 clientURLs=http://192.168.23.122:2379 isLeader=true
[root@k8smaster ~]# etcdctl set /message "Hello World"
Hello World
[root@k8snode1 ~]# etcdctl get /message
Hello World
[root@k8snode2 ~]# etcdctl get /message
Hello World

(2) Create a directory:

[root@k8snode2 ~]# etcdctl mkdir /myservice
[root@k8snode2 ~]# etcdctl set /myservice/container1 localhost:8080
localhost:8080
[root@k8snode2 ~]# etcdctl ls /myservice
/myservice/container1
[root@k8snode2 ~]#
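
The test keys can be cleaned up afterwards with the v2 delete commands (optional; shown only to keep the key space tidy):

etcdctl rm /message
etcdctl rm --recursive /myservice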
(3) Test leader failover
    When a leader fails, the etcd cluster automatically elects a new one. The election does not start the instant the leader fails: because failure detection is timeout-based, it takes roughly one election timeout to elect a new leader.

During the leader election the cluster cannot process any writes. Write requests sent during the election are queued until a new leader has been elected.

The current leader is on k8snode2:

[root@k8snode2 ~]# etcdctl member list
2c2f9cff61deb56a: name=k8snode1 peerURLs=http://192.168.23.121:2380 clientURLs=http://192.168.23.121:2379 isLeader=false
6cbf2aac5ae784d9: name=k8smaster peerURLs=http://192.168.23.120:2380 clientURLs=http://192.168.23.120:2379 isLeader=false
f05603112accb0c4: name=k8snode2 peerURLs=http://192.168.23.122:2380 clientURLs=http://192.168.23.122:2379 isLeader=true

Stop the etcd service on k8snode2:

[root@k8snode2 ~]# systemctl stop etcd

Check the cluster status again:

[root@k8smaster ~]# etcdctl member list
2c2f9cff61deb56a: name=k8snode1 peerURLs=http://192.168.23.121:2380 clientURLs=http://192.168.23.121:2379 isLeader=true
6cbf2aac5ae784d9: name=k8smaster peerURLs=http://192.168.23.120:2380 clientURLs=http://192.168.23.120:2379 isLeader=false
f05603112accb0c4: name=k8snode2 peerURLs=http://192.168.23.122:2380 clientURLs=http://192.168.23.122:2379 isLeader=false

As the output shows, the leader role has now switched to k8snode1.
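
To bring the stopped member back (a quick optional check), start etcd on k8snode2 again and confirm that all three members report healthy:

[root@k8snode2 ~]# systemctl start etcd
[root@k8smaster ~]# etcdctl cluster-health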

At this point we have successfully installed etcd, a highly available distributed key-value store, on the three CentOS 8 nodes.

Deploying flannel

Flannel is a simple and easy way to configure a layer 3 network fabric designed for Kubernetes.

How does flannel work?

Flannel runs a small binary agent called flanneld on each host and is responsible for allocating a subnet lease to each host out of a larger, preconfigured address space. Flannel stores the network configuration, the allocated subnets, and any auxiliary data (such as the host's public IP) either directly in etcd or via the Kubernetes API. Packets are forwarded using one of several backend mechanisms, including VXLAN and various cloud integrations.
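
For reference, when flanneld is backed by etcd (as it is later in this article), the per-host subnet leases it allocates can be listed directly under the configured prefix — a sketch assuming the /coreos.com/network prefix used below:

etcdctl ls /coreos.com/network/subnets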

Networking details

Platforms like Kubernetes assume that each container (Pod) has a unique, routable IP inside the cluster. The advantage of this model is that it removes the port-mapping complexity that comes from sharing a single host IP.

Flannel is responsible for providing a layer 3 IPv4 network between multiple nodes in a cluster. Flannel does not control how containers are networked to the host, only how traffic is transported between hosts. However, flannel does provide a CNI plugin for Kubernetes and guidance for integrating with Docker.

Flannel is focused on networking. For network policy, other projects such as Calico can be used.

How is it used with Kubernetes?

The easiest way to deploy flannel with Kubernetes is to use one of the several deployment tools and distributions that set up flannel for the cluster by default. For example, CoreOS's Tectonic sets up flannel in its Kubernetes clusters, using the open-source Tectonic Installer to drive the setup process.

Though not required, it is recommended that flannel use the Kubernetes API as its backing store, which avoids having to deploy a separate etcd cluster just for flannel. This flannel mode is known as the kube subnet manager.

Deploying flannel manually

Flannel can be added to any existing Kubernetes cluster, although it is simplest to add flannel before any Pods using the Pod network have started.
For Kubernetes v1.17+: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The following demonstrates installing flannel manually:

  1. Download and unpack the flannel binaries:
wget https://github.com/flannel-io/flannel/releases/download/v0.12.0/flannel-v0.12.0-linux-amd64.tar.gz
tar zxvf flannel-v0.12.0-linux-amd64.tar.gz
  2. Move the extracted flanneld and mk-docker-opts.sh into /usr/local/bin:
mv flanneld mk-docker-opts.sh /usr/local/bin
  3. Create the flannel configuration file /etc/flannel/flanneld:
mkdir -p /etc/flannel/
vim /etc/flannel/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=http://192.168.23.120:2379,http://192.168.23.121:2379,http://192.168.23.122:2379"
  4. Write the Pod network information into the etcd cluster (it can be read back to verify, as shown below):
etcdctl set /coreos.com/network/config '{"Network":"192.168.23.0/24","Backend":{"Type":"vxlan"}}'
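
The key can be read back to confirm it was written correctly:

etcdctl get /coreos.com/network/config
{"Network":"192.168.23.0/24","Backend":{"Type":"vxlan"}}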
  5. Create the flanneld systemd service unit file /usr/lib/systemd/system/flanneld.service:
vim /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/flannel/flanneld
ExecStart=/usr/local/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
  6. Enable and start the flanneld service:
systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld
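
Once flanneld is running, it records the subnet leased for this host, and the mk-docker-opts.sh ExecStartPost line above generates a DOCKER_NETWORK_OPTIONS variable that Docker can consume. A minimal check and integration sketch follows (the exact values depend on the subnet flannel assigns to the host):

cat /run/flannel/subnet.env
# To make Docker use flannel's subnet, reference the generated file from docker.service, e.g.:
#   EnvironmentFile=/run/flannel/subnet.env
#   ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS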

At this point the flannel network has been deployed successfully.

Installing Kubernetes

Master node

The Master node mainly runs the kube-apiserver, kube-scheduler, and kube-controller-manager components.

  1. Download and unpack kubernetes-server-linux-amd64.tar.gz:
wget https://dl.k8s.io/v1.18.0/kubernetes-server-linux-amd64.tar.gz
tar -zxvf kubernetes-server-linux-amd64.tar.gz
  2. Copy kube-apiserver, kube-controller-manager, kube-scheduler, and kubectl from the extracted kubernetes/server/bin/ directory into /usr/local/bin/:
cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
  3. Configure kube-apiserver
    (1) Create the kube-apiserver configuration file:
mkdir -p /etc/kubernetes/config
cd /etc/kubernetes/config
vim kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true --v=4 --etcd-servers=http://192.168.23.120:2379,http://192.168.23.121:2379,http://192.168.23.122:2379 --address=0.0.0.0 --port=8080 --advertise-address=192.168.23.120 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"

(2) Create the kube-apiserver systemd service unit file:

vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config/kube-apiserver
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

(3) Start the kube-apiserver service:

systemctl daemon-reload
systemctl enable kube-apiserver.service
systemctl start kube-apiserver.service
systemctl status kube-apiserver.service
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2021-01-26 21:13:21 UTC; 45min ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 4752 (kube-apiserver)
    Tasks: 14 (limit: 23529)
   Memory: 290.3M
   CGroup: /system.slice/kube-apiserver.service
           └─4752 /usr/local/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=http://192.168.23.120:2379,http://192.168.23.121:2379,http://192.168.23.122:2379 --address=0.0.>
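
Because the API server was started with the insecure port --port=8080, a quick health check over plain HTTP is also possible:

curl http://127.0.0.1:8080/healthz
ok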
  4. Configure kube-scheduler
    (1) Create the kube-scheduler configuration file:
cd /etc/kubernetes/config
vim kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=192.168.23.120:8080 --leader-elect"

(2) Create the kube-scheduler systemd service unit file:

vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config/kube-scheduler
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

(3) Start the kube-scheduler service and check its status:

systemctl daemon-reload
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service
systemctl status kube-scheduler.service
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2021-01-26 21:14:35 UTC; 49min ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 4832 (kube-scheduler)
    Tasks: 13 (limit: 23529)
   Memory: 17.0M
   CGroup: /system.slice/kube-scheduler.service
           └─4832 /usr/local/bin/kube-scheduler --logtostderr=true --v=4 --master=192.168.23.120:8080 --leader-elect
  5. Configure kube-controller-manager
    (1) Create the kube-controller-manager configuration file:
cd /etc/kubernetes/config
vim kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1  --service-cluster-ip-range=10.0.0.0/24 --cluster-name=kubernetes"

(2) Create the kube-controller-manager systemd service unit file:

vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/etc/kubernetes/config/kube-controller-manager
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

(3) Start the service and check its status:

systemctl daemon-reload
systemctl enable kube-controller-manager.service
systemctl start kube-controller-manager.service
systemctl status kube-controller-manager.service
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2021-01-26 21:49:35 UTC; 18min ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 5979 (kube-controller)
    Tasks: 12 (limit: 23529)
   Memory: 38.1M
   CGroup: /system.slice/kube-controller-manager.service
           └─5979 /usr/local/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.0.0.0/2>
  6. Check the status of the cluster components with kubectl:
[root@k8smaster config]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
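
With no kubeconfig present, kubectl on the master falls back to http://localhost:8080, which is why the command above works without any extra flags; the API endpoint in use can be confirmed with:

kubectl cluster-info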

Node machines (both worker nodes)

The Node machines mainly run the kubelet and kube-proxy components. The kubelet runs on every worker node; it receives requests from kube-apiserver, manages Pod containers, and executes interactive commands. On startup the kubelet automatically registers the node's information with kube-apiserver. kube-proxy watches kube-apiserver for changes to Services and Endpoints and creates routing rules to load-balance Service traffic.

  1. Download and unpack kubernetes-node-linux-amd64.tar.gz, then copy the files from the extracted node/bin/ directory into /usr/local/bin:
$ tar -zxvf kubernetes-node-linux-amd64.tar.gz
$ pwd
/home/kubernetes/soft/kubernetes/node/bin
$ cp * /usr/local/bin/
  2. Create the kubelet kubeconfig file /etc/kubernetes/config/kubelet.config (connectivity to the API server it points at is checked below):
mkdir -p /etc/kubernetes/config
cd /etc/kubernetes/config
vim kubelet.config
apiVersion: v1
clusters:
- cluster:
    server: http://192.168.23.120:8080
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
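
Before going further it is worth verifying, from each worker node, that the API server address referenced by the server: line above is reachable on its insecure port (a quick check):

curl http://192.168.23.120:8080/version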
  3. Configure kubelet
    (1) Create the kubelet configuration file:
[root@k8snode1 config]# vim kubelet
KUBELET_OPTS="--register-node=true --hostname-override=192.168.23.121 --kubeconfig=/etc/kubernetes/config/kubelet.config --cluster-dns=192.168.23.120 --cluster-domain=cluster.local --pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest --logtostderr=true --cgroup-driver=systemd"

(2) Create the kubelet systemd service unit file:

[root@k8snode1 config]# vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/etc/kubernetes/config/kubelet
ExecStart=/usr/local/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

(3) Enable and start the kubelet service and check its status:

systemctl daemon-reload
systemctl enable kubelet.service
systemctl start kubelet.service
systemctl status kubelet.service
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2021-01-26 21:32:03 UTC; 42min ago
 Main PID: 26476 (kubelet)
    Tasks: 14 (limit: 11245)
   Memory: 38.0M
   CGroup: /system.slice/kubelet.service
           └─26476 /usr/local/bin/kubelet --register-node=true --hostname-override=192.168.23.121 --kubeconfig=/etc/kubernetes/config/kubelet.config --cluster-dns=192.168.23.120 --cluster-domain=cluster.local >
  4. Configure kube-proxy
    (1) Create the kube-proxy configuration file:
[root@k8snode1 config]# vim kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true --hostname-override=192.168.23.121 --master=http://192.168.23.120:8080"

(2) Create the kube-proxy systemd service unit file:

[root@k8snode1 config]# vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/config/kube-proxy
ExecStart=/usr/local/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

(3) Enable and start the kube-proxy service and check its status:

systemctl daemon-reload
systemctl enable kube-proxy.service
systemctl start kube-proxy.service
systemctl status kube-proxy.service
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2021-01-26 21:01:52 UTC; 1h 14min ago
 Main PID: 22375 (kube-proxy)
    Tasks: 7 (limit: 11245)
   Memory: 12.6M
   CGroup: /system.slice/kube-proxy.service
           └─22375 /usr/local/bin/kube-proxy --logtostderr=true --hostname-override=192.168.23.121 --master=http://192.168.23.120:8080
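
kube-proxy programs iptables rules for Services; in its default iptables mode the KUBE-SERVICES chain should show up in the nat table once the service is running (the exact rules depend on which Services exist):

iptables -t nat -L KUBE-SERVICES -n | head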
  5. After deployment is complete, check the node and component status from the Master node with kubectl:
[root@k8smaster config]# kubectl get nodes,cs
NAME                  STATUS   ROLES    AGE   VERSION
node/192.168.23.121   Ready    <none>   45m   v1.18.16
node/192.168.23.122   Ready    <none>   42m   v1.18.16

NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok
componentstatus/scheduler            Healthy   ok
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-2               Healthy   {"health":"true"}
componentstatus/etcd-1               Healthy   {"health":"true"}
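
As an optional smoke test of the whole cluster (a sketch, not part of the original steps), a test Pod can be scheduled onto one of the nodes and inspected:

kubectl run nginx --image=nginx --restart=Never
kubectl get pods -o wide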

At this point we have essentially finished installing a three-node Kubernetes cluster using the Kubernetes binary packages.
