A Detailed Guide to Setting Up a K8S Cluster

😀 This article covers the two common ways to deploy a Kubernetes (K8S) cluster: with kubeadm, and from binaries. kubeadm is a simplified deployment tool, well suited to quickly standing up test environments or small production clusters. The binary approach is more flexible and fits complex deployments that need custom configuration and fine-grained control. The article walks through each step in detail to help you build a reliable, efficient Kubernetes cluster that gives your applications solid container support.


This article was first published on, and lives at, my personal Notion blog: https://www.yimeifengyuliusu.love/


0. Quick Overview

About K8S (Kubernetes)

There are two common ways to deploy a K8S cluster in production: with kubeadm, or from binaries:

  • Kubeadm: kubeadm is a tool that simplifies K8S cluster deployment. It provides kubeadm init and kubeadm join for quickly standing up a Kubernetes cluster. Official page: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
  • Binary packages: download the release binaries from GitHub, then manually deploy, configure, and wire up each component into a Kubernetes cluster. kubeadm lowers the barrier to entry but hides many details, which makes problems hard to diagnose. If you want more control, deploying from binary packages is recommended: it is more work by hand, but along the way you learn how things fit together, deepen your understanding of each K8S component and its role, and make later maintenance easier.

Comparing the two: kubeadm offers a simplified path, suitable for quickly building test environments or small production clusters, while the binary approach is more flexible and suits complex deployments that need custom configuration and deep control.

Official documentation: https://kubernetes.io/docs/home/

Kubeadm Setup Steps (at a Glance)

  1. Create three VMs and install CentOS 7.x
  2. Initialize all three VMs
  3. Install docker, kubelet, kubeadm, and kubectl on all three nodes
  4. Run kubeadm init on the master node
  5. Run kubeadm join on the worker nodes
  6. Configure the network plugin
  7. Test the K8S cluster

Binary Setup Steps (at a Glance)

  1. Create several VMs and install a Linux OS
  2. Initialize the OS
  3. Set up self-signed certificates
  4. Deploy the etcd cluster
  5. Install Docker
  6. Deploy the master components (apiserver, controller-manager, scheduler)
  7. Deploy the node components (docker, kubelet, kube-proxy)
  8. Deploy the cluster CNI network
  9. Test the K8S cluster

1. Kubeadm-Based Setup

For the kubeadm-based setup, see: A Detailed Guide to Building a K8S Cluster with kubeadm

2. Binary-Based Setup

Since my resources are limited, I first uninstall the kubeadm-based Kubernetes built above and then reinstall from binaries. For the uninstall, see: A Complete K8S Uninstall Guide

2.0 Deployment Guide

2.0.1 Prerequisites

Before starting, the machines for the Kubernetes cluster must meet the following requirements:

  • One or more machines running CentOS 7.x x86_64
  • Hardware: 2 GB RAM or more, 2 CPUs or more, 30 GB of disk or more
  • Network connectivity between all machines in the cluster
  • Internet access to pull images; if the servers cannot reach the internet, download the images in advance and import them onto the nodes
  • Swap disabled
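
These minimums can be spot-checked with a small script. This is an informational sketch only (it assumes a Linux host with /proc/meminfo) and changes nothing:

```shell
# Report CPU, RAM, and swap against the minimums above.
cores=$(nproc)
mem_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
swap_kb=$(awk '/^SwapTotal/ {print $2}' /proc/meminfo)
echo "CPU cores : ${cores}  (need >= 2)"
echo "RAM (MB)  : $((mem_kb / 1024))  (need >= 2048)"
echo "Swap (kB) : ${swap_kb}  (must be 0 after 'swapoff -a')"
```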

2.0.2 Downloading All Required Files

The official download links for all files are listed below; alternatively, everything is bundled for one-click download in the Baidu Cloud share.

Baidu Cloud:

Link: https://pan.baidu.com/s/1fymTy5m4FOJzIvboG5hSVg?pwd=if2w
Code: if2w

Self-signed certificate tools (release page: https://github.com/cloudflare/cfssl/releases/tag/1.2.0):
  • cfssl_linux-amd64: https://github.com/cloudflare/cfssl/releases/download/1.2.0/cfssl_linux-amd64
  • cfssljson_linux-amd64: https://github.com/cloudflare/cfssl/releases/download/1.2.0/cfssljson_linux-amd64
  • cfssl-certinfo_linux-amd64: https://github.com/cloudflare/cfssl/releases/download/1.2.0/cfssl-certinfo_linux-amd64

etcd (release page: https://github.com/etcd-io/etcd/releases/tag/v3.4.9):
  • etcd-v3.4.9-linux-amd64.tar.gz: https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz

Docker (browse: https://download.docker.com/linux/static/stable/x86_64/, https://mirrors.aliyun.com/docker-ce/linux/static/stable/x86_64/, or https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/static/stable/x86_64/):
  • docker-19.03.9.tgz (official): https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz
  • docker-19.03.9.tgz (Aliyun mirror): https://mirrors.aliyun.com/docker-ce/linux/static/stable/x86_64/docker-19.03.9.tgz?spm=a2c6h.25603864.0.0.7bc715acFGCUbm
  • docker-19.03.9.tgz (Tsinghua TUNA mirror): https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/static/stable/x86_64/docker-19.03.9.tgz

Kubernetes components (release notes: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#server-binaries-17):
  • kubernetes-server-linux-amd64.tar.gz: https://dl.k8s.io/v1.18.2/kubernetes-server-linux-amd64.tar.gz

CNI network plugins (release page: https://github.com/containernetworking/plugins/releases/v0.8.6):
  • cni-plugins-linux-amd64-v0.8.6.tgz: https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz

2.0.3 File Locations at a Glance (Binary Setup)

/root/TLS/
/opt/etcd/
/opt/kubernetes/
/etc/docker/
/usr/lib/systemd/system/

2.1 Node Deployment Plan

Role                 IP
master               192.168.31.102
node1                192.168.31.103
node2 (optional)     192.168.31.104

2.2 OS Initialization

Run the commands in this section on all nodes.

# Disable the firewall
systemctl stop firewalld    # immediate
systemctl disable firewalld # persistent
systemctl status firewalld  # verify

# Disable SELinux
setenforce 0  # immediate
sed -i 's/enforcing/disabled/' /etc/selinux/config  # persistent

# Disable swap
swapoff -a  # immediate
sed -ri 's/.*swap.*/#&/' /etc/fstab    # persistent

# Set hostnames per the deployment plan (run the matching command on each node)
hostnamectl set-hostname master # on the master node
hostnamectl set-hostname node01 # on node01
hostnamectl set-hostname node02 # on node02
hostname # confirm the change

# Add hosts entries (as root)
cat >> /etc/hosts << EOF
192.168.31.102 master
192.168.31.103 node01
192.168.31.104 node02
EOF

# Pass bridged IPv4 traffic to the iptables chains (as root)
cat >/etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # apply the bridge settings

# Set up time synchronization (as root)
yum install ntpdate -y
timedatectl set-timezone Asia/Shanghai
ntpdate ntp1.aliyun.com

2.3 Generating Self-Signed Certificates for etcd and the API Server

Run the commands in this section on the master node only:

If downloads are slow or GitHub refuses connections, see: Fixing GitHub connection errors

# Install CA trust certificates
yum install -y ca-certificates

# Download the cfssl tools
mkdir /root/cfssl
cd /root/cfssl
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

# Move and rename the binaries
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

# Make them executable
chmod +x /usr/local/bin/cfssl*
chmod +x /usr/bin/cfssl-certinfo

If the download really will not work, you can fetch the files directly from GitHub and copy them into the VM: https://github.com/cloudflare/cfssl/releases/tag/1.2.0

Next, create the working directories:

mkdir -p /root/TLS/{etcd,k8s}
cd /root/TLS/etcd

2.3.1 Self-Signed Certificate Authority (CA)

Step 1: Write ca-config.json:

cat > ca-config.json<< EOF
{
	"signing": {
		"default": {
			"expiry": "87600h"
		},
		"profiles": {
			"www": {
				"expiry": "87600h",
				"usages": [
					"signing",
					"key encipherment",
					"server auth",
					"client auth"
				]
			}
		}
	}
}
EOF

Step 2: Write ca-csr.json:

cat > ca-csr.json<< EOF
{
	"CN": "etcd CA",
	"key": {
		"algo": "rsa",
		"size": 2048
	},
	"names": [
		{
			"C": "CN",
			"L": "Beijing",
			"ST": "Beijing"
		}
	]
}
EOF

Step 3: Generate the CA certificate

# Generate the certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

# Verify: the two files below should exist
ls *.pem
ca-key.pem ca.pem

Example output:

[feng@master etcd]$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2023/11/20 14:56:29 [INFO] generating a new CA key and certificate from CSR
2023/11/20 14:56:29 [INFO] generate received request
2023/11/20 14:56:29 [INFO] received CSR
2023/11/20 14:56:29 [INFO] generating key: rsa-2048
2023/11/20 14:56:29 [INFO] encoded CSR
2023/11/20 14:56:29 [INFO] signed certificate with serial number 519689395787660295802125385967591316831002283398

[feng@master etcd]$ ls *.pem
ca-key.pem  ca.pem

2.3.2 Issuing the etcd HTTPS Certificate with the Self-Signed CA

Step 1: Create the certificate signing request file server-csr.json:

cat > server-csr.json<< EOF
{
	"CN": "etcd",
	"hosts": [
		"192.168.31.102",
		"192.168.31.103",
		"192.168.31.104"
	],
	"key": {
		"algo": "rsa",
		"size": 2048
	},
	"names": [
		{
			"C": "CN",
			"L": "BeiJing",
			"ST": "BeiJing"
		}
	]
}
EOF

Note: the IPs in the hosts field above must include every etcd node's cluster-internal IP, without exception! To make later scaling easier, you can also list a few spare IPs.

Step 2: Generate the certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

ls server*pem
server-key.pem server.pem

Example output:

[feng@master etcd]$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
2023/11/20 15:01:29 [INFO] generate received request
2023/11/20 15:01:29 [INFO] received CSR
2023/11/20 15:01:29 [INFO] generating key: rsa-2048
2023/11/20 15:01:30 [INFO] encoded CSR
2023/11/20 15:01:30 [INFO] signed certificate with serial number 709148494619695970036292034984063011600625863340
2023/11/20 15:01:30 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[feng@master etcd]$ ls *.pem
ca-key.pem  ca.pem  server-key.pem  server.pem

Note: the WARNING above is expected for self-signed certificates and can be safely ignored.
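
Before moving on, it can be worth confirming what was actually signed. A hedged openssl check (the path assumes this tutorial's layout; adjust CERT if yours differs):

```shell
# Print the expiry date, subject, and SAN list of the etcd server certificate.
CERT=/root/TLS/etcd/server.pem
if [ -f "$CERT" ]; then
    openssl x509 -in "$CERT" -noout -enddate -subject
    openssl x509 -in "$CERT" -noout -text | grep -A1 "Subject Alternative Name"
else
    echo "certificate $CERT not found"
fi
```

The SAN list should contain exactly the node IPs from server-csr.json; a missing IP there is a common cause of TLS failures when etcd peers connect.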

2.4 Deploying the etcd Cluster

Unless noted otherwise, run the commands in this section on the master node only.

2.4.0 etcd References

References: the etcd official documentation and the etcd download page

2.4.1 Deployment in Eight Steps

Step 1: Create the working directory, then download and unpack the binary package:

# Create the directories
mkdir -p /opt/etcd/{bin,cfg,ssl}
cd /opt/

# Download the binary package
wget https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz

# Unpack it
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz

# Move the binaries into place
mv ./etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

Step 2: Create the etcd configuration file:

cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.31.102:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.31.102:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.102:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.102:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.31.102:2380,etcd-2=https://192.168.31.103:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

Parameter notes:
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: peer (cluster) listen address
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_CLUSTER: addresses of the cluster nodes
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: join state; new for a new cluster, existing to join an existing one

Step 3: Manage etcd with systemd

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \\
--cert-file=/opt/etcd/ssl/server.pem \\
--key-file=/opt/etcd/ssl/server-key.pem \\
--peer-cert-file=/opt/etcd/ssl/server.pem \\
--peer-key-file=/opt/etcd/ssl/server-key.pem \\
--trusted-ca-file=/opt/etcd/ssl/ca.pem \\
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \\
--logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Step 4: Copy the certificates into place

cp /root/TLS/etcd/ca*pem /root/TLS/etcd/server*pem /opt/etcd/ssl

Step 5: Copy the configuration to the other nodes

scp -r /opt/etcd/ root@192.168.31.103:/opt/
scp /usr/lib/systemd/system/etcd.service root@192.168.31.103:/usr/lib/systemd/system/

Step 6: Adjust the configuration on the other nodes

In etcd.conf, the node name and IPs must be changed. On node01 it becomes the following (node02, if present, is analogous):

vi /opt/etcd/cfg/etcd.conf

#[Member]
ETCD_NAME="etcd-2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.31.103:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.31.103:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.103:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.103:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.31.102:2380,etcd-2=https://192.168.31.103:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Step 7: Start the etcd cluster

Run the following on all nodes:

systemctl daemon-reload && systemctl start etcd
systemctl status etcd

# Enable start on boot
systemctl enable etcd


Step 8: Check the cluster status

This can be run on the master node alone, since the cluster status looks the same from any member.

Many blog posts give slightly different versions of this status check. The commands below are the ones that run correctly against my etcd version, 3.4.9.

# Command one (table output)
/opt/etcd/bin/etcdctl --write-out=table --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints=https://192.168.31.102:2379,https://192.168.31.103:2379 endpoint health
# Example:
[root@master opt]# /opt/etcd/bin/etcdctl --write-out=table --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints=https://192.168.31.102:2379,https://192.168.31.103:2379 endpoint health
+-----------------------------+--------+-------------+-------+
|          ENDPOINT           | HEALTH |    TOOK     | ERROR |
+-----------------------------+--------+-------------+-------+
| https://192.168.31.102:2379 |   true | 16.521258ms |       |
| https://192.168.31.103:2379 |   true | 19.867578ms |       |
+-----------------------------+--------+-------------+-------+

# Command two (plain output)
/opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.31.102:2379,https://192.168.31.103:2379" endpoint health
# Example:
[root@master etcd]# /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.31.102:2379,https://192.168.31.103:2379" endpoint health
https://192.168.31.102:2379 is healthy: successfully committed proposal: took = 15.879209ms
https://192.168.31.103:2379 is healthy: successfully committed proposal: took = 16.898771ms

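If etcdctl rejects these flags, the client is likely speaking a different API major version (the flag names changed between etcdctl v2 and v3). A guarded version check, assuming the install path used above:

```shell
# Print the etcdctl version, or a notice if it is not where this tutorial put it.
ETCDCTL=/opt/etcd/bin/etcdctl
if [ -x "$ETCDCTL" ]; then
    "$ETCDCTL" version
else
    echo "etcdctl not found at $ETCDCTL"
fi
```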

2.4.2 A Baffling Non-Error

Here I first used the following command, common in online write-ups, to check the cluster status, and got a "cluster may be unhealthy" message, even though systemctl status etcd showed etcd as running. At first I assumed the etcd cluster really was unhealthy and misdeployed. The logs showed the errors below, and since I had previously run Hadoop on these VMs, I suspected leftover network or other configuration was interfering with etcd, so I reinstalled the VMs. After redeploying etcd I hit exactly the same errors, with nothing else changed. Only after finding the correct commands (as in Step 8 above) did I realize the deployment had never failed at all; the status-check command itself was simply wrong.

[root@master opt]# ETCDCTL_API=2 /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.31.100:2379,https://192.168.31.101:2379" cluster-health
cluster may be unhealthy: failed to list members
Error:  unexpected status code 404

Inspecting the logs with journalctl -u etcd:

  • Error 1: etcdserver/server.go:2065","msg":"failed to publish local

  • Error 2: Nov 21 08:47:42 master etcd: {"level":"warn","ts":"2023-11-21T08:47:42.930+0800","caller":"etcdserver/raft.go:390","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"5dbe154de8c43608","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"51.385035ms"}

  • Error 3: Received response from host 192.168.31.1 with invalid source port 56243 on interface 'ens32.0' Nov 17 13:05:12 master avahi-daemon[725]: Invalid legacy unicast query packet.

  • Error 4: master kernel: e1000: ens32 NIC Link is Down master dnsmasq[1431]: no servers found in /etc/resolv.conf, will retry

  • Error 5: device (ens32): ipv6: duplicate address check failed for the fe80::91af:2441:7755:f884/64 lft forever pref forever lifetime master systemd: Unit iscsi.service cannot be reloaded because it is inactive.

2.5 Deploying the Master Components

2.5.1 Generating the kube-apiserver Certificate

Run everything in this section on the master node.

Generate the kube-apiserver certificate, used when deploying the master node.

Step 1: Self-signed certificate authority (CA)

# Change directory
cd /root/TLS/k8s

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json<< EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

Step 2: Generate the CA certificate

# Generate the certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

# Verify the generated files
ls *pem

Step 3: Issue the kube-apiserver HTTPS certificate with the self-signed CA

Create the kube-apiserver certificate signing request file:

cat > server-csr.json << EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.31.102",
      "192.168.31.103",
      "192.168.31.104",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

Step 4: Generate the kube-apiserver certificate

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

# Verify
ls server*pem

2.5.2 Deploying kube-apiserver

Run all commands in this section on the master node.

Step 1: Download the binaries

Server binaries download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#server-binaries-17


Direct download link:

https://dl.k8s.io/v1.18.3/kubernetes-server-linux-amd64.tar.gz

cd ~
# Download and unpack (or download on a Windows host first, then upload to the VM)
wget https://dl.k8s.io/v1.18.3/kubernetes-server-linux-amd64.tar.gz
tar zxvf kubernetes-server-linux-amd64.tar.gz

# Create the kubernetes directories
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
cd kubernetes/server/bin

# Copy the binaries into place
cp /root/kubernetes/server/bin/{kube-apiserver,kube-scheduler,kube-controller-manager} /opt/kubernetes/bin
cp /root/kubernetes/server/bin/kubectl /usr/bin/

Step 2: Create the configuration file

cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://192.168.31.102:2379,https://192.168.31.103:2379 \\
--bind-address=192.168.31.102 \\
--secure-port=6443 \\
--advertise-address=192.168.31.102 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem  \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF

Flag notes:
– logtostderr: log to files instead of stderr (false here)
– v: log verbosity
– log-dir: log directory
– etcd-servers: etcd cluster addresses
– bind-address: listen address
– secure-port: HTTPS port
– advertise-address: address advertised to the cluster
– allow-privileged: allow privileged containers
– service-cluster-ip-range: Service virtual IP range
– enable-admission-plugins: admission control plugins
– authorization-mode: authorization modes; enables RBAC and Node self-management
– enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
– token-auth-file: bootstrap token file
– service-node-port-range: default port range for NodePort Services
– kubelet-client-xxx: client certificate for apiserver-to-kubelet access
– tls-xxx-file: apiserver HTTPS certificate
– etcd-xxxfile: certificates for connecting to the etcd cluster
– audit-log-xxx: audit logging

Step 3: Copy the certificates

cp /root/TLS/k8s/ca*pem /root/TLS/k8s/server*pem /opt/kubernetes/ssl/

Step 4: Enable the TLS bootstrapping mechanism

cat > /opt/kubernetes/cfg/token.csv << EOF
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
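
The token above is only an example value; any 32-character hex string works. A hedged way to mint your own (the printed line has the same format as token.csv):

```shell
# Generate a random 16-byte hex token and print the token.csv line it would produce.
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' \n')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\""
```

If you change the token, use the same value later when generating bootstrap.kubeconfig.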

Step 5: Manage kube-apiserver with systemd

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

Step 6: Start kube-apiserver and enable it on boot

systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver

systemctl status kube-apiserver

Step 7: Authorize the kubelet-bootstrap user to request certificates

kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

2.5.3 Deploying kube-controller-manager

Run everything in this section on the master node.

Step 1: Create the configuration file

cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF

Flag notes:

– master: connect to the apiserver over the local insecure port 8080.
– leader-elect: automatic leader election when multiple instances run (HA)
– cluster-signing-cert-file / cluster-signing-key-file: the CA that auto-issues kubelet certificates; must match the apiserver's

Step 2: Manage kube-controller-manager with systemd

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

Step 3: Start it and enable it on boot

systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager

systemctl status kube-controller-manager

2.5.4 Deploying kube-scheduler

Run everything in this section on the master node.

Step 1: Create the configuration file

cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1"
EOF

Flag notes:

– master: connect to the apiserver over the local insecure port 8080.
– leader-elect: automatic leader election when multiple instances run (HA)

Step 2: Manage kube-scheduler with systemd

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

Step 3: Start it and enable it on boot

systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler

systemctl status kube-scheduler

Step 4: Check the cluster status

All the master components are now running; use kubectl to check cluster health:

kubectl get cs

# Example output
[root@master bin]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}

Output like the above means the master node components are running normally.

2.6 Deploying the Node Components

Unless noted otherwise, the commands in section 2.6 run on the master node (which serves as both master and worker node):

2.6.1 Installing Docker

Run the commands in this subsection on all nodes.

Download docker-19.03.9.tgz (links in the table in 2.0.2). Here I fetch it with wget; you can also download it on a Windows host and upload it to /root/.

Download pages:

the Docker official site, the Aliyun mirror, and the Tsinghua TUNA mirror

Step 1: Download and unpack Docker:

cd /root/
# Download (using the Aliyun mirror here)
wget https://mirrors.aliyun.com/docker-ce/linux/static/stable/x86_64/docker-19.03.9.tgz?spm=a2c6h.25603864.0.0.7bc715acFGCUbm -O /root/docker-19.03.9.tgz

# Unpack
tar zxvf docker-19.03.9.tgz

# Move the binaries
mv /root/docker/* /usr/bin/

Step 2: Manage Docker with systemd

cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF

Step 3: Create the daemon config file (Aliyun registry mirror)

mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF

Step 4: Start Docker and enable it on boot

systemctl daemon-reload
systemctl start docker
systemctl enable docker

systemctl status docker

Then copy the kubelet and kube-proxy binaries into place.

Note: unless stated otherwise, run the following on the master node:

# ===================on the master node===================
# Copy kubelet and kube-proxy into /opt/kubernetes/bin
cp /root/kubernetes/server/bin/{kubelet,kube-proxy} /opt/kubernetes/bin

# ===================on node01===================
# Create the working directories (the master node already did this in 2.5.2)
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
# Copy kubelet and kube-proxy into node01's /opt/kubernetes/bin
scp root@master:/root/kubernetes/server/bin/{kubelet,kube-proxy} /opt/kubernetes/bin

2.6.2 Installing kubelet

Step 1: Create the configuration file

cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-master1 \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF

Flag notes:

– hostname-override: display name, unique within the cluster
– network-plugin: enable CNI
– kubeconfig: an empty path; generated automatically and later used to talk to the apiserver
– bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
– config: configuration parameters file
– cert-dir: directory for generated kubelet certificates
– pod-infra-container-image: image for the container that manages each Pod's network

Step 2: Create the configuration parameters file

cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF

Step 3: Generate the bootstrap.kubeconfig file

cd /root/TLS/k8s/

# apiserver address (an internal load-balancer address, if you have one)
KUBE_APISERVER="https://192.168.31.102:6443" # here, the master node's IP
TOKEN=c47ffb939f5ca36231d9e3121a252940 # must match the token.csv created in 2.5.2

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Switch to the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# Copy into the config directory
cp /root/TLS/k8s/bootstrap.kubeconfig /opt/kubernetes/cfg


Step 4: Manage kubelet with systemd

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Step 5: Start kubelet and enable it on boot

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet

systemctl status kubelet

2.6.3 Approving the kubelet Certificate Request and Joining the Cluster

Important: every node joining the cluster requires approval on the master node.

# List certificate requests (copy the name from the NAME column)
[root@master k8s]# kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-WYwCxIYGgF1-i9YoFipWNzKG7FCi3GGxyYk1Y0O4yHk   2m15s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

# Approve the request (using the name just copied)
[root@master k8s]# kubectl certificate approve node-csr-WYwCxIYGgF1-i9YoFipWNzKG7FCi3GGxyYk1Y0O4yHk
certificatesigningrequest.certificates.k8s.io/node-csr-WYwCxIYGgF1-i9YoFipWNzKG7FCi3GGxyYk1Y0O4yHk approved

# Check node status
[root@master k8s]# kubectl get node
NAME          STATUS     ROLES    AGE   VERSION
k8s-master1   NotReady   <none>   8s    v1.18.3

Note: the node shows NotReady here because the CNI network plugin has not been deployed yet.

2.6.4 Installing kube-proxy

Run everything in this subsection on the master node.

Step 1: Create the kube-proxy configuration file

cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF

Step 2: Create the configuration parameters file

cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master1
clusterCIDR: 10.0.0.0/24
EOF

Step 3: Generate the kube-proxy.kubeconfig file

  • Create the certificate signing request file
cat > /root/TLS/k8s/kube-proxy-csr.json<< EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

  • Generate the certificate
# Change directory
cd /root/TLS/k8s/

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

# Verify
ls kube-proxy*pem

  • Generate the kube-proxy.kubeconfig file
cd /root/TLS/k8s/

# apiserver address (an internal load-balancer address, if you have one)
KUBE_APISERVER="https://192.168.31.102:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

# Set the default context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

# Switch to the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

# Copy into the config directory
cp /root/TLS/k8s/kube-proxy.kubeconfig /opt/kubernetes/cfg/


Step 4: Manage kube-proxy with systemd

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

Step 5: Start kube-proxy and enable it on boot

systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy

systemctl status kube-proxy

2.7 Deploying the Cluster CNI Network

First, download the CNI plugin binaries:

# Download
wget https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz

# Unpack into place
mkdir /opt/cni/bin -p
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin

Then deploy the CNI network by installing the flannel plugin:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# The default image registry is unreachable from here, so switch to a Docker Hub mirror image
sed -i -r "s#quay.io/coreos/flannel:.*-amd64#lizhenliang/flannel:v0.12.0-amd64#g" kube-flannel.yml

kubectl apply -f kube-flannel.yml
kubectl get pods -n kube-system
kubectl get node

An odd error appeared at this point, but in practice it had no impact: the subsequent node startup, node status, and K8S cluster tests were all completely normal.

2.8 Adding Worker Nodes

Unless noted, run the commands in this section on the new worker node, node01.

To add a new worker node, follow these steps:

Step 1: On the master node, copy the worker-related files to the new node (run the following on the master):

# node01
scp -r /opt/kubernetes root@192.168.31.103:/opt/
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.31.103:/usr/lib/systemd/system
scp -r /opt/cni/ root@192.168.31.103:/opt/

# node02
scp -r /opt/kubernetes root@192.168.31.104:/opt/
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.31.104:/usr/lib/systemd/system
scp -r /opt/cni/ root@192.168.31.104:/opt/

Step 2: Delete the copied kubelet certificate and kubeconfig files

rm /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*

# Example
[root@node01 ~]# ls /opt/kubernetes/cfg/kubelet*
/opt/kubernetes/cfg/kubelet.conf        /opt/kubernetes/cfg/kubelet.kubeconfig
/opt/kubernetes/cfg/kubelet-config.yml
[root@node01 ~]# rm /opt/kubernetes/cfg/kubelet.kubeconfig
rm: remove regular file ‘/opt/kubernetes/cfg/kubelet.kubeconfig’? y

[root@node01 ~]# ls /opt/kubernetes/ssl/kubelet*
/opt/kubernetes/ssl/kubelet-client-2023-11-24-08-54-08.pem
/opt/kubernetes/ssl/kubelet-client-current.pem
/opt/kubernetes/ssl/kubelet.crt
/opt/kubernetes/ssl/kubelet.key
[root@node01 ~]# rm -f /opt/kubernetes/ssl/kubelet*

Note: these files were generated automatically when the certificate request was approved; they differ per node and must be deleted and regenerated.

Step 3: Change the hostname in the config files

vi /opt/kubernetes/cfg/kubelet.conf
# change this line to: --hostname-override=k8s-node1

vi /opt/kubernetes/cfg/kube-proxy-config.yml
# change this line to: hostnameOverride: k8s-node1
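
The same edit can be scripted rather than done in vi. A hedged sketch with sed (it assumes the copied files still carry the master's name, k8s-master1):

```shell
# set_node_name NEWNAME FILE: replace the copied k8s-master1 hostname with NEWNAME.
set_node_name() {
    sed -i "s/k8s-master1/$1/" "$2"
}

# On the new worker (paths from this tutorial):
# set_node_name k8s-node1 /opt/kubernetes/cfg/kubelet.conf
# set_node_name k8s-node1 /opt/kubernetes/cfg/kube-proxy-config.yml
```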

Step 4: Start the services and enable them on boot

systemctl daemon-reload

systemctl start kubelet
systemctl enable kubelet
systemctl status kubelet

systemctl start kube-proxy
systemctl enable kube-proxy
systemctl status kube-proxy

Step 5: On the master node, approve the new node's kubelet certificate request

kubectl get csr
kubectl certificate approve <csr-name-from-the-output>

kubectl get node

Warning: if the node still shows NotReady after the commands above, don't panic; the new node needs time to pull images, so wait a while and check the node status again.

2.9 Testing the K8S Cluster

Create a Pod in the Kubernetes cluster to verify everything works:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods,svc

The nginx service is reachable at http://NodeIP:Port (NodeIP can be the master's IP or node01's).

At this point the binary Kubernetes deployment is complete and fully working. Done!
