Let's Do This! K8s Single-Master Deployment!
Contents
- 1. Master deployment
- 1.1 Set hostnames and disable swap (all three machines)
- 1.2 Disable the firewall and SELinux (all three machines)
- 1.3 Create a working directory, upload the etcd scripts, and download the official cfssl certificate tools
- 1.4 Create the CA certificate
- 1.5 Issue the certificate used for communication between the three etcd nodes
- 1.6 Download and extract the etcd binary package
- 1.7 Create directories for binaries, config files, and certificates, and move the files into place
- 1.8 Run the script on the master, declaring the local node name and address; it will block and wait up to 2 minutes for the other nodes to join
- 1.9 Copy the certificates and startup script to the two worker nodes
- 1.10 On node1 and node2, edit the etcd config file and change the node name and IP addresses
- 1.11 Start the cluster script on the master first, then start etcd on the two nodes
- 1.12 Check the cluster status (mind the relative paths)
- 2. Deploy Docker (node1, node2)
- 3. Flannel container network deployment
- 3.1 On the master, write the allocated subnet into etcd for flannel to use
- 3.2 Deploy flannel on the two nodes
- 3.3 Create the k8s working directory on both nodes and move the two flannel binaries into it (only node1 shown)
- 3.4 Edit flannel.sh on both nodes: it creates the config file and the systemd unit; 2379 is the client port the etcd nodes expose
- 3.5 Run the script to enable the flannel network (node1, node2)
- 3.6 Configure Docker to use the flannel network (only node1 shown)
- 3.7 Check the IP range flannel assigned to Docker
- 3.8 Restart the Docker service and check the flannel network again
- 3.9 Create containers to test connectivity between the two nodes
- 4. Deploy the master components
- 5. Node deployment
- 5.1 Copy kubelet and kube-proxy from the master to the nodes
- 5.2 Extract node.zip on both nodes (only node1 shown)
- 5.3 Create the kubeconfig directory on the master
- 5.4 Generate the config files and copy them to the nodes
- 5.5 Create the bootstrap role and grant it permission to request certificate signing from the apiserver
- 5.6 On the nodes, generate the kubelet and kubelet.config config files (only node1 shown)
- 5.7 On the master, check the requests from node1 and node2 and view the certificate status
- 5.8 Approve the certificates and check the status again
- 5.9 Check the cluster status and start the proxy service
- 5.10 Start the kube-proxy service on both nodes (only node1 shown)
[Lab environment]
master 20.0.0.15 kube-apiserver,kube-controller,kube-scheduler,etcd
node1 20.0.0.16 kubelet,kube-proxy,docker,flannel,etcd
node2 20.0.0.17 kubelet,kube-proxy,docker,flannel,etcd
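Optionally, add host entries on all three machines so they can also reach each other by name; a minimal sketch, with the names and IPs simply mirroring the table above:
cat >> /etc/hosts <<EOF
20.0.0.15 master
20.0.0.16 node1
20.0.0.17 node2
EOF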
1. Master deployment
1.1 Set hostnames and disable swap (all three machines)
hostnamectl set-hostname master
su
hostnamectl set-hostname node1
su
hostnamectl set-hostname node2
su
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
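To confirm swap is really off (a quick sanity check, not part of the original steps):
free -m
//the Swap row should show 0 total
swapon --show
//should print nothing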
1.2 Disable the firewall and SELinux (all three machines)
systemctl stop firewalld && systemctl disable firewalld
setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
1.3 Create a working directory, upload the etcd scripts, and download the official cfssl certificate tools
mkdir -p k8s/etcd-cert
cd k8s
'//upload the etcd script (etcd.sh); its content is shown below'
#!/bin/bash
# example: ./etcd.sh etcd01 192.168.1.10 etcd02=https://192.168.1.11:2380,etcd03=https://192.168.1.12:2380
ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3
WORK_DIR=/opt/etcd
cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
ls
etcd-cert etcd-cert.sh etcd.sh
mv etcd-cert.sh etcd-cert
//move it into the etcd-cert directory
//upload cfssl, cfssljson and cfssl-certinfo to /usr/local/bin
Or download them from the official site with the script below:
vim cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
bash cfssl.sh
//run the script to download the tools
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
ls /usr/local/bin/
cfssl cfssl-certinfo cfssljson
//cfssl: certificate generation tool; cfssljson: turns cfssl's JSON output into certificate files; cfssl-certinfo: displays certificate information
1.4 Create the CA certificate
cd etcd-cert/
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"www": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
'//define the CA certificate config file'
cat > ca-csr.json <<EOF
{
"CN": "etcd CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}
]
}
EOF
'//generate the CA certificate: ca-key.pem and ca.pem'
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem etcd-cert.sh
1.5 Issue the certificate used for communication between the three etcd nodes
'//create the server-side signing request'
cat > server-csr.json <<EOF
{
"CN": "etcd",
"hosts": [
"20.0.0.15",
"20.0.0.16",
"20.0.0.17"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing"
}
]
}
EOF
ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  etcd-cert.sh  server-csr.json
'//use the signing request and the CA to generate the etcd server certificate: server-key.pem and server.pem'
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
ls
ca-config.json ca-csr.json ca.pem server.csr server-key.pem
ca.csr ca-key.pem etcd-cert.sh server-csr.json server.pem
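If you want to double-check that all three etcd IPs ended up in the server certificate's SAN list, cfssl-certinfo (installed earlier) can display it; an optional check run from the etcd-cert directory:
cfssl-certinfo -cert server.pem | grep -A 5 sans
//the sans section should list 20.0.0.15, 20.0.0.16 and 20.0.0.17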
1.6 Download and extract the etcd binary package
cd ..
//upload the packages directly; also upload the flannel and kubernetes-server packages at the same time
ls
cfssl.sh etcd.sh flannel-v0.10.0-linux-amd64.tar.gz
etcd-cert etcd-v3.3.10-linux-amd64.tar.gz kubernetes-server-linux-amd64.tar.gz
[root@master k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
//extract the package
ls etcd-v3.3.10-linux-amd64
Documentation etcd etcdctl README-etcdctl.md README.md READMEv2-etcdctl.md
1.7 Create directories for binaries, config files, and certificates, and move the files into place
mkdir -p /opt/etcd/{cfg,bin,ssl}
ls /opt/etcd/
bin cfg ssl
mv etcd-v3.3.10-linux-amd64/etcd* /opt/etcd/bin
//move the binaries into the bin directory just created
ls /opt/etcd/bin/
etcd etcdctl
cp etcd-cert/*.pem /opt/etcd/ssl
//copy the certificate files into the ssl directory just created
[root@master k8s]# ls /opt/etcd/ssl
ca-key.pem  ca.pem  server-key.pem  server.pem
//(when the etcd script is run in step 1.8 it will appear to hang, waiting for the other nodes to join)
vim etcd.sh
//review the configuration the script generates
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
//port 2380 is etcd's internal (peer) communication port
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"
//port 2379 is the client port each etcd member exposes
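Once etcd has been started (step 1.11), you can confirm both ports are listening; a quick optional check for reference:
ss -ntlp | grep etcd
//2380 (peer) and 2379 (client) should both show up as LISTEN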
1.8 Run the script on the master, declaring the local node name and address; it will block and wait up to 2 minutes for the other nodes to join
ls /opt/etcd/cfg/
//at this point the directory is still empty
bash etcd.sh etcd01 20.0.0.15 etcd02=https://20.0.0.16:2380,etcd03=https://20.0.0.17:2380
//run the command; it blocks while waiting for the other members
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
ps aux |grep etcd
ls /opt/etcd/cfg/
//open a new terminal; the config file has now been generated
etcd
1.9 Copy the certificates and startup script to the two worker nodes
scp -r /opt/etcd/ root@20.0.0.16:/opt
scp -r /opt/etcd/ root@20.0.0.17:/opt
scp /usr/lib/systemd/system/etcd.service root@20.0.0.16:/usr/lib/systemd/system
scp /usr/lib/systemd/system/etcd.service root@20.0.0.17:/usr/lib/systemd/system
1.10 On node1 and node2, edit the etcd config file and change the node name and IP addresses
vim /opt/etcd/cfg/etcd
//modify both nodes the same way; only node1's changes are shown here
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://20.0.0.16:2380"
ETCD_LISTEN_CLIENT_URLS="https://20.0.0.16:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://20.0.0.16:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://20.0.0.16:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://20.0.0.15:2380,etcd02=https://20.0.0.16:2380,etcd03=https://20.0.0.17:2380"
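If you prefer not to edit by hand, the same change can be scripted with sed; a rough sketch for node2 (etcd03 / 20.0.0.17) that only touches the node name and the four listen/advertise lines, leaving ETCD_INITIAL_CLUSTER as-is:
sed -i -e 's/^ETCD_NAME=.*/ETCD_NAME="etcd03"/' \
-e '/^ETCD_LISTEN\|^ETCD_ADVERTISE\|^ETCD_INITIAL_ADVERTISE/ s/20.0.0.15/20.0.0.17/' \
/opt/etcd/cfg/etcd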
1.11 Start the cluster script on the master first, then start etcd on the two nodes
[root@master k8s]# bash etcd.sh etcd01 20.0.0.15 etcd02=https://20.0.0.16:2380,etcd03=https://20.0.0.17:2380
//start the cluster script on the master
[root@node1 ~]# systemctl start etcd && systemctl enable etcd
//then start etcd on the two nodes
[root@node1 ~]# systemctl status etcd
[root@node2 ~]# systemctl start etcd && systemctl enable etcd
[root@node2 ~]# systemctl status etcd
1.12 Check the cluster status (mind the relative paths)
[root@master k8s]# cd /opt/etcd/ssl/
[root@master ssl]# ls
ca-key.pem ca.pem server-key.pem server.pem
[root@master ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://20.0.0.15:2379,https://20.0.0.16:2379,https://20.0.0.17:2379" cluster-health
member f41920a708162c0 is healthy: got healthy result from https://20.0.0.15:2379
member cebd8536c4381e64 is healthy: got healthy result from https://20.0.0.16:2379
member d0fe1497f5011688 is healthy: got healthy result from https://20.0.0.17:2379
cluster is healthy
//the cluster is healthy
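You can also list the members to see which one is currently the leader, using the same certificate flags (run from /opt/etcd/ssl):
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://20.0.0.15:2379,https://20.0.0.16:2379,https://20.0.0.17:2379" member list
//each member line ends with isLeader=true or isLeader=false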
2. Deploy Docker (node1, node2)
2.1 Disable the firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld
setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
2.2 Install dependencies
yum -y install yum-utils device-mapper-persistent-data lvm2
//yum-utils provides the yum-config-manager utility
//device-mapper: the devicemapper storage driver requires device-mapper-persistent-data and lvm2
//device-mapper is the generic device-mapping mechanism in the Linux 2.6 kernel that supports logical volume management; it provides a highly modular kernel framework for the block-device drivers used in storage management
2.3 Configure the Aliyun package mirror
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
2.4 Install docker-ce
yum -y install docker-ce
systemctl start docker
systemctl enable docker
2.5 Configure a registry mirror (image acceleration)
How to find a registry mirror address:
Log in to the Aliyun website, open the console with your own account, search for Container Registry in the navigation bar and enable it, then select Image Accelerator; the snippet shown there is the address you need.
tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://2lb8t07e.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker
2.6 Network tuning
echo "net.ipv4.ip_forward=1" >>/etc/sysctl.conf
sysctl -p
systemctl restart network
systemctl restart docker
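A quick way to confirm the tuning took effect and the registry mirror is active (optional checks):
sysctl net.ipv4.ip_forward
//should print net.ipv4.ip_forward = 1
docker info | grep -A 1 "Registry Mirrors"
//should show the accelerator address configured above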
3. Flannel container network deployment
3.1 On the master, write the allocated subnet into etcd for flannel to use
[root@master ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://20.0.0.15:2379,https://20.0.0.16:2379,https://20.0.0.17:2379" set /coreos.com/network/config '{"Network" :"172.17.0.0/16", "Backend": {"Type":"vxlan"}}'
{"Network" :"172.17.0.0/16", "Backend": {"Type":"vxlan"}}
//write the allocated network range
[root@master ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://20.0.0.15:2379,https://20.0.0.16:2379,https://20.0.0.17:2379" get /coreos.com/network/config
//check the range that was written
{"Network" :"172.17.0.0/16", "Backend": {"Type":"vxlan"}}
3.2 Deploy flannel on the two nodes
[root@master ssl]# cd /root/k8s/
[root@master k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@20.0.0.16:/root
[root@master k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@20.0.0.17:/root
[root@node1 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
//node2 must extract it as well; not repeated here
flanneld
mk-docker-opts.sh
README.md
//any node that will run pods needs the flannel network
3.3 Create the k8s working directory on both nodes and move the two flannel binaries into it (only node1 shown)
[root@node1 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
//create the config, binary, and certificate directories
[root@node1 ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
//move the flannel binaries into place
[root@node1 ~]# ls /opt/kubernetes/bin/
flanneld mk-docker-opts.sh
3.4 Edit flannel.sh on both nodes: it creates the config file and the systemd unit; 2379 is the client port the etcd nodes expose
[root@node1 ~]# vim flannel.sh
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem \"
EOF
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
3.5 Run the script to enable the flannel network (node1, node2)
[root@node1 ~]# bash flannel.sh https://20.0.0.15:2379,https://20.0.0.16:2379,https://20.0.0.17:2379
//run this on both nodes
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@node1 ~]# systemctl status flanneld
//check that the flanneld service started correctly
[root@node1 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.58.0 netmask 255.255.255.255 broadcast 0.0.0.0
[root@node2 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.6.0 netmask 255.255.255.255 broadcast 0.0.0.0
//Docker is not yet connected to flannel: the docker0 address has not changed and is not in the flannel subnet; next, configure Docker to use the flannel network
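From the master you can also confirm that flannel registered a subnet lease for each node in etcd, using the same etcdctl flags as in 3.1 (an optional check, run from /opt/etcd/ssl):
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://20.0.0.15:2379,https://20.0.0.16:2379,https://20.0.0.17:2379" ls /coreos.com/network/subnets
//should list one /24 lease per node, e.g. /coreos.com/network/subnets/172.17.58.0-24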
3.6 Configure Docker to use the flannel network (only node1 shown)
[root@node1 ~]# vim /usr/lib/systemd/system/docker.service
.............(omitted)..................
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
.................(omitted).................
3.7 Check the IP range flannel assigned to Docker
[root@node1 ~]# cat /run/flannel/subnet.env
//the subnet assigned to node1
DOCKER_OPT_BIP="--bip=172.17.58.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.58.1/24 --ip-masq=false --mtu=1450"
//MTU: maximum transmission unit, the largest amount of data a single packet can carry
//bip sets the subnet the Docker bridge uses at startup
[root@node2 ~]# cat /run/flannel/subnet.env
//the subnet assigned to node2
DOCKER_OPT_BIP="--bip=172.17.6.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.6.1/24 --ip-masq=false --mtu=1450"
3.8 Restart the Docker service and check the flannel network again
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl restart docker
[root@node1 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.58.1 netmask 255.255.255.0 broadcast 172.17.58.255
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.58.0 netmask 255.255.255.255 broadcast 0.0.0.0
[root@node02 ~]# systemctl daemon-reload
[root@node02 ~]# systemctl restart docker
[root@node02 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.6.1 netmask 255.255.255.0 broadcast 172.17.6.255
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.6.0 netmask 255.255.255.255 broadcast 0.0.0.0
//each node's docker0 should now sit in its own flannel subnet
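Before testing with containers, you can also check that each node has learned a route to the other node's flannel subnet; on node1, for example:
ip route | grep flannel.1
//a route such as 172.17.6.0/24 via 172.17.6.0 dev flannel.1 should be present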
3.9 Create containers to test connectivity between the two nodes
[root@node1 ~]# docker run -it centos:7 /bin/bash
[root@8ffe415fb35e /]# yum -y install net-tools
[root@8ffe415fb35e /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.58.2 netmask 255.255.255.0 broadcast 172.17.58.255
[root@node2 ~]# docker run -it centos:7 /bin/bash
[root@ce1a76b1a9cb /]# yum -y install net-tools
[root@ce1a76b1a9cb /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.6.2 netmask 255.255.255.0 broadcast 172.17.6.255
//as shown, the container on node1 has IP 172.17.58.2 and the container on node2 has IP 172.17.6.2
[root@8ffe415fb35e /]# ping 172.17.6.2
//the container on node1 can ping the container on node2
[root@ce1a76b1a9cb /]# ping 172.17.58.2
//the container on node2 can ping the container on node1
//this proves the flannel network is deployed successfully
4. Deploy the master components
4.1 On the master, generate the apiserver certificates
[root@master k8s]# mkdir -p /opt/kubernetes/{cfg,bin,ssl}
//create the k8s working directory
[root@master k8s]# unzip master.zip
//extract master.zip
[root@master k8s]# mkdir k8s-cert
//create the k8s certificate directory
[root@master k8s]# cd k8s-cert/
[root@master k8s-cert]# vim k8s-cert.sh
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
cat > ca-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cat > server-csr.json <<EOF
{
"CN": "kubernetes",
"hosts": [
"10.0.0.1",
"127.0.0.1",
"20.0.0.15", '//master1,配置文件中要删除此类注释'
"20.0.0.18", '//master2'
"20.0.0.200", '//VIP'
"20.0.0.19", '//nginx代理master'
"20.0.0.20", '//nginx代理backup'
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
cat > admin-csr.json <<EOF
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
'//Why are the node IP addresses not listed? If they were, adding or removing nodes later would be very troublesome'
4.2 Generate the certificates
[root@master k8s-cert]# bash k8s-cert.sh
//generate the certificates
[root@master k8s-cert]# ls *pem
admin-key.pem ca-key.pem kube-proxy-key.pem server-key.pem
admin.pem ca.pem kube-proxy.pem server.pem
[root@master k8s-cert]# cp ca*.pem server*.pem /opt/kubernetes/ssl/
//copy the certificates into the working directory
[root@master k8s-cert]# ls /opt/kubernetes/ssl/
ca-key.pem ca.pem server-key.pem server.pem
4.3 Extract the kubernetes server package
[root@master k8s-cert]# cd ..
//extract the kubernetes package
[root@master k8s]# ls
apiserver.sh etcd-v3.3.10-linux-amd64 master.zip
controller-manager.sh flannel-v0.10.0-linux-amd64.tar.gz scheduler.sh
etcd-cert k8s-cert
etcd.sh kubernetes-server-linux-amd64.tar.gz
[root@master k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz
4.4 Copy the key server binaries into the k8s working directory
[root@master k8s]# cd /root/k8s/kubernetes/server/bin/
[root@master bin]# cp kube-controller-manager kube-scheduler kubectl kube-apiserver /opt/kubernetes/bin/
[root@master bin]# ls /opt/kubernetes/bin/
kube-apiserver kube-controller-manager kubectl kube-scheduler
4.5 Create the token and bind it to the kubelet-bootstrap role
[root@master bin]# cd /root/k8s/
[root@master k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
//generate a random token (serial number)
2aa752946a90774ac5efdd1699f99d77
[root@master k8s]# cd /opt/kubernetes/cfg/
[root@master cfg]# vi token.csv
2aa752946a90774ac5efdd1699f99d77,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
//token, user name, UID, group; the master uses this user to bootstrap and manage the node kubelets
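The two steps above can also be combined into one small snippet that generates the token and writes token.csv in one go; a sketch (the token value will of course differ on every run):
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /opt/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF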
4.6 Start the apiserver, storing its data in the etcd cluster, and check its status
[root@master cfg]# cd /root/k8s/
[root@master k8s]# bash apiserver.sh 20.0.0.15 https://20.0.0.15:2379,https://20.0.0.16:2379,https://20.0.0.17:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@master k8s]# ps aux | grep kube
//verify the process started successfully
[root@master k8s]# cat /opt/kubernetes/cfg/kube-apiserver
//view the generated config file
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://20.0.0.15:2379,https://20.0.0.16:2379,https://20.0.0.17:2379 \
--bind-address=20.0.0.15 \
--secure-port=6443 \
--advertise-address=20.0.0.15 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
[root@master k8s]# netstat -ntap |grep 6443
tcp 0 0 20.0.0.15:6443 0.0.0.0:* LISTEN 2967/kube-apiserver
tcp 0 0 20.0.0.15:6443 20.0.0.15:51782 ESTABLISHED 2967/kube-apiserver
tcp 0 0 20.0.0.15:51782 20.0.0.15:6443 ESTABLISHED 2967/kube-apiserver
[root@master k8s]# netstat -ntap | grep 8080
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 2967/kube-apiserver
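Since this apiserver version still serves an insecure local port on 8080, a simple health probe works without any certificates; an optional check:
curl http://127.0.0.1:8080/healthz
//should return ok
curl http://127.0.0.1:8080/version
//prints the apiserver version as JSON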
4.7 Start the scheduler service
[root@master k8s]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master k8s]# ps aux | grep kubernetes
4.8 Start the controller-manager
[root@master k8s]# chmod +x controller-manager.sh
[root@master k8s]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
4.9 Check the master component status
[root@master k8s]# /opt/kubernetes/bin/kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
//everything is healthy
5. Node deployment
5.1 Copy kubelet and kube-proxy from the master to the nodes
[root@master k8s]# cd /root/k8s/kubernetes/server/bin/
[root@master bin]# ls
apiextensions-apiserver kube-controller-manager.tar
cloud-controller-manager kubectl
cloud-controller-manager.docker_tag kubelet
cloud-controller-manager.tar kube-proxy
hyperkube kube-proxy.docker_tag
kubeadm kube-proxy.tar
kube-apiserver kube-scheduler
kube-apiserver.docker_tag kube-scheduler.docker_tag
kube-apiserver.tar kube-scheduler.tar
kube-controller-manager mounter
kube-controller-manager.docker_tag
[root@master bin]# scp kubelet kube-proxy root@20.0.0.16:/opt/kubernetes/bin/
[root@master bin]# scp kubelet kube-proxy root@20.0.0.17:/opt/kubernetes/bin/
5.2 Extract node.zip on both nodes (only node1 shown)
[root@node1 ~]# ls
anaconda-ks.cfg Documents initial-setup-ks.cfg node.zip Public Videos
Desktop Downloads Music Pictures Templates
//the node package has been uploaded
[root@node1 ~]# unzip node.zip
Archive: node.zip
inflating: proxy.sh
inflating: kubelet.sh
//extract node.zip to get kubelet.sh and proxy.sh
[root@node1 ~]# ls
anaconda-ks.cfg Downloads Music proxy.sh Videos
Desktop initial-setup-ks.cfg node.zip Public
Documents kubelet.sh Pictures Templates
5.3 Create the kubeconfig directory on the master
[root@master bin]# cd /root/k8s/
[root@master k8s]# mkdir kubeconfig
[root@master k8s]# cd kubeconfig/
[root@master kubeconfig]# vim kubeconfig
APISERVER=$1
SSL_DIR=$2
# create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"
# set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=$SSL_DIR/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=2aa752946a90774ac5efdd1699f99d77 \ '//replace with your own token, obtained with cat /opt/kubernetes/cfg/token.csv'
--kubeconfig=bootstrap.kubeconfig
# set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
#----------------------
# create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
--certificate-authority=$SSL_DIR/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=$SSL_DIR/kube-proxy.pem \
--client-key=$SSL_DIR/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
[root@master kubeconfig]# ls /opt/kubernetes/bin/
kube-apiserver kube-controller-manager kubectl kube-scheduler
[root@master kubeconfig]# export PATH=$PATH:/opt/kubernetes/bin
[root@master kubeconfig]# echo "export PATH=$PATH:/opt/kubernetes/bin" >>/etc/profile
//set the PATH environment variable (writing it to /etc/profile makes it persistent)
[root@master kubeconfig]# cd /opt/kubernetes/bin/
//kubectl can now be invoked directly by the system (and tab-completed)
[root@master kubeconfig]# kubectl get cs
NAME STATUS MESSAGE ERROR
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
scheduler Healthy ok
controller-manager Healthy ok
5.4 Generate the config files and copy them to the nodes
[root@master kubeconfig]# bash kubeconfig 20.0.0.15 /root/k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@master kubeconfig]# ls
bootstrap.kubeconfig kubeconfig kube-proxy.kubeconfig
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@20.0.0.16:/opt/kubernetes/cfg
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@20.0.0.17:/opt/kubernetes/cfg
5.5 Create the bootstrap role and grant it permission to request certificate signing from the apiserver
kubectl delete clusterrolebinding kubelet-bootstrap
//only needed if the binding was created before; re-creating it would otherwise report an error
[root@master kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
5.6 On the nodes, generate the kubelet and kubelet.config config files (only node1 shown)
[root@node1 ~]# bash kubelet.sh 20.0.0.16
[root@node1 ~]# ls /opt/kubernetes/cfg/
bootstrap.kubeconfig flanneld kubelet kubelet.config kube-proxy.kubeconfig
[root@node1 ~]# systemctl status kubelet
//check that the kubelet service started and is in the running state
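If kubelet is not in the running state, the journal usually shows why (a bad token, a wrong apiserver address in bootstrap.kubeconfig, and so on); a quick look:
journalctl -u kubelet --no-pager | tail -n 20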
5.7 On the master, check the requests from node1 and node2 and view the certificate status
[root@master kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-MxZ86KhFjv8ETHqxlabe3AbaYXgj1G9lIbnacyO64XA 50s kubelet-bootstrap Pending
node-csr-OWxJLZaH_fq9CE_04oGNYYBq4iN7UmLuwy3Ud2AUb7A 20s kubelet-bootstrap Pending
//Pending: waiting for the cluster to issue a certificate to this node
5.8 Approve the certificates and check the status again
[root@master kubeconfig]# kubectl certificate approve node-csr-MxZ86KhFjv8ETHqxlabe3AbaYXgj1G9lIbnacyO64XA
certificatesigningrequest.certificates.k8s.io/node-csr-MxZ86KhFjv8ETHqxlabe3AbaYXgj1G9lIbnacyO64XA approved
[root@master kubeconfig]# kubectl certificate approve node-csr-OWxJLZaH_fq9CE_04oGNYYBq4iN7UmLuwy3Ud2AUb7A
certificatesigningrequest.certificates.k8s.io/node-csr-OWxJLZaH_fq9CE_04oGNYYBq4iN7UmLuwy3Ud2AUb7A approved
[root@master kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-MxZ86KhFjv8ETHqxlabe3AbaYXgj1G9lIbnacyO64XA 5m3s kubelet-bootstrap Approved,Issued
node-csr-OWxJLZaH_fq9CE_04oGNYYBq4iN7UmLuwy3Ud2AUb7A 4m33s kubelet-bootstrap Approved,Issued
//the nodes have been approved to join the cluster
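If there are several pending requests, they can also be approved in one go instead of one by one; a sketch that approves every CSR currently listed, so only use it when you trust all of them:
kubectl get csr -o name | xargs -n 1 kubectl certificate approve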
5.9 Check the cluster status and start the proxy service
[root@master kubeconfig]# kubectl get node
//if one node is NotReady, check its kubelet; if many nodes are NotReady, check the apiserver; if that looks fine, check the VIP address and keepalived
NAME STATUS ROLES AGE VERSION
20.0.0.16 Ready <none> 2m35s v1.12.3
20.0.0.17 Ready <none> 85s v1.12.3
5.10 Start the kube-proxy service on both nodes (only node1 shown)
[root@node1 ~]# bash proxy.sh 20.0.0.16
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@node1 ~]# systemctl status kube-proxy.service
//the service is in the running state
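As a final smoke test you can schedule a pod and watch it land on one of the nodes; a minimal sketch using kubectl run from this kubectl version (the nginx image is just an example):
kubectl run nginx-test --image=nginx --replicas=1
kubectl get pods -o wide
//the pod should end up Running on 20.0.0.16 or 20.0.0.17 with a 172.17.x.x address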