Ultra-Detailed: Kubernetes Binary Single-Master Deployment, Worth a Read
K8s Binary Deployment
I. Deploying a Single-Master Cluster
1. Environment preparation
k8s cluster master01: 192.168.22.100
k8s cluster node01: 192.168.22.110
k8s cluster node02: 192.168.22.119
etcd cluster node 1: 192.168.22.100
etcd cluster node 2: 192.168.22.110
etcd cluster node 3: 192.168.22.119
- Stop and disable the firewall on every lab VM, and put SELinux into permissive mode
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
2. Deploying the etcd cluster
On the master node
# Create the /opt/k8s working directory
mkdir /opt/k8s
cd /opt/k8s
# Create the certificate-generation script
vim etcd-cert.sh
#!/bin/bash
# ca-config.json: multiple profiles can be defined, each with its own expiry
# and usage scenarios. JSON does not allow comments, so the field notes live
# here rather than inside the heredoc:
#   expiry           - certificate validity; 87600h = 10 years
#   signing          - the certificate may sign other certificates
#                      (the generated ca.pem will carry CA=TRUE)
#   key encipherment - asymmetric key encipherment, e.g. RSA
#   server auth      - clients may use this CA to verify server certificates
#   client auth      - servers may use this CA to verify client certificates
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
# ca-csr.json: the CA signing request; "algo" and "size" select the key
# algorithm, typically RSA with a 2048-bit key.
cat > ca-csr.json <<EOF
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
# server-csr.json: "hosts" must list the IP addresses of all three etcd nodes.
cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.22.100",
    "192.168.22.110",
    "192.168.22.119"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
# Create the etcd startup script
vim etcd.sh
#!/bin/bash
# Example: ./etcd.sh etcd01 192.168.22.100 etcd02=https://192.168.22.110:2380,etcd03=https://192.168.22.119:2380
# Usage: <etcd name> <this node's IP> <remaining cluster members as name=URL pairs>
ETCD_NAME=$1        # node name, unique within the cluster
ETCD_IP=$2
ETCD_CLUSTER=$3
WORK_DIR=/opt/etcd  # working directory holding the cfg, bin and ssl subdirectories
# Field notes for the generated EnvironmentFile (kept out of the file itself
# so systemd parses it cleanly):
#   ETCD_LISTEN_PEER_URLS   - peer URL, listens for traffic from other members
#   ETCD_LISTEN_CLIENT_URLS - client URL, listens for requests from etcd clients
cat > $WORK_DIR/cfg/etcd <<EOF
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="${ETCD_NAME}=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
cat > /usr/lib/systemd/system/etcd.service <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
- Generate the certificates (the copy step below expects them in an etcd-cert subdirectory):
mkdir /opt/k8s/etcd-cert
mv /opt/k8s/etcd-cert.sh /opt/k8s/etcd-cert/
cd /opt/k8s/etcd-cert/
chmod +x etcd-cert.sh
./etcd-cert.sh
ls *.pem
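The server certificate can be sanity-checked for the three node IPs; a quick inspection sketch (openssl ships with a stock CentOS install):
# Print the SAN section of the freshly generated server certificate
openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'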
- Install etcd
cd /opt/k8s
# Upload etcd-v3.3.10-linux-amd64.tar.gz to /opt/k8s
tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
# Create the directories for the etcd config file, binaries and certificates
mkdir -p /opt/etcd/{cfg,bin,ssl}
mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin    # move the binaries into place
cp etcd-cert/*.pem /opt/etcd/ssl    # copy the certificates
chmod +x etcd.sh
./etcd.sh etcd01 192.168.22.100 etcd02=https://192.168.22.110:2380,etcd03=https://192.168.22.119:2380    # blocks while waiting for the other members to join
ps -ef | grep etcd    # in a second terminal, check that the process is running
# Push the certificates, config and unit file to the two node machines
scp -r /opt/etcd/ root@192.168.22.110:/opt
scp -r /opt/etcd/ root@192.168.22.119:/opt
scp /usr/lib/systemd/system/etcd.service root@192.168.22.110:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service root@192.168.22.119:/usr/lib/systemd/system/
Check the cluster health (run on the master; the certificate paths below are relative, so run from /opt/etcd/ssl)
cd /opt/etcd/ssl
/opt/etcd/bin/etcdctl \
--ca-file=ca.pem \
--cert-file=server.pem \
--key-file=server-key.pem \
--endpoints="https://192.168.22.100:2379,https://192.168.22.110:2379,https://192.168.22.119:2379" \
cluster-health
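With all three members up, the output reports each member as healthy and ends with an overall verdict, roughly like this (member IDs are placeholders):
member <id1> is healthy: got healthy result from https://192.168.22.100:2379
member <id2> is healthy: got healthy result from https://192.168.22.110:2379
member <id3> is healthy: got healthy result from https://192.168.22.119:2379
cluster is healthy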
On the node machines
Note: in the copied config file, change ETCD_NAME="etcd01" to this node's own name, and replace the IP addresses with the node's own.
- Edit cfg/etcd on each node, then start the service (see the example below)
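For instance, on node01 (etcd02, 192.168.22.110) the edited /opt/etcd/cfg/etcd follows the template generated by etcd.sh above, with only the name and IPs changed:
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.22.110:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.22.110:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.22.110:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.22.110:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.22.100:2380,etcd02=https://192.168.22.110:2380,etcd03=https://192.168.22.119:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Then start etcd on the node:
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd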
II. Flannel Network Deployment
1. Deploying Docker
Deploy Docker on all node machines
Install the dependency packages
yum install -y yum-utils device-mapper-persistent-data lvm2
Configure the Alibaba Cloud mirror repository
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl start docker
systemctl status docker
2. Pod network communication in k8s
Flannel is what allows Pods on different nodes to communicate with each other.
Flannel encapsulates the inner Pod IP packet (in UDP datagrams, or VXLAN frames with the vxlan backend used here) and sends it out of the physical NIC to the destination node, following the routing information kept in etcd. The destination node's flanneld decapsulates the packet to recover the inner Pod IP, then forwards it via flannel0 --> docker0 to the target Pod.
On the master node
- Write the Flannel network configuration (the subnet range to allocate Pod networks from) into etcd for flanneld to use; the certificate paths are relative, so run from /opt/etcd/ssl
cd /opt/etcd/ssl
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.22.100:2379,https://192.168.22.110:2379,https://192.168.22.119:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
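To confirm the key was written (and, once flanneld is running on the nodes, to list the per-node subnet leases), a quick check from the same directory:
# Read back the network config just written
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.22.100:2379,https://192.168.22.110:2379,https://192.168.22.119:2379" get /coreos.com/network/config
# After the nodes' flanneld start, each node's lease appears under this prefix
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.22.100:2379,https://192.168.22.110:2379,https://192.168.22.119:2379" ls /coreos.com/network/subnets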
On all node machines
- Upload flannel.sh and flannel-v0.10.0-linux-amd64.tar.gz to /opt and unpack the archive
tar -zxvf flannel-v0.10.0-linux-amd64.tar.gz
- Create the working directory
mkdir -p /opt/kubernetes/{cfg,bin,ssl}
Move flanneld and mk-docker-opts.sh into the working directory's bin subdirectory
mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/
- Create the startup script
vim flannel.sh
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
cat > /opt/kubernetes/cfg/flanneld <<EOF
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \\
-etcd-cafile=/opt/etcd/ssl/ca.pem \\
-etcd-certfile=/opt/etcd/ssl/server.pem \\
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
cat > /usr/lib/systemd/system/flanneld.service <<EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
# After flanneld starts, mk-docker-opts.sh generates Docker's network options:
#   -k DOCKER_NETWORK_OPTIONS : name of the environment variable to write; docker reads it at startup
#   -d /run/flannel/subnet.env: path of the generated file that docker's unit will source
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
- Enable the flannel network
cd /opt
chmod +x flannel.sh
./flannel.sh https://192.168.22.100:2379,https://192.168.22.110:2379,https://192.168.22.119:2379
- Connect Docker to flannel
vim /usr/lib/systemd/system/docker.service
----- add at line 12
EnvironmentFile=/run/flannel/subnet.env
----- modify line 13 (insert the $DOCKER_NETWORK_OPTIONS parameter)
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
- Check the subnet flannel allocated to this node
cat /run/flannel/subnet.env
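After mk-docker-opts.sh has run, the file holds the single variable docker consumes; it typically looks like this (the subnet differs per node, the values here are illustrative):
DOCKER_NETWORK_OPTIONS=" --bip=172.17.26.1/24 --ip-masq=false --mtu=1450"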
systemctl daemon-reload
systemctl restart docker
3. Testing
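A minimal cross-node connectivity check: start a container on each node and ping one from the other. A sketch (busybox is chosen here only because it ships with ping; any image with ping works):
# On node01: start a container and note its flannel-assigned IP
docker run -itd --name test01 busybox
docker inspect -f '{{.NetworkSettings.IPAddress}}' test01
# On node02: start a container and ping node01's container IP (example value)
docker run -itd --name test02 busybox
docker exec test02 ping -c 3 172.17.26.2
If the ping succeeds, flannel is routing Pod traffic between the nodes.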
III. Deploying the Master Components
- Upload master.zip and k8s-cert.sh to the /opt/k8s directory
unzip master.zip
chmod +x *.sh
- Create the kubernetes working directory
mkdir -p /opt/kubernetes/{cfg,bin,ssl}
- Create a directory for generating the CA certificate and the component certificates and private keys
mkdir /opt/k8s/k8s-cert
mv /opt/k8s/k8s-cert.sh /opt/k8s/k8s-cert
cd /opt/k8s/k8s-cert/
chmod +x *
vim k8s-cert.sh    # contents of the k8s-cert.sh script
#!/bin/bash
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
# apiserver-csr.json: "hosts" must contain every address used to reach the
# apiserver; 192.168.22.100 is master01 and 192.168.22.111 a second master,
# the remaining addresses being spare entries (e.g. load balancers/VIP for a
# later HA setup).
cat > apiserver-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.22.100",
    "192.168.22.111",
    "192.168.22.120",
    "192.168.22.123",
    "192.168.22.222",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes apiserver-csr.json | cfssljson -bare apiserver
# admin-csr.json: O=system:masters puts the admin user into the group that
# RBAC binds to cluster-admin.
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
./k8s-cert.sh
ls *.pem
- Copy the CA certificate and the apiserver certificate and private key into the kubernetes working directory's ssl subdirectory
cp ca*pem apiserver*pem /opt/kubernetes/ssl/
- Upload kubernetes-server-linux-amd64.tar.gz to /opt/k8s and unpack it
cd /opt/k8s
tar zxvf kubernetes-server-linux-amd64.tar.gz
- Copy the key master-component binaries into the kubernetes working directory's bin subdirectory
cd /opt/k8s/kubernetes/server/bin/
cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
ln -s /opt/kubernetes/bin/* /usr/local/bin/
- Create the bootstrap token authentication file (format: token,user,uid,group)
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
3a65d00378a6fc75e4b7a087baeb9b5e
vim /opt/kubernetes/cfg/token.csv
3a65d00378a6fc75e4b7a087baeb9b5e,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
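Equivalently, the token file can be produced non-interactively; a small sketch (each run yields a different token than the example value above):
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /opt/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF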
- Start the apiserver (6443 is the secure port; 8080 is the local insecure port)
./apiserver.sh 192.168.22.100 https://192.168.22.100:2379,https://192.168.22.110:2379,https://192.168.22.119:2379
systemctl status kube-apiserver.service
netstat -natp | grep 6443
netstat -natp | grep 8080
- Start the scheduler and controller-manager services, then check the master status
./scheduler.sh 127.0.0.1
./controller-manager.sh 127.0.0.1
kubectl get componentstatuses
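With everything healthy, the output should look roughly like the following sketch:
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}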
- Copy kubelet and kube-proxy to the node machines
cd /opt/k8s/kubernetes/server/bin
scp kubelet kube-proxy root@192.168.22.110:/opt/kubernetes/bin/
scp kubelet kube-proxy root@192.168.22.119:/opt/kubernetes/bin/
- Create the directory for generating the kubelet config files and upload kubeconfig.sh into /opt/k8s/kubeconfig
mkdir /opt/k8s/kubeconfig
cd /opt/k8s/kubeconfig
chmod +x kubeconfig.sh
vim kubeconfig.sh
#!/bin/bash
#example: ./kubeconfig.sh 192.168.22.100 /opt/k8s/k8s-cert/
BOOTSTRAP_TOKEN=$(awk -F ',' '{print $1}' /opt/kubernetes/cfg/token.csv)
APISERVER=$1
SSL_DIR=$2
export KUBE_APISERVER="https://$APISERVER:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=$SSL_DIR/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
kubectl config set-cluster kubernetes \
--certificate-authority=$SSL_DIR/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=$SSL_DIR/kube-proxy.pem \
--client-key=$SSL_DIR/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
- Generate the kubelet bootstrap and kube-proxy config files
./kubeconfig.sh 192.168.22.100 /opt/k8s/k8s-cert/
- Copy bootstrap.kubeconfig and kube-proxy.kubeconfig to the node machines
scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.22.110:/opt/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.22.119:/opt/kubernetes/cfg/
- RBAC authorization: allow the kubelet-bootstrap user to create certificate signing requests
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
- View the role and the binding just granted
kubectl get clusterroles | grep system:node-bootstrapper
kubectl get clusterrolebinding
IV. Deploying on the Two Node Machines
- Upload node.zip to /opt and unzip it to obtain kubelet.sh and proxy.sh
unzip node.zip
- Start the kubelet service with the kubelet.sh script
cd /opt
chmod +x kubelet.sh
./kubelet.sh 192.168.22.110
ps aux | grep kubelet    # check the kubelet service
- Check the certificates and the kubelet.kubeconfig file
ls /opt/kubernetes/cfg/kubelet.kubeconfig
ls /opt/kubernetes/ssl/
- Load the ip_vs kernel modules
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs | grep -o "^[^.]*"); do
echo $i
/sbin/modinfo -F filename $i > /dev/null 2>&1 && /sbin/modprobe $i
done
- Start the kube-proxy service with the proxy.sh script
cd /opt
chmod +x proxy.sh
./proxy.sh 192.168.22.110
systemctl status kube-proxy.service
- Copy kubelet.sh and proxy.sh from node01 to node02 (then run them there, as sketched below)
scp kubelet.sh proxy.sh root@192.168.22.119:/opt/
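On node02 the same scripts are run with node02's own IP, mirroring the node01 steps (remember to load the ip_vs modules there as well):
cd /opt
chmod +x kubelet.sh proxy.sh
./kubelet.sh 192.168.22.119
./proxy.sh 192.168.22.119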
- With both nodes deployed, return to the master node
On the master, check whether the nodes' CSRs have arrived
kubectl get csr
Approve the CSR requests
kubectl certificate approve node-csr-6svmGrXeU1cDstNz5APDx3P2SchgyBvrpxQGnOoCS1I
kubectl certificate approve node-csr-JhUoEdcRKKk3f0hY-meUjGBe1m1da7uqUDfttPidM1o
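The CSR names differ in every environment; a generic sketch that approves whatever is still pending, built from the same two commands used above:
kubectl get csr | grep Pending | awk '{print $1}' | xargs -r kubectl certificate approve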
- Check the node status; after approval both nodes should register and report Ready
kubectl get nodes