Scaling a Single-Master k8s Control Plane to Multiple Master Nodes
This article continues from the "scaling out services" section at the end of "Deploying the Latest Stable k8s on Virtual Machines".
In a cluster installed with kubeadm init, the control-plane services only run on the first master node that was initialized.
We need to scale these single-instance services out to multiple master nodes to improve availability.
This is not an enterprise-grade approach; it is fine for personal experimentation, but it comes with a pile of potential problems...
Symptom: the environment has 3 master nodes, but each k8s control-plane service runs as a single instance on just 1 master node.
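You can confirm this by checking where the control-plane pods are scheduled; a quick check such as the following should show etcd, kube-apiserver, kube-controller-manager and kube-scheduler each running only on master1:
kubectl get pods -n kube-system -o wide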
Planning the steps
Decide the scale-out order
- etcd
- kube-apiserver
- kube-controller-manager
- kube-scheduler
Decide the number of master nodes and their IPs
Since this is a planning step, be forward-looking and leave room for growth.
This article plans for 3 master nodes.
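For reference, these are the hostnames and IPs used throughout this article (they also appear in the certificate host lists below):
master1  192.168.56.191
master2  192.168.56.193
master3  192.168.56.195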
Preparing certificates
The certificates in this environment were auto-issued during kubeadm init, and their host lists contain only a single host, so the certificate information needs to be updated.
Both the k8s and etcd clusters need certificates; they can be issued by different CAs, or by the same CA.
Install the cfssl certificate-generation binaries
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/bin/cfssl-json
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/bin/cfssl-certinfo
chmod +x /usr/bin/cfssl*
Create the certificate files
mkdir -p ~/k8scerts/pki/etcd/ && cd ~/k8scerts/pki/
The certificate directory structure is shown below; prepare the certificates accordingly.
As you can see, there are 3 sets of CA-signed certificates:
- apiserver-etcd-client and the etcd certs share one CA
- apiserver-kubelet-client and apiserver share one CA
- front-proxy-client has its own CA
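For orientation, this is the layout kubeadm keeps under /etc/kubernetes/pki, which we will reproduce locally (the exact file set may vary slightly with the k8s version):
/etc/kubernetes/pki/
├── ca.crt, ca.key                          # kubernetes CA
├── apiserver.crt, apiserver.key
├── apiserver-kubelet-client.crt, .key
├── apiserver-etcd-client.crt, .key
├── front-proxy-ca.crt, front-proxy-ca.key  # front-proxy CA
├── front-proxy-client.crt, .key
├── sa.key, sa.pub                          # Service Account key pair
└── etcd/
    ├── ca.crt, ca.key                      # etcd CA
    ├── server.crt, server.key
    ├── peer.crt, peer.key
    └── healthcheck-client.crt, .key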
Generate the self-signed CA certificate
In the pki directory, create the CA CSR config file ca-csr.json:
vi ca-csr.json
{
  "CN": "DanHuangPai",
  "hosts": [
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "MyK8sCompany",
      "OU": "ops1team"
    }
  ],
  "ca": {
    "expiry": "438000h"
  }
}
Generate the CA certificate:
cfssl gencert -initca ca-csr.json | cfssl-json -bare ca
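To sanity-check the result, you can inspect the generated CA with the cfssl-certinfo binary downloaded above:
cfssl-certinfo -cert ca.pem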
Create the CA signing config file
In the pki directory, create the CA signing config ca-config.json; it will be used in the steps below when the CAs issue certificates.
vi ca-config.json
{
  "signing": {
    "default": {
      "expiry": "438000h"
    },
    "profiles": {
      "server": {
        "expiry": "438000h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      },
      "client": {
        "expiry": "438000h",
        "usages": [
          "signing",
          "key encipherment",
          "client auth"
        ]
      },
      "peer": {
        "expiry": "438000h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
Generate the etcd peer certificate
vi etcd-peer-csr.json
{
  "CN": "KaoYaPai",
  "hosts": [
    "127.0.0.1",
    "192.168.56.191",
    "192.168.56.193",
    "192.168.56.195"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangZhou",
      "L": "YunBaiShan",
      "O": "xiaowang",
      "OU": "ops2team"
    }
  ]
}
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json | cfssl-json -bare etcd-peer
Generate the apiserver certificate
Before creating the server certificate, first take a look at the certificate information already in the environment and compare.
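A quick way to dump the SANs of the existing certificate (assuming the default kubeadm path):
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'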
vi apiserver-csr.json
{
  "CN": "ZhaJiPai",
  "hosts": [
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local",
    "master1",
    "master2",
    "master3",
    "10.2.0.1",
    "10.2.0.2",
    "10.2.0.3",
    "127.0.0.1",
    "192.168.56.191",
    "192.168.56.193",
    "192.168.56.195"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "PuDong",
      "O": "yyds",
      "OU": "ops4team"
    }
  ]
}
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json |cfssl-json -bare apiserver
Generate the client certificate
vi client-csr.json
{
  "CN": "XunZhengPai",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShenZhen",
      "L": "DongGuan",
      "O": "yiming",
      "OU": "ops8team"
    }
  ]
}
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json | cfssl-json -bare client
Generate the Service Account key pair
# 2048-bit key, matching what kubeadm itself generates
openssl genrsa -out sa.key 2048
openssl rsa -in sa.key -pubout -out sa.pub
Generate the certificates
- Following the steps above, prepare the json files needed for certificate generation
mkdir -p k8scert/etcd/
cd k8scert
- The CA certificates already in the environment can be reused directly
scp root@192.168.56.191:/etc/kubernetes/pki/ca.* ./
scp root@192.168.56.191:/etc/kubernetes/pki/front-proxy-ca.* ./
scp root@192.168.56.191:/etc/kubernetes/pki/etcd/ca.* ./etcd/
cfssl gencert -ca=ca.crt -ca-key=ca.key -config=ca-config.json -profile=server apiserver-csr.json |cfssl-json -bare apiserver
cfssl gencert -ca=ca.crt -ca-key=ca.key -config=ca-config.json -profile=client client-csr.json | cfssl-json -bare apiserver-kubelet-client
cfssl gencert -ca=front-proxy-ca.crt -ca-key=front-proxy-ca.key -config=ca-config.json -profile=client client-csr.json | cfssl-json -bare front-proxy-client
cfssl gencert -ca=./etcd/ca.crt -ca-key=./etcd/ca.key -config=ca-config.json -profile=client client-csr.json | cfssl-json -bare apiserver-etcd-client
cfssl gencert -ca=./etcd/ca.crt -ca-key=./etcd/ca.key -config=ca-config.json -profile=client client-csr.json | cfssl-json -bare etcd/healthcheck-client
cfssl gencert -ca=./etcd/ca.crt -ca-key=./etcd/ca.key -config=ca-config.json -profile=peer etcd-peer-csr.json | cfssl-json -bare etcd/etcd-peer
cfssl gencert -ca=./etcd/ca.crt -ca-key=./etcd/ca.key -config=ca-config.json -profile=server apiserver-csr.json |cfssl-json -bare etcd/etcd-server
Arrange the pki directory structure to match the environment's, so it can later be copied straight over:
mkdir -p pki/etcd/
cp apiserver-key.pem pki/apiserver.key
cp apiserver.pem pki/apiserver.crt
cp apiserver-kubelet-client.pem pki/apiserver-kubelet-client.crt
cp apiserver-kubelet-client-key.pem pki/apiserver-kubelet-client.key
cp apiserver-etcd-client-key.pem pki/apiserver-etcd-client.key
cp apiserver-etcd-client.pem pki/apiserver-etcd-client.crt
cp front-proxy-client.pem pki/front-proxy-client.crt
cp front-proxy-client-key.pem pki/front-proxy-client.key
cp etcd/etcd-server.pem pki/etcd/server.crt
cp etcd/etcd-server-key.pem pki/etcd/server.key
cp etcd/healthcheck-client-key.pem pki/etcd/healthcheck-client.key
cp etcd/healthcheck-client.pem pki/etcd/healthcheck-client.crt
cp etcd/etcd-peer.pem pki/etcd/peer.crt
cp etcd/etcd-peer-key.pem pki/etcd/peer.key
Prepare the CA certificates and the key pair
scp root@192.168.56.191:/etc/kubernetes/pki/ca.* pki/
scp root@192.168.56.191:/etc/kubernetes/pki/front-proxy-ca.* pki/
scp root@192.168.56.191:/etc/kubernetes/pki/etcd/ca.* pki/etcd/
scp root@192.168.56.191:/etc/kubernetes/pki/sa.* pki/
Update master1's certificates
Replace the certificates
The reason: the certificates in the environment only cover a single node's names, so the multi-node certificates generated above need to be copied over.
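Before overwriting anything, it is prudent to back up the existing certificates first (a precaution, not part of the original steps):
cp -a /etc/kubernetes/pki /etc/kubernetes/pki.bak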
rsync -avP pki/ root@192.168.56.191:/etc/kubernetes/pki/
Here is the updated result.
Reboot the machine
Rebooting the machine is a quick way to make the certificates take effect.
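If you would rather not reboot, an alternative is to bounce the static pods by temporarily moving their manifests out of the way, so kubelet stops the pods and then recreates them with the new certificates (a sketch, not what was done here):
mkdir -p /tmp/manifests-paused
mv /etc/kubernetes/manifests/*.yaml /tmp/manifests-paused/
sleep 20
mv /tmp/manifests-paused/*.yaml /etc/kubernetes/manifests/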
However, after the reboot, the certificate changes cause some anomalies in k8s.
Issue 1
[root@master1 ~]# kubectl logs kube-apiserver-master1 -n kube-system
Error from server (Forbidden): Forbidden (user=XunZhengPai, verb=get, resource=nodes, subresource=proxy) ( pods/log kube-apiserver-master1)
This is because the client identity in the newly issued certificate has changed; rebind the role:
Reference: https://blog.csdn.net/weixin_34331102/article/details/92225474
kubectl create clusterrolebinding clusterrolebinding-XunZhengPai --clusterrole=cluster-admin --user=XunZhengPai
Alternatively, the permissions can be created from a file:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-XunZhengPai
rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/stats
  - nodes/log
  - nodes/spec
  - nodes/metrics
  verbs:
  - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-apiserver-XunZhengPai
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-XunZhengPai
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: XunZhengPai
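Save the manifest to a file (the name is arbitrary, e.g. xunzhengpai-rbac.yaml) and apply it:
kubectl apply -f xunzhengpai-rbac.yaml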
Issue 2
[root@master1 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0               Healthy     {"health":"true"}
[root@master1 ~]#
Reference: http://www.qishunwang.net/news_show_27825.aspx
Comment out or delete the --port=0 startup argument in /etc/kubernetes/manifests/kube-scheduler.yaml and /etc/kubernetes/manifests/kube-controller-manager.yaml.
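For example, the following sed one-liners delete that line (a sketch; double-check the resulting manifests afterwards):
sed -i '/--port=0/d' /etc/kubernetes/manifests/kube-scheduler.yaml
sed -i '/--port=0/d' /etc/kubernetes/manifests/kube-controller-manager.yaml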
After a short while, everything returns to normal.
Scaling out etcd
Here we take scaling out to master2 as the example.
Prepare the certificates
Copy the certificates over to master2:
rsync -avP pki/etcd/ root@192.168.56.193:/etc/kubernetes/pki/etcd/
Prepare the yaml files
Copy the files from master1 to master2:
rsync -avP /etc/kubernetes/manifests/ 192.168.56.193:/tmp/k8syaml
Operate on master2:
cd /tmp/k8syaml
# Replace the IP with master2's
sed -i 's#192.168.56.191#192.168.56.193#' etcd.yaml
# Edit etcd.yaml and add the configuration
vi etcd.yaml
Reference: https://my.oschina.net/u/2306127/blog/2990359
You need to modify --initial-cluster and --name,
and add these two lines:
- --initial-cluster-token=etcd-cluster
- --initial-cluster-state=existing
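Putting it together, the relevant command args in master2's etcd.yaml end up looking roughly like this (a sketch based on this article's IPs; master3 is appended to --initial-cluster the same way when it joins):
- --name=master2
- --initial-cluster=master1=https://192.168.56.191:2380,master2=https://192.168.56.193:2380
- --initial-cluster-token=etcd-cluster
- --initial-cluster-state=existing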
Trigger the etcd pod to start:
cp etcd.yaml /etc/kubernetes/manifests/
At this point etcd still cannot come up; the new member has to be added manually on the existing etcd:
# View the member list
ETCDCTL_API=3 etcdctl --endpoints=https://192.168.56.191:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key member list
# Add the node
ETCDCTL_API=3 etcdctl --endpoints=https://192.168.56.191:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key member add master2 --peer-urls=https://192.168.56.193:2380
After a while, you can see etcd-master2 come up.
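To confirm the cluster is healthy, check endpoint health across both members:
ETCDCTL_API=3 etcdctl --endpoints=https://192.168.56.191:2379,https://192.168.56.193:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key endpoint health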
Add more instances
Scale out the etcd node on master3 in the same way.
Scaling out the apiserver
Prepare the certificates
rsync -avP /etc/kubernetes/pki/apiserver* 192.168.56.193:/etc/kubernetes/pki/
rsync -avP /etc/kubernetes/pki/ca.* 192.168.56.193:/etc/kubernetes/pki/
rsync -avP /etc/kubernetes/pki/front-proxy-* 192.168.56.193:/etc/kubernetes/pki/
rsync -avP /etc/kubernetes/pki/sa.* 192.168.56.193:/etc/kubernetes/pki/
Prepare the yaml files
cd /tmp/k8syaml
# Replace the IP with master2's
sed -i 's#192.168.56.191#192.168.56.193#' kube-apiserver.yaml
# Edit kube-apiserver.yaml and add the configuration
vi kube-apiserver.yaml
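The article does not spell out the edit, but since etcd is now a cluster, the key change is most likely pointing --etcd-servers at all members, roughly:
- --etcd-servers=https://192.168.56.191:2379,https://192.168.56.193:2379,https://192.168.56.195:2379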
Trigger the kube-apiserver pod to start:
cp kube-apiserver.yaml /etc/kubernetes/manifests/
After a while, kube-apiserver comes up.
Add more instances
Bring it up on master3 in the same way.
Scaling out kube-controller-manager
Prepare the certificates
These were already prepared in the previous step; the required files overlap.
Prepare the config file
# On master1, copy the config file to master2
rsync -avP /etc/kubernetes/controller-manager.conf 192.168.56.193:/etc/kubernetes/
Prepare the yaml files
cd /tmp/k8syaml
# Replace the IP with master2's
# sed -i 's#192.168.56.191#192.168.56.193#' kube-controller-manager.yaml
# The current kube-controller-manager.yaml needs no changes
# vi kube-controller-manager.yaml
Trigger the kube-controller-manager pod to start:
cp kube-controller-manager.yaml /etc/kubernetes/manifests/
Add more instances
Repeat on master3.
Scaling out kube-scheduler
Prepare the config file
# On master1, copy the config file to master2
rsync -avP /etc/kubernetes/scheduler.conf 192.168.56.193:/etc/kubernetes/
Prepare the yaml files
cd /tmp/k8syaml
# Replace the IP with master2's
# sed -i 's#192.168.56.191#192.168.56.193#' kube-scheduler.yaml
# The current kube-scheduler.yaml needs no changes
# vi kube-scheduler.yaml
Trigger the kube-scheduler pod to start:
cp kube-scheduler.yaml /etc/kubernetes/manifests/
You can see it come up after a short while.
Add more instances
Repeat on master3.
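As a final sanity check, listing the kube-system pods should now show every control-plane component running on all three masters:
kubectl get pods -n kube-system -o wide | grep -E 'etcd|kube-apiserver|kube-controller-manager|kube-scheduler'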
Afterword
This scaling approach is not very standard and requires a few hacks, and the current k8s version doesn't support kubeadm-init-based deployments very well for this.
Resolve issues case by case.