k8s study notes: deploying the master nodes
Host list
This lab uses five hosts: three as master nodes and two as worker (node) nodes.
| Node IP | OS version | hostname -f | Installed software |
|---|---|---|---|
| 192.168.0.1 | RHEL7.4 | k8s-master01 | docker, etcd, flanneld, kube-apiserver, kube-controller-manager, kube-scheduler |
| 192.168.0.2 | RHEL7.4 | k8s-master02 | docker, etcd, flanneld, kube-apiserver, kube-controller-manager, kube-scheduler |
| 192.168.0.3 | RHEL7.4 | k8s-master03 | docker, etcd, flanneld, kube-apiserver, kube-controller-manager, kube-scheduler |
| 192.168.0.4 | RHEL7.4 | k8s-node01 | docker, flanneld, kubelet, kube-proxy |
| 192.168.0.5 | RHEL7.4 | k8s-node02 | docker, flanneld, kubelet, kube-proxy |
A Kubernetes master node runs the following components:
- kube-apiserver
- kube-scheduler
- kube-controller-manager
Currently these three components must be deployed on the same machine:
- kube-scheduler, kube-controller-manager, and kube-apiserver are tightly coupled in function;
- only one kube-scheduler and one kube-controller-manager process may be active at a time; if several instances run, a leader is chosen by election (see the check sketched after this list).
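Once the cluster is up (so this check only works at the very end of this walkthrough), the current leader of each component can be read from the holderIdentity recorded on the Endpoints object it uses as its election lock; a minimal sketch, assuming kubectl is already on the PATH and configured:
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity
kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep holderIdentity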
Download and extract the binaries
# wget https://dl.k8s.io/v1.15.3/kubernetes-server-linux-amd64.tar.gz
# tar xf kubernetes-server-linux-amd64.tar.gz
# cd kubernetes/server/bin/
# cp kubeadm kube-apiserver kube-controller-manager kubectl kube-scheduler /k8s/kubernetes/bin/
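An optional sanity check (not part of the original notes) to confirm that the copied binaries run and match the intended v1.15.3 release:
/k8s/kubernetes/bin/kube-apiserver --version
/k8s/kubernetes/bin/kube-controller-manager --version
/k8s/kubernetes/bin/kube-scheduler --version
/k8s/kubernetes/bin/kubectl version --client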
Configure and start kube-apiserver
Create the kubernetes certificate
Create the kubernetes certificate signing request:
cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.0.1",
    "192.168.0.2",
    "192.168.0.3",
    "101.254.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
- If the hosts field is not empty, it must list the IPs and domain names that are authorized to use the certificate, so the IPs of the master nodes being deployed are listed above; if the apiserver will sit behind a load-balancer address or domain name, add that as well.
- Also add the cluster IP of the kubernetes service that kube-apiserver registers (the Service Cluster IP), which is normally the first IP of the range given by the kube-apiserver --service-cluster-ip-range option, e.g. "101.254.0.1".
Generate the kubernetes certificate and private key
# cfssl gencert -ca=/k8s/kubernetes/ssl/ca.pem -ca-key=/k8s/kubernetes/ssl/ca-key.pem -config=/k8s/kubernetes/ssl/ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
# ls kub*
kubernetes.csr kubernetes-csr.json kubernetes-key.pem kubernetes.pem
# cp kubernetes*.pem /k8s/kubernetes/ssl/
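An optional check (not part of the original notes): confirm that the names and IPs discussed in the notes above, i.e. the master IPs, the Service Cluster IP and the kubernetes.default.* names, ended up in the certificate's SAN list:
openssl x509 -in /k8s/kubernetes/ssl/kubernetes.pem -noout -text | grep -A1 "Subject Alternative Name"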
Create the client token file used by kube-apiserver
When a kubelet starts for the first time it sends a TLS Bootstrapping request to kube-apiserver. kube-apiserver checks whether the token in the request matches an entry in its configured token.csv; if it does, it automatically issues a certificate and key for that kubelet.
The token used for TLS Bootstrapping can be generated with:
# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
ef502f26a00ac235b04977cde1dc9916
cat << EOF > /k8s/kubernetes/cfg/token.csv
ef502f26a00ac235b04977cde1dc9916,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
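A minimal variant of the same step (my own sketch, not from the original notes): capture the token in a shell variable so exactly the same value can be reused later when building the kubelet bootstrap kubeconfig on the worker nodes:
# generate a random 32-character hex token and remember it
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat << EOF > /k8s/kubernetes/cfg/token.csv
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF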
Create the apiserver configuration file
cat << EOF > /k8s/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.0.3:2379,https://192.168.0.2:2379,https://192.168.0.1:2379 \
--bind-address=192.168.0.1 \
--secure-port=6443 \
--advertise-address=192.168.0.1 \
--allow-privileged=true \
--service-cluster-ip-range=101.254.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/k8s/kubernetes/ssl/kubernetes.pem \
--tls-private-key-file=/k8s/kubernetes/ssl/kubernetes-key.pem \
--client-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/k8s/kubernetes/ssl/ca.pem \
--etcd-certfile=/k8s/kubernetes/ssl/kubernetes.pem \
--etcd-keyfile=/k8s/kubernetes/ssl/kubernetes-key.pem"
EOF
Note: on the other master nodes, replace the node-specific values (--bind-address and --advertise-address) with that node's own IP, as sketched below.
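A hypothetical helper for that substitution (OTHER_IP and the output file name are assumptions of this sketch, not from the original notes):
# render the same config for another master, swapping in its own IP
OTHER_IP=192.168.0.2
sed -e "s/--bind-address=192.168.0.1/--bind-address=${OTHER_IP}/" \
    -e "s/--advertise-address=192.168.0.1/--advertise-address=${OTHER_IP}/" \
    /k8s/kubernetes/cfg/kube-apiserver > kube-apiserver.${OTHER_IP}
Copy the result to /k8s/kubernetes/cfg/kube-apiserver on the corresponding master.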
Create the systemd unit file for kube-apiserver
cat << EOF > /lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
Start the kube-apiserver service
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
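A quick health check, assuming the configuration above (the insecure port 8080 is still enabled on localhost, and anonymous access to /healthz on the secure port is allowed, which is the default in this release); both calls should print ok:
# local insecure endpoint
curl http://127.0.0.1:8080/healthz
# secure endpoint, skipping certificate verification for brevity
curl -k https://192.168.0.1:6443/healthz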
Grant the kubernetes certificate access to the kubelet API
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
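Optionally confirm that the binding was created with the expected role and subject:
kubectl describe clusterrolebinding kube-apiserver:kubelet-apis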
Configure and start kube-controller-manager
Create the kube-controller-manager configuration file
cat << EOF > /k8s/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=101.254.0.0/16 \
--cluster-cidr=100.100.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--root-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem"
EOF
Notes:
- The kube-apiserver configured above still listens on the default insecure port 8080, which allows unauthenticated local access on 127.0.0.1, so a kube-controller-manager on the same node can simply point at that local endpoint and does not need its own client certificate; see the probe sketched after these notes.
- If kube-apiserver is started with --insecure-port=0 to disable the insecure port (8080), a dedicated certificate must be created for kube-controller-manager instead.
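The local insecure endpoint that --master=127.0.0.1:8080 relies on can be probed directly, with no token or certificate:
curl http://127.0.0.1:8080/version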
Create the kube-controller-manager systemd unit file
cat << EOF > /lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/k8s/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
Start the kube-controller-manager service
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
systemctl status kube-controller-manager
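A quick health check against kube-controller-manager's default insecure status port (10252 in this release; adjust if --port is set differently); it should print ok:
curl http://127.0.0.1:10252/healthz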
Configure and start kube-scheduler
Create the kube-scheduler configuration file
cat << EOF > /k8s/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true"
EOF
Create the kube-scheduler systemd unit file
cat << EOF > /lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/k8s/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
Start the kube-scheduler service
systemctl daemon-reload
systemctl enable kube-scheduler.service
systemctl restart kube-scheduler.service
systemctl status kube-scheduler.service
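Likewise for kube-scheduler on its default insecure status port (10251 in this release):
curl http://127.0.0.1:10251/healthz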
Verify the master node
Add the binaries to the PATH variable
echo "export PATH=$PATH:/k8s/kubernetes/bin/" >>/etc/profile
source /etc/profile
# Check the status of the master services
# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
Distribute the files to the other master nodes
scp -r /k8s/kubernetes/bin/* 192.168.0.2:/k8s/kubernetes/bin/
scp -r /k8s/kubernetes/bin/* 192.168.0.3:/k8s/kubernetes/bin/
scp -r /k8s/kubernetes/ssl/* 192.168.0.2:/k8s/kubernetes/ssl/
scp -r /k8s/kubernetes/ssl/* 192.168.0.3:/k8s/kubernetes/ssl/
scp -r /k8s/kubernetes/cfg/* 192.168.0.2:/k8s/kubernetes/cfg/
scp -r /k8s/kubernetes/cfg/* 192.168.0.3:/k8s/kubernetes/cfg/
scp /lib/systemd/system/kube-* 192.168.0.2:/lib/systemd/system/
scp /lib/systemd/system/kube-* 192.168.0.3:/lib/systemd/system/
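An equivalent loop form (assuming passwordless SSH between the masters and that the target directories already exist):
for ip in 192.168.0.2 192.168.0.3; do
  for dir in bin ssl cfg; do
    scp -r /k8s/kubernetes/${dir}/* ${ip}:/k8s/kubernetes/${dir}/
  done
  scp /lib/systemd/system/kube-* ${ip}:/lib/systemd/system/
done
On each target node, remember to adjust the node-specific addresses in the kube-apiserver configuration (as noted earlier) and run systemctl daemon-reload before enabling and starting the services.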