Host List

This lab uses 5 hosts: 3 serve as master hosts and 2 as node hosts.

Node IP        OS Version   hostname -f     Installed Software
192.168.0.1    RHEL7.4      k8s-master01    docker, etcd, flanneld, kube-apiserver, kube-controller-manager, kube-scheduler
192.168.0.2    RHEL7.4      k8s-master02    docker, etcd, flanneld, kube-apiserver, kube-controller-manager, kube-scheduler
192.168.0.3    RHEL7.4      k8s-master03    docker, etcd, flanneld, kube-apiserver, kube-controller-manager, kube-scheduler
192.168.0.4    RHEL7.4      k8s-node01      docker, flanneld, kubelet, kube-proxy
192.168.0.5    RHEL7.4      k8s-node02      docker, flanneld, kubelet, kube-proxy

A Kubernetes Node runs the following components:

  • kubelet
  • kube-proxy

Install and Configure kubelet

  • kubelet runs on the Node machines. It receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs.
  • A Master node may also be used as a Node; kubelet can be installed on the Master nodes as well.
  • When kubelet starts, it automatically registers node information with kube-apiserver; its built-in cAdvisor collects and monitors the node's resource usage.

Download and Extract the Binaries (k8s-master)

# wget https://dl.k8s.io/v1.15.3/kubernetes-server-linux-amd64.tar.gz 
# tar xf kubernetes-server-linux-amd64.tar.gz

# cd kubernetes/server/bin/
# cp kubelet  kube-proxy  /k8s/kubernetes/bin/
# scp kubelet kube-proxy 192.168.0.4:/k8s/kubernetes/bin/
# scp kubelet kube-proxy 192.168.0.5:/k8s/kubernetes/bin/
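
A quick sanity check after copying (a minimal sketch using the same paths): confirm the binaries run and report the expected release.

# /k8s/kubernetes/bin/kubelet --version
# /k8s/kubernetes/bin/kube-proxy --version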

Create Tokens (k8s-master)

 

kubeadm token create --description kubelet-bootstrap-token --groups system:bootstrappers:k8s-master1 --kubeconfig ~/.kube/config
kubeadm token create --description kubelet-bootstrap-token --groups system:bootstrappers:k8s-master2 --kubeconfig ~/.kube/config
kubeadm token create --description kubelet-bootstrap-token --groups system:bootstrappers:k8s-master3 --kubeconfig ~/.kube/config

View the Tokens Created by kubeadm for Each Node (k8s-master)

# kubeadm token list --kubeconfig ~/.kube/config

Create the bootstrap kubeconfig (k8s-master)

kubectl config set-cluster kubernetes --certificate-authority=/k8s/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.0.3:6443 --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap --token=j53og3.p7bdy6ezasrlbszg --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig    

  • When --embed-certs is true, the certificate-authority certificate is embedded in the generated bootstrap.kubeconfig file;
  • Each node's kubeconfig is generated with that node's own token: replace the --token value above with the token created for that node (see the sketch after this list);
  • No key or certificate is specified when setting the kubelet client credentials; they are generated later by kube-apiserver;
  • A created token is valid for 1 day; once it expires it can no longer be used and will be cleaned up by kube-controller-manager's tokencleaner (if that controller is enabled);
  • When kube-apiserver receives a kubelet bootstrap token, it sets the requesting user to system:bootstrap:<token-id> and the group to system:bootstrappers;
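
As a convenience, the token creation and kubeconfig steps can be combined. The sketch below (assuming the same paths as above, with k8s-node01 / 192.168.0.4 as an illustrative example; the group name is likewise illustrative) captures a fresh token into a variable and produces the bootstrap.kubeconfig for that node:

# Create a token for this node and capture it
BOOTSTRAP_TOKEN=$(kubeadm token create --description kubelet-bootstrap-token \
  --groups system:bootstrappers:k8s-node01 --kubeconfig ~/.kube/config)

# Build the node's bootstrap kubeconfig with that token
kubectl config set-cluster kubernetes --certificate-authority=/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true --server=https://192.168.0.3:6443 --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# Distribute it to the node (the kubelet options below expect it under /k8s/kubernetes/cfg/)
scp bootstrap.kubeconfig 192.168.0.4:/k8s/kubernetes/cfg/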

Bootstrap Token Auth and Granting Permissions (k8s-master)

When kubelet starts, it sends a TLS bootstrapping request to kube-apiserver. The kubelet-bootstrap user from the bootstrap token must first be granted the system:node-bootstrapper role; only then does kubelet have permission to create certificate signing requests (certificatesigningrequests).

# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created

If this binding is not created, kubelet may fail with an error like:
failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "kubelet-bootstrap" cannot create resource "certificatesigningrequests" in API group "certificates.k8s.io" at the cluster scope

# View the kubelet-bootstrap binding
# kubectl describe clusterrolebinding kubelet-bootstrap
# Delete the kubelet-bootstrap binding
# kubectl delete clusterrolebinding kubelet-bootstrap

 

Create the kubelet Parameter Configuration Template (k8s-node)

cat << EOF >  /k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.0.4
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["101.254.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
EOF

Note: the other node machines use the same configuration; just change the address to the corresponding node's IP.

Create the kubelet Configuration File (k8s-node)

cat << EOF > /k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.0.4 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF

Note: the other node machines use the same configuration; just change the --hostname-override address to the corresponding node's IP.

Create the kubelet systemd Unit File (k8s-node)

cat << EOF > /usr/lib/systemd/system/kubelet.service 

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kubelet
ExecStart=/k8s/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target    
EOF

Distribute the Files (k8s-node)

scp /k8s/kubernetes/cfg/kubelet* 192.168.0.5:/k8s/kubernetes/cfg/
scp /lib/systemd/system/kubelet.service 192.168.0.5:/lib/systemd/system/kubelet.service 

The other nodes need to change their address and --hostname-override values accordingly, for example:
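
A sketch (adjust IPs and paths to your environment) that patches the copies on 192.168.0.5 in place:

ssh 192.168.0.5 "sed -i 's/192.168.0.4/192.168.0.5/g' /k8s/kubernetes/cfg/kubelet /k8s/kubernetes/cfg/kubelet.config"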

Start kubelet (k8s-node)

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
systemctl status kubelet

Note: swap must be disabled before starting kubelet, otherwise it fails with "[ERROR Swap]: running with swap on is not supported. Please disable swap".
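
One common way to disable swap (a sketch; the fstab edit keeps it disabled across reboots, adjust the pattern if your fstab uses tabs):

# swapoff -a
# sed -i '/ swap / s/^/#/' /etc/fstab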

Approve kubelet's TLS Certificate Requests (k8s-master)

When kubelet starts for the first time, it sends a certificate signing request to kube-apiserver. The request must be approved before Kubernetes adds the Node to the cluster.

Approve the Request Manually (k8s-master)

# List pending (unapproved) CSR requests
# kubectl get csr
NAME                                                   AGE   REQUESTOR                 CONDITION
node-csr-puQ6ol8tyJt1g_zXZLWEh7NjnTyDCKZE_qBbRPBFiCk   56s   system:bootstrap:q3sb4h   Pending

# kubectl get nodes
No resources found.    

# Approve the CSR request:
# kubectl certificate approve node-csr-puQ6ol8tyJt1g_zXZLWEh7NjnTyDCKZE_qBbRPBFiCk
certificatesigningrequest.certificates.k8s.io/node-csr-puQ6ol8tyJt1g_zXZLWEh7NjnTyDCKZE_qBbRPBFiCk approved
 
# kubectl get csr  
NAME                                                   AGE     REQUESTOR                 CONDITION
node-csr-puQ6ol8tyJt1g_zXZLWEh7NjnTyDCKZE_qBbRPBFiCk   2m18s   system:bootstrap:q3sb4h   Approved,Issued

# kubectl get nodes
NAME           STATUS     ROLES    AGE   VERSION
10.124.3.104   NotReady   <none>   0s    v1.15.0

# View the details
# kubectl describe csr node-csr-puQ6ol8tyJt1g_zXZLWEh7NjnTyDCKZE_qBbRPBFiCk
Name:               node-csr-puQ6ol8tyJt1g_zXZLWEh7NjnTyDCKZE_qBbRPBFiCk
Labels:             <none>
Annotations:        <none>
CreationTimestamp:  Wed, 18 Sep 2019 22:54:27 +0800
Requesting User:    system:bootstrap:q3sb4h
Status:             Approved,Issued
Subject:
         Common Name:    system:node:10.124.3.104
         Serial Number:  
         Organization:   system:nodes
Events:  <none>
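
With several nodes joining at once, the pending CSRs can also be approved in one shot (a sketch):

# kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve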

Automatically Approve CSR Requests (k8s-master)

Create three ClusterRoleBindings, used respectively to auto-approve client certificates, client certificate renewals, and server certificate renewals:

cat > csr-crb.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
EOF

  • auto-approve-csrs-for-group: automatically approves a node's first CSR; note that for the first CSR the requesting group is system:bootstrappers;
  • node-client-cert-renewal: automatically approves renewal of a node's expiring client certificates; the group in the automatically generated certificates is system:nodes;
  • node-server-cert-renewal: automatically approves renewal of a node's expiring server certificates; the group in the automatically generated certificates is system:nodes;

# Apply the configuration
# kubectl apply -f csr-crb.yaml
clusterrolebinding.rbac.authorization.k8s.io/auto-approve-csrs-for-group created
clusterrolebinding.rbac.authorization.k8s.io/node-client-cert-renewal created
clusterrole.rbac.authorization.k8s.io/approve-node-server-renewal-csr created
clusterrolebinding.rbac.authorization.k8s.io/node-server-cert-renewal created
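
The created objects can be checked afterwards (a sketch):

# kubectl get clusterrolebinding auto-approve-csrs-for-group node-client-cert-renewal node-server-cert-renewal
# kubectl get clusterrole approve-node-server-renewal-csr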

Check the kubelet Service (k8s-node)

# netstat -lnpt|grep kubelet
tcp        0      0 127.0.0.1:39037         0.0.0.0:*               LISTEN      18584/kubelet       
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      18584/kubelet       
tcp        0      0 10.124.3.105:10250      0.0.0.0:*               LISTEN      18584/kubelet       
tcp        0      0 10.124.3.105:10255      0.0.0.0:*               LISTEN      18584/kubelet

  • 39037: cAdvisor HTTP service;
  • 10248: healthz HTTP service;
  • 10250: HTTPS API service;
  • 10255: read-only port
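
These endpoints can be probed directly once kubelet is running; a minimal sketch (addresses as in the netstat output above):

# curl -s http://127.0.0.1:10248/healthz
# curl -s http://10.124.3.105:10255/pods | head -c 200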

Configure kube-proxy (k8s-master)

kube-proxy runs on every node. It watches the apiserver for changes to Services and Endpoints and creates routing rules to load-balance traffic across services.

Create the kube-proxy Certificate Signing Request (k8s-master)

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF    

  • The predefined kube-apiserver RoleBinding system:node-proxier binds the user system:kube-proxy to the role system:node-proxier, which grants permission to call kube-apiserver's Proxy-related APIs.
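
This predefined binding can be inspected on the cluster (a sketch):

# kubectl describe clusterrolebinding system:node-proxier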

Generate the kube-proxy Client Certificate and Private Key (k8s-master)

# cfssl gencert -ca=/k8s/kubernetes/ssl/ca.pem -ca-key=/k8s/kubernetes/ssl/ca-key.pem -config=/k8s/kubernetes/ssl/ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy    

# ls kube-proxy*
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem

# cp kube-proxy*.pem /k8s/kubernetes/ssl/
# scp kube-proxy*.pem 192.168.0.4:/k8s/kubernetes/ssl/
# scp kube-proxy*.pem 192.168.0.5:/k8s/kubernetes/ssl/
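
Optionally, verify that the issued certificate carries the expected subject (CN=system:kube-proxy, O=k8s) before distributing it (a sketch using openssl):

# openssl x509 -in kube-proxy.pem -noout -subject -dates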

Create the kube-proxy kubeconfig File (k8s-master)

kubectl config set-cluster kubernetes --certificate-authority=/k8s/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.0.1:6443 --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy --client-certificate=/k8s/kubernetes/ssl/kube-proxy.pem --client-key=/k8s/kubernetes/ssl/kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig    

# Copy the kube-proxy kubeconfig file to all node machines
cp kube-proxy.kubeconfig /k8s/kubernetes/cfg/
scp kube-proxy.kubeconfig 192.168.0.4:/k8s/kubernetes/cfg/
scp kube-proxy.kubeconfig 192.168.0.5:/k8s/kubernetes/cfg/

Create the kube-proxy Configuration File (k8s-node)

cat << EOF > /k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.0.4 \
--cluster-cidr=101.254.0.0/16 \
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"    
EOF

  • --cluster-cidr must be consistent with the kube-apiserver --service-cluster-ip-range value
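
A quick consistency check (a sketch; the kube-apiserver options file path /k8s/kubernetes/cfg/kube-apiserver is an assumption based on the layout used here):

# grep -o 'service-cluster-ip-range=[^ "]*' /k8s/kubernetes/cfg/kube-apiserver
# grep -o 'cluster-cidr=[^ "]*' /k8s/kubernetes/cfg/kube-proxy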

Create the kube-proxy systemd Unit File (k8s-node)

cat << EOF > /lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-proxy
ExecStart=/k8s/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target    
EOF

Start the kube-proxy Service (k8s-node)

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
systemctl status kube-proxy
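
By default kube-proxy exposes a metrics endpoint on 127.0.0.1:10249 and a healthz endpoint on port 10256, so a quick check similar to the kubelet one above (a sketch):

# netstat -lnpt | grep kube-proxy
# curl -s http://127.0.0.1:10249/metrics | head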
