Deploying kubelet

Cluster planning

Hostname                    Role            IP
HDSS7-21.host.com           kubelet         10.4.7.21
HDSS7-22.host.com           kubelet         10.4.7.22


Note: this guide uses HDSS7-21.host.com as the example host; the other compute node is installed and deployed the same way.

Issuing the kubelet certificate

On the ops host, 10.4.7.200:
Create the JSON config file for the certificate signing request (CSR):

[root@hdss7-200 etc]# cd /opt/certs/
[root@hdss7-200 certs]# vi kubelet-csr.json
{
    "CN": "k8s-kubelet",
    "hosts": [
    "127.0.0.1",
    "10.4.7.10",
    "10.4.7.21",
    "10.4.7.22",
    "10.4.7.23",
    "10.4.7.24",
    "10.4.7.25",
    "10.4.7.26",
    "10.4.7.27",
    "10.4.7.28"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "shengzhen",
            "L": "shengzhen",
            "O": "od",
            "OU": "ops"
        }
    ]
}

# List the node IPs here, including a few spare IPs that might host nodes later. If a new node's IP is not in the certificate, the certificate must be regenerated and copied out to all hosts.

Generate the certificate: it secures communication between kubelet (client) and apiserver (server), so that kubelet can report its status to the apiserver.

[root@hdss7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet

[root@hdss7-200 certs]# ll
-rw-r--r-- 1 root root 1115 Dec 11 16:16 kubelet.csr
-rw-r--r-- 1 root root  498 Dec 11 16:16 kubelet-csr.json
-rw------- 1 root root 1679 Dec 11 16:16 kubelet-key.pem
-rw-r--r-- 1 root root 1468 Dec 11 16:16 kubelet.pem
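Optionally, confirm that every planned node IP made it into the certificate's SANs before distributing it. cfssl-certinfo ships with the cfssl toolchain already used above; openssl works as well:

[root@hdss7-200 certs]# cfssl-certinfo -cert kubelet.pem | grep -A 12 sans
# or with openssl:
[root@hdss7-200 certs]# openssl x509 -in kubelet.pem -noout -text | grep -A 1 'Subject Alternative Name'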

Copy the certificate and private key to /opt/kubernetes/server/bin/cert/ on both 10.4.7.21 and 10.4.7.22; note that the private key must keep mode 600.

[root@hdss7-21 ~]# cd /opt/kubernetes/server/bin/cert/
[root@hdss7-21 cert]# scp 10.4.7.200:/opt/certs/kubelet.pem .
[root@hdss7-21 cert]# scp 10.4.7.200:/opt/certs/kubelet-key.pem .

Note: the trailing dot means the current directory.
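Since the private key must stay at mode 600, it is worth double-checking after the copy; a quick optional fix-up:

[root@hdss7-21 cert]# chmod 600 kubelet-key.pem
[root@hdss7-21 cert]# ls -l kubelet*.pem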

Generating the kubelet.kubeconfig file

Run these steps only once: the resulting kubelet.kubeconfig file is simply copied to the other nodes, so they never need to rerun the set-cluster/set-credentials/set-context commands below.

# Note: run these from the conf directory
[root@hdss7-21 cert]# cd /opt/kubernetes/server/bin/conf

# Adjust the IP before pasting: the server address is the keepalived VIP
[root@hdss7-21 conf]# kubectl config set-cluster myk8s \
    --certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
    --embed-certs=true \
    --server=https://10.4.7.10:7443 \
    --kubeconfig=kubelet.kubeconfig
    

Cluster "myk8s" set.
Note: the point of this config is for kubelet to reach the apiserver through the VIP and report to it.

[root@hdss7-21 conf]# ll
total 8
-rw-r--r-- 1 root root 2223 Jun 27 11:38 audit.yaml
-rw------- 1 root root 1986 Jun 28 14:38 kubelet.kubeconfig
# kubelet.kubeconfig embeds the base64-encoded ca.pem; when the certificates are later renewed, kubelet.kubeconfig must be regenerated for communication to keep working

[root@hdss7-21 conf]# kubectl config set-credentials k8s-node \
  --client-certificate=/opt/kubernetes/server/bin/cert/client.pem \
  --client-key=/opt/kubernetes/server/bin/cert/client-key.pem \
  --embed-certs=true \
  --kubeconfig=kubelet.kubeconfig 

User "k8s-node" set.

[root@hdss7-21 conf]# kubectl config set-context myk8s-context \
  --cluster=myk8s \
  --user=k8s-node \
  --kubeconfig=kubelet.kubeconfig

Context "myk8s-context" created.

[root@hdss7-21 conf]# kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig
Switched to context "myk8s-context".

[root@hdss7-21 conf]# cat kubelet.kubeconfig
# cat the file to confirm it is complete
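Besides cat, kubectl can render the file and confirm the cluster, user, and context entries are all in place (the embedded certificate data is shown redacted):

[root@hdss7-21 conf]# kubectl config view --kubeconfig=kubelet.kubeconfig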


Granting permissions: the role binding

(Create it only once: the binding is stored in etcd and takes effect cluster-wide; the nodes themselves only need the copied kubelet.kubeconfig.)

# Create a k8s-node user and grant it cluster permissions, so the k8s-node user carries the compute-node role
[root@hdss7-21 conf]# vi k8s-node.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node

[root@hdss7-21 conf]# kubectl create -f k8s-node.yaml
clusterrolebinding.rbac.authorization.k8s.io/k8s-node created

# Check that the binding took effect
[root@hdss7-21 conf]# kubectl get clusterrolebinding k8s-node -o yaml
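As an extra sanity check, impersonation can confirm the binding works. This assumes your admin credentials allow the impersonate verb (cluster-admin does); since system:node includes read access to services, the answer should be yes:

[root@hdss7-21 conf]# kubectl auth can-i list services --as=k8s-node
yes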

On HDSS7-22, there is no need to regenerate the kubelet.kubeconfig file or recreate the k8s-node user:

[root@hdss7-22 cert]# cd /opt/kubernetes/server/bin/conf
[root@hdss7-22 conf]# scp 10.4.7.21:/opt/kubernetes/server/bin/conf/kubelet.kubeconfig .

Preparing the pause base image – the sidecar pattern

On the ops host, 10.4.7.200:

[root@hdss7-200 ~]# docker pull kubernetes/pause
[root@hdss7-200 ~]# docker tag f9d5de079539 harbor.od.com/public/pause:latest
[root@hdss7-200 ~]# docker login harbor.od.com
[root@hdss7-200 ~]# docker push harbor.od.com/public/pause:latest


# If the pull times out, check the network (reconnect and retry)
# If the private registry cannot be reached, restart all containers, or remove them and recreate all Harbor-related containers with docker-compose
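A minimal recovery sketch for that last case, assuming Harbor was installed with docker-compose under /opt/harbor (adjust the path to your install):

[root@hdss7-200 ~]# cd /opt/harbor
[root@hdss7-200 harbor]# docker-compose down
[root@hdss7-200 harbor]# docker-compose up -d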

Writing the startup script – change the hostname per node

[root@hdss7-21 conf]# vi /opt/kubernetes/server/bin/kubelet.sh
#!/bin/sh
./kubelet \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 192.168.0.2 \
  --cluster-domain cluster.local \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice \
  --fail-swap-on="false" \
  --client-ca-file ./cert/ca.pem \
  --tls-cert-file ./cert/kubelet.pem \
  --tls-private-key-file ./cert/kubelet-key.pem \
  --hostname-override hdss7-21.host.com \
  --kubeconfig ./conf/kubelet.kubeconfig \
  --log-dir /data/logs/kubernetes/kube-kubelet \
  --pod-infra-container-image harbor.od.com/public/pause:latest \
  --root-dir /data/kubelet

[root@hdss7-21 conf]# mkdir -p /data/logs/kubernetes/kube-kubelet /data/kubelet

[root@hdss7-21 conf]# chmod +x /opt/kubernetes/server/bin/kubelet.sh
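The script pins --cgroup-driver to systemd, which must match Docker's cgroup driver or kubelet will exit on startup; a quick optional check:

[root@hdss7-21 conf]# docker info 2>/dev/null | grep -i 'cgroup driver'
# should report "Cgroup Driver: systemd"; if it reports cgroupfs, set
# "exec-opts": ["native.cgroupdriver=systemd"] in /etc/docker/daemon.json and restart docker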


[root@hdss7-22 conf]# vi /opt/kubernetes/server/bin/kubelet.sh
#!/bin/sh
./kubelet \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 192.168.0.2 \
  --cluster-domain cluster.local \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice \
  --fail-swap-on="false" \
  --client-ca-file ./cert/ca.pem \
  --tls-cert-file ./cert/kubelet.pem \
  --tls-private-key-file ./cert/kubelet-key.pem \
  --hostname-override hdss7-22.host.com \
  --kubeconfig ./conf/kubelet.kubeconfig \
  --log-dir /data/logs/kubernetes/kube-kubelet \
  --pod-infra-container-image harbor.od.com/public/pause:latest \
  --root-dir /data/kubelet

[root@hdss7-22 conf]# mkdir -p /data/logs/kubernetes/kube-kubelet /data/kubelet

[root@hdss7-22 conf]# chmod +x /opt/kubernetes/server/bin/kubelet.sh

# Adjust the program name per host, e.g. [program:kube-kubelet-7-21]
[root@hdss7-21 conf]# vi /etc/supervisord.d/kube-kubelet.ini
[program:kube-kubelet-7-21]	
command=/opt/kubernetes/server/bin/kubelet.sh     ; the program (relative uses PATH, can take args)
numprocs=1                                        ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin              ; directory to cwd to before exec (def no cwd)
autostart=true                                    ; start at supervisord start (default: true)
autorestart=true              		          ; restart at unexpected quit (default: true)
startsecs=30                                      ; number of secs prog must stay running (def. 1)
startretries=3                                    ; max # of serial start failures (default 3)
exitcodes=0,2                                     ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                   ; signal used to kill process (default TERM)
stopwaitsecs=10                                   ; max num secs to wait b4 SIGKILL (default 10)
user=root                                         ; setuid to this UNIX account to run the program
redirect_stderr=true                              ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log   ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                      ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                          ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                       ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                       ; emit events on stdout writes (default false)
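On hdss7-22 the same ini works once the program name is adjusted; a small sketch that copies it over and renames the program:

[root@hdss7-22 conf]# scp 10.4.7.21:/etc/supervisord.d/kube-kubelet.ini /etc/supervisord.d/
[root@hdss7-22 conf]# sed -i 's/kube-kubelet-7-21/kube-kubelet-7-22/' /etc/supervisord.d/kube-kubelet.ini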

[root@hdss7-21 conf]# supervisorctl update
[root@hdss7-22 conf]# supervisorctl update

[root@hdss7-21 conf]# supervisorctl status
etcd-server-7-21                 RUNNING   pid 6565, uptime 0:24:15
kube-apiserver-7-21              RUNNING   pid 6566, uptime 0:24:15
kube-controller-manager-7-21     RUNNING   pid 6551, uptime 0:24:15
kube-kubelet-7-21                RUNNING   pid 16663, uptime 0:01:14
kube-scheduler-7-21              RUNNING   pid 6552, uptime 0:24:15

Note: if startup fails, check the error log:
tail -fn 200 /data/logs/kubernetes/kube-kubelet/kubelet.stdout.log
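After fixing the cause, restart the program through supervisor rather than running the script by hand:

[root@hdss7-21 conf]# supervisorctl restart kube-kubelet-7-21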

[root@hdss7-21 cert]# kubectl get nodes
NAME                STATUS   ROLES    AGE     VERSION
hdss7-21.host.com   Ready    <none>   15h     v1.15.2
hdss7-22.host.com   Ready    <none>   8m51s   v1.15.2
# Set the node ROLES by adding labels; both labels can be applied to the same node
[root@hdss7-21 cert]# kubectl label node hdss7-21.host.com node-role.kubernetes.io/master=
[root@hdss7-21 cert]# kubectl label node hdss7-21.host.com node-role.kubernetes.io/node=

[root@hdss7-22 cert]# kubectl label node hdss7-22.host.com node-role.kubernetes.io/master=
[root@hdss7-22 cert]# kubectl label node hdss7-22.host.com node-role.kubernetes.io/node=
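To double-check the result, list a node's labels; removing a role later just takes the label key with a trailing minus:

[root@hdss7-21 cert]# kubectl get node hdss7-21.host.com --show-labels
# to remove a role: kubectl label node hdss7-21.host.com node-role.kubernetes.io/node-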

[root@hdss7-22 ~]# kubectl get nodes                               
NAME                STATUS   ROLES         AGE   VERSION
hdss7-21.host.com   Ready    master,node   15h   v1.15.2
hdss7-22.host.com   Ready    master,node   12m   v1.15.2