A K8s node runs two components:

  • kubelet
  • kube-proxy

Every node runs a kubelet service process, which listens on port 10250 by default, receives and executes instructions from the master, and manages Pods and the containers inside them. Each kubelet registers its node with the API Server, periodically reports the node's resource usage to the master, and monitors node and container resources via cAdvisor.
Every machine also runs a kube-proxy service, which watches the API server for changes to Services and Endpoints and configures load balancing for Services via iptables, ipvs, etc. (TCP and UDP only).
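
Once both components are running (the installation steps follow below), a quick sanity check on a node might look like this; it assumes the default ports described above and that the ipvsadm tool is installed.

ss -lntp | grep 10250      # kubelet listening on its default port
ipvsadm -Ln                # ipvs virtual servers programmed by kube-proxy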

Dependencies

  • Docker
  • flannel

For installing these dependencies, see my earlier posts on Docker and flannel.

How it works

(Figure: kubelet working principle — kubelet.png)
(Figure: kube-proxy working principle — kube-proxy.png)

Installation steps

  • Enable IPVS
  • Download the binaries
  • Generate certificates
  • Deploy the kubelet
  • Deploy kube-proxy
  • Test example (Nginx)

Enable IPVS

# Run the following script on all Kubernetes nodes (node1 and node2):
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr   ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh   ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
    /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
    if [ $? -eq 0 ]; then
        /sbin/modprobe \${kernel_module}
    fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
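
As an optional, hedged alternative to the sysconfig script above, systems with systemd can load the same modules at boot via systemd-modules-load; a minimal sketch:

cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
systemctl restart systemd-modules-load.service && lsmod | grep ip_vs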

Download and unpack the binaries

wget https://dl.k8s.io/v1.13.6/kubernetes-node-linux-amd64.tar.gz
[root@k8s ~]# tar zxf kubernetes-node-linux-amd64.tar.gz 
[root@k8s ~]# cd kubernetes/node/bin/
[root@k8s bin]# cp kube-proxy kubelet /opt/kubernetes/bin/
[root@k8s bin]# cp kubectl /usr/bin/
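
A quick check that the binaries landed where expected and match the v1.13.6 tarball downloaded above:

/opt/kubernetes/bin/kubelet --version     # should report v1.13.6
kubectl version --client                  # client-only version check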

Generate certificates

Copy ca.pem, ca-key.pem, server.pem and server-key.pem from the master:

[root@k8s ssl]# scp *.pem root@10.0.52.14:/opt/kubernetes/ssl/
ca-key.pem                                                                             100% 1675     3.2MB/s   00:00    
ca.pem                                                                                 100% 1359     3.7MB/s   00:00    
server-key.pem                                                                         100% 1679     3.8MB/s   00:00    
server.pem                                                                             100% 1643     4.4MB/s   00:00    
[root@k8s ssl]# 

Create the kube-proxy CSR file and generate the kube-proxy.pem and kube-proxy-key.pem certificates.
They can be generated on the master and copied to the node with scp, or generated directly on the node; the latter requires cfssl to be installed on the node.
For installing cfssl, see my earlier post on etcd cluster deployment.

cat << EOF | tee /opt/kubernetes/ssl/kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
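
Before wiring the certificate into a kubeconfig, its subject and validity can be checked with openssl (a hedged sanity check; cfssl certinfo would also work):

openssl x509 -in kube-proxy.pem -noout -subject -dates
# the subject should contain CN=system:kube-proxy and O=k8s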

Deploy the kubelet

1. Create bootstrap.kubeconfig and kube-proxy.kubeconfig with a script

Parameters:
BOOTSTRAP_TOKEN: the first field of the token.csv generated when deploying the master
APISERVER: the master's address
SSL_DIR: the directory holding the certificate files

[root@k8s ~]# cat environment.sh 
#!/bin/bash
BOOTSTRAP_TOKEN=7d558bb3a5206cf78f881de7d7b82ca6
APISERVER=10.0.52.13
SSL_DIR=/opt/kubernetes/ssl

# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the kube-proxy kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Run environment.sh in /opt/kubernetes/cfg/ to generate the two config files, bootstrap.kubeconfig and kube-proxy.kubeconfig:

[root@k8s ~]# cd /opt/kubernetes/cfg/
[root@k8s cfg]# sh environment.sh 
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@k8s cfg]# ls
bootstrap.kubeconfig  environment.sh  flanneld  kube-proxy.kubeconfig
[root@k8s cfg]# 
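
To confirm that the embedded cluster, user and context entries look right before distributing the files, kubectl config view can be pointed at each kubeconfig:

kubectl config view --kubeconfig=bootstrap.kubeconfig
kubectl config view --kubeconfig=kube-proxy.kubeconfig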

2. Create the kubelet.config file

Parameters:
address: the IP address of this node
clusterDNS: must be an address within the service-cluster-ip-range (10.0.0.0/24) configured on the apiserver

cat << EOF | tee /opt/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 10.0.52.14
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- "10.0.0.2"
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
EOF
3. Create the kubelet options file

Parameters:
address: this node's IP address
hostname-override: this node's hostname; using the node IP directly is recommended, and the flag can also be omitted
pod-infra-container-image: the pause container image; the original k8s.gcr.io/pause-amd64:3.0 is replaced with registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0

cat << EOF | tee /opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--address=10.0.52.14 \\
--hostname-override=10.0.52.14 \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet.config \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF
4. Create the kubelet systemd unit
cat << EOF | tee /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
5. On the master, bind the kubelet-bootstrap user to the system cluster role

Note: kubectl connects to 127.0.0.1:8080 by default, so run this on the master.
If creation fails, delete the binding first and then recreate it:
kubectl delete clusterrolebinding kubelet-bootstrap

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
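
To confirm the binding was created (run on the master; the output should list the system:node-bootstrapper role and the kubelet-bootstrap user):

kubectl describe clusterrolebinding kubelet-bootstrap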
6. Start the service
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet

[root@k8s cfg]# systemctl daemon-reload
[root@k8s cfg]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@k8s cfg]# systemctl start kubelet
[root@k8s cfg]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2019-05-29 15:24:30 CST; 12s ago
 Main PID: 14428 (kubelet)
   Memory: 17.3M
   CGroup: /system.slice/kubelet.service
           └─14428 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --address=10.0.52.14 --hostname-override=10.0....

May 29 15:24:30 k8s.node1 kubelet[14428]: I0529 15:24:30.774250   14428 feature_gate.go:206] feature gates: &{map[]}
May 29 15:24:31 k8s.node1 kubelet[14428]: I0529 15:24:31.224772   14428 server.go:825] Using self-signed cert (/o....key)
May 29 15:24:31 k8s.node1 kubelet[14428]: I0529 15:24:31.228968   14428 mount_linux.go:180] Detected OS with systemd
May 29 15:24:31 k8s.node1 kubelet[14428]: I0529 15:24:31.229037   14428 server.go:407] Version: v1.13.6
May 29 15:24:31 k8s.node1 kubelet[14428]: I0529 15:24:31.229154   14428 feature_gate.go:206] feature gates: &{map[]}
May 29 15:24:31 k8s.node1 kubelet[14428]: I0529 15:24:31.229220   14428 feature_gate.go:206] feature gates: &{map[]}
May 29 15:24:31 k8s.node1 kubelet[14428]: I0529 15:24:31.229316   14428 plugins.go:103] No cloud provider specified.
May 29 15:24:31 k8s.node1 kubelet[14428]: I0529 15:24:31.229330   14428 server.go:523] No cloud provider specifie...e: ""
May 29 15:24:31 k8s.node1 kubelet[14428]: I0529 15:24:31.229355   14428 bootstrap.go:65] Using bootstrap kubeconf... file
May 29 15:24:31 k8s.node1 kubelet[14428]: I0529 15:24:31.231105   14428 bootstrap.go:96] No valid private key and...w one
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s cfg]# 

7. Approve the kubelet CSR on the master

CSR requests can be approved manually or automatically; the steps below show manual approval. First, list the CSRs.
csr is the short name for certificatesigningrequests; resource short names can be listed with kubectl api-resources.
All of the following commands are run on the master.

[root@k8s ~]# kubectl  get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-vRVNCJT48G9I8g2k9A7vVDkCj1cmFpdrxXQnCLhrwe0   119s   kubelet-bootstrap   Pending

Approve the node:

[root@k8s ~]# kubectl certificate approve node-csr-vRVNCJT48G9I8g2k9A7vVDkCj1cmFpdrxXQnCLhrwe0
certificatesigningrequest.certificates.k8s.io/node-csr-vRVNCJT48G9I8g2k9A7vVDkCj1cmFpdrxXQnCLhrwe0 approved

Check the CSR again:

[root@k8s ~]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-vRVNCJT48G9I8g2k9A7vVDkCj1cmFpdrxXQnCLhrwe0   77m   kubelet-bootstrap   Approved,Issued

The CONDITION has now changed to Approved,Issued.

Check the cluster status:

[root@k8s ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
10.0.52.14   Ready    <none>   2m57s   v1.13.6

Node 10.0.52.14 has now joined the cluster.

On the node, /opt/kubernetes/ssl on 10.0.52.14 now contains automatically generated kubelet-client certificates:

[root@k8s ssl]# ls
ca-key.pem  kubelet-client-2019-05-29-16-41-26.pem  kubelet.crt  kube-proxy-key.pem  server-key.pem
ca.pem      kubelet-client-current.pem              kubelet.key  kube-proxy.pem      server.pem

Note: if kubelet or kube-proxy is misconfigured during this process (for example a wrong listen IP or hostname causing a "node not found" error), delete the kubelet-client certificates, restart the kubelet service, and approve the new CSR again, as sketched below.
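
A hedged sketch of that recovery sequence (the certificate file names and CSR names will differ on your node):

# On the node: remove the auto-generated client certificates and restart kubelet
rm -f /opt/kubernetes/ssl/kubelet-client-*
systemctl restart kubelet

# On the master: delete the stale CSR if necessary, then approve the newly submitted one
kubectl delete csr <old-csr-name>
kubectl get csr
kubectl certificate approve <new-csr-name>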

Deploy kube-proxy

1. Create the kube-proxy configuration file

Parameters:
hostname-override: this node's IP address
cluster-cidr: must correspond to the service-cluster-ip-range (10.0.0.0/24) configured on the apiserver
proxy-mode: the proxy mode; options include ipvs and iptables, and ipvs is used here

cat << EOF | tee /opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=10.0.52.14 \\
--cluster-cidr=10.0.0.0/24 \\
--proxy-mode=ipvs \\
--ipvs-scheduler=wrr \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF
2. Create the kube-proxy systemd unit
cat << EOF | tee /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
3. Start the service
[root@k8s cfg]# systemctl daemon-reload
[root@k8s cfg]# systemctl enable kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@k8s cfg]# systemctl start kube-proxy
[root@k8s cfg]# systemctl status  kube-proxy
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2019-05-29 16:58:30 CST; 7s ago
 Main PID: 21608 (kube-proxy)
   Memory: 12.8M
   CGroup: /system.slice/kube-proxy.service
           ‣ 21608 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=10.0.52.14 --cluster-ci...

May 29 16:58:30 k8s.node1 kube-proxy[21608]: I0529 16:58:30.914320   21608 iptables.go:391] running iptables-rest...ters]
May 29 16:58:30 k8s.node1 kube-proxy[21608]: I0529 16:58:30.916289   21608 proxier.go:728] syncProxyRules took 45.60983ms
May 29 16:58:31 k8s.node1 kube-proxy[21608]: I0529 16:58:31.268300   21608 config.go:141] Calling handler.OnEndpo...pdate
May 29 16:58:31 k8s.node1 kube-proxy[21608]: I0529 16:58:31.269856   21608 config.go:141] Calling handler.OnEndpo...pdate
May 29 16:58:33 k8s.node1 kube-proxy[21608]: I0529 16:58:33.274659   21608 config.go:141] Calling handler.OnEndpo...pdate
May 29 16:58:33 k8s.node1 kube-proxy[21608]: I0529 16:58:33.277684   21608 config.go:141] Calling handler.OnEndpo...pdate
May 29 16:58:35 k8s.node1 kube-proxy[21608]: I0529 16:58:35.280655   21608 config.go:141] Calling handler.OnEndpo...pdate
May 29 16:58:35 k8s.node1 kube-proxy[21608]: I0529 16:58:35.284373   21608 config.go:141] Calling handler.OnEndpo...pdate
May 29 16:58:37 k8s.node1 kube-proxy[21608]: I0529 16:58:37.287116   21608 config.go:141] Calling handler.OnEndpo...pdate
May 29 16:58:37 k8s.node1 kube-proxy[21608]: I0529 16:58:37.291050   21608 config.go:141] Calling handler.OnEndpo...pdate
Hint: Some lines were ellipsized, use -l to show in full.
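
With proxy-mode=ipvs, the virtual servers programmed by kube-proxy can be inspected with ipvsadm (assuming the ipvsadm package is available on the node):

yum install -y ipvsadm     # if not already installed
ipvsadm -Ln                # list ipvs virtual servers and their real servers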

4. Deploy node 10.0.52.6 the same way
  • Copy the relevant files to 10.0.52.6
# Copy the systemd unit files to 10.0.52.6
[root@k8s cfg]#  scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@10.0.52.6:/usr/lib/systemd/system/
root@10.0.52.6's password: 
kubelet.service                                                                        100%  264   512.0KB/s   00:00    
kube-proxy.service                                                                     100%  231   565.5KB/s   00:00    

# Copy the config files to 10.0.52.6
[root@k8s cfg]# scp bootstrap.kubeconfig kubelet.config kube-proxy kubelet kubelet.kubeconfig kube-proxy.kubeconfig root@10.0.52.6:/opt/kubernetes/cfg/
root@10.0.52.6's password: 
bootstrap.kubeconfig                                                                   100% 2164     4.0MB/s   00:00    
kubelet.config                                                                         100%  264   713.2KB/s   00:00    
kube-proxy                                                                             100%  184   495.6KB/s   00:00    
kubelet                                                                                100%  408     1.1MB/s   00:00    
kubelet.kubeconfig                                                                     100% 2293     5.6MB/s   00:00    
kube-proxy.kubeconfig                                                                  100% 6266    13.3MB/s   00:00    

# Copy the certificate files to 10.0.52.6
[root@k8s ssl]# scp ca-key.pem kube-proxy-key.pem server-key.pem ca.pem kube-proxy.pem server.pem root@10.0.52.6:/opt/kubernetes/ssl/
root@10.0.52.6's password: 
ca-key.pem                                                                             100% 1675     2.7MB/s   00:00    
kube-proxy-key.pem                                                                     100% 1675     3.5MB/s   00:00    
server-key.pem                                                                         100% 1679     4.0MB/s   00:00    
ca.pem                                                                                 100% 1359     3.4MB/s   00:00    
kube-proxy.pem                                                                         100% 1403     3.7MB/s   00:00    
server.pem                                                                             100% 1643     4.4MB/s   00:00    

# Copy the binaries to 10.0.52.6
[root@k8s bin]# scp kubelet kube-proxy root@10.0.52.6:/opt/kubernetes/bin/
root@10.0.52.6's password: 
kubelet                                                                                100%  108MB 107.9MB/s   00:01    
kube-proxy                                                                             100%   33MB 122.0MB/s   00:00    

  • Modify the configuration files
[root@k8s cfg]# grep 10.0.52.14 *
kubelet:--address=10.0.52.14 \
kubelet:--hostname-override=10.0.52.14 \
kubelet.config:address: 10.0.52.14
kube-proxy:--hostname-override=10.0.52.14 \

# Change the four occurrences above to 10.0.52.6; a sed sketch follows below
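
A hedged one-liner that performs those four replacements on 10.0.52.6 (double-check the files afterwards):

sed -i 's/10\.0\.52\.14/10.0.52.6/g' /opt/kubernetes/cfg/{kubelet,kubelet.config,kube-proxy}
grep 10.0.52.6 /opt/kubernetes/cfg/*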
  • Start the services
systemctl  daemon-reload
systemctl  enable kubelet
systemctl  start  kubelet
systemctl  enable kube-proxy
systemctl  start  kube-proxy
  • Approve the kubelet CSR on the master
[root@k8s ~]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-EXa7F9nMgEwPdURbGErwkcjIH_opG_wVhvQ4p4Hru3g   2m12s   kubelet-bootstrap   Pending
node-csr-vRVNCJT48G9I8g2k9A7vVDkCj1cmFpdrxXQnCLhrwe0   112m    kubelet-bootstrap   Approved,Issued
[root@k8s ~]# kubectl certificate approve node-csr-EXa7F9nMgEwPdURbGErwkcjIH_opG_wVhvQ4p4Hru3g
certificatesigningrequest.certificates.k8s.io/node-csr-EXa7F9nMgEwPdURbGErwkcjIH_opG_wVhvQ4p4Hru3g approved
[root@k8s ~]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-EXa7F9nMgEwPdURbGErwkcjIH_opG_wVhvQ4p4Hru3g   2m37s   kubelet-bootstrap   Approved,Issued
node-csr-vRVNCJT48G9I8g2k9A7vVDkCj1cmFpdrxXQnCLhrwe0   112m    kubelet-bootstrap   Approved,Issued
[root@k8s ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
10.0.52.14   Ready    <none>   35m   v1.13.6
10.0.52.6    Ready    <none>   13s   v1.13.6
[root@k8s ~]# 

At this point the node deployment is complete. To add more nodes, just repeat the steps used for 10.0.52.6.

Test example (Nginx)

  • Create nginx.yaml
[root@k8s ~]# cat nginx.yaml 
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.10
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: nginx
  name: nginx
  namespace: default
spec:
  type: NodePort 
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
[root@k8s ~]# 

  • Create the Deployment and Service resources
[root@k8s ~]# kubectl create -f nginx.yaml 
deployment.apps/nginx-deployment created
service/nginx created

  • Check the resources
[root@k8s ~]# kubectl get all
NAME                                    READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-7544fc9954-6nxvd   1/1     Running   0          9m3s
pod/nginx-deployment-7544fc9954-hhnhw   1/1     Running   0          9m3s
pod/nginx-deployment-7544fc9954-qlvzs   1/1     Running   0          9m3s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        47h
service/nginx        NodePort    10.0.0.207   <none>        80:37085/TCP   9m3s

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   3/3     3            3           9m3s

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deployment-7544fc9954   3         3         3       9m3s

  • Test access

Because the nodes run flannel, a node can reach the service directly at ClusterIP:port, the entry point for clients inside the cluster.
Clients outside the cluster access the service through NodeIP:nodePort, the entry point exposed for external access.
If nodePort is not specified, a port is chosen automatically from the 30000~50000 range.

Access from a node:
curl 10.0.0.207:80
Access from outside the cluster:
http://NodeIP:37085

In this example, visiting http://10.0.52.14:37085/ shows the familiar Nginx welcome page, which confirms the cluster is working.
(Figure: Nginx welcome page)

If you see a permission error like the following:

[root@k8s ~]# kubectl logs nginx-deployment-7544fc9954-6nxvd -f
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-deployment-7544fc9954-6nxvd)

Fix: bind the user system:anonymous to the cluster-admin role (other, more restrictive roles can also be used):
kubectl create clusterrolebinding cluster-system-anonymous \
  --clusterrole=cluster-admin \
  --user=system:anonymous
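
Once the binding exists, the same kubectl logs command should stream the pod log instead of the Forbidden error:

kubectl get clusterrolebinding cluster-system-anonymous   # confirm the binding was created
kubectl logs nginx-deployment-7544fc9954-6nxvd            # should now succeed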