In this version the kube-proxy component uses IPVS mode for load balancing throughout, so before kube-proxy can work properly the IPVS configuration and its dependencies must be prepared in advance (on every node).
```shell
## Enable IPVS: install the userspace tools and load the required kernel modules
[root@k8s-master01 ~]# ansible k8s-node -m shell -a "yum install -y ipvsadm ipset conntrack"
[root@k8s-master01 ~]# ansible k8s-node -m shell -a 'modprobe -- ip_vs'
[root@k8s-master01 ~]# ansible k8s-node -m shell -a 'modprobe -- ip_vs_rr'
[root@k8s-master01 ~]# ansible k8s-node -m shell -a 'modprobe -- ip_vs_wrr'
[root@k8s-master01 ~]# ansible k8s-node -m shell -a 'modprobe -- ip_vs_sh'
[root@k8s-master01 ~]# ansible k8s-node -m shell -a 'modprobe -- nf_conntrack_ipv4'
```
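Note that modules loaded with `modprobe` do not survive a reboot. One way to persist them on systemd-based nodes is a `modules-load.d` fragment, sketched below (the file name `ipvs.conf` is our choice, not from the original setup):

```
# /etc/modules-load.d/ipvs.conf -- loaded by systemd-modules-load at boot,
# one module name per line
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
```

Distribute it to every node the same way as the other files, e.g. with `ansible k8s-node -m copy`.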
```shell
## Create the kube-proxy CSR; the CN "system:kube-proxy" is recognized by the
## built-in system:node-proxier RBAC binding
[root@k8s-master01 ~]# vim /opt/k8s/certs/kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "system:kube-proxy",
      "OU": "System"
    }
  ]
}
```
```shell
[root@k8s-master01 ~]# cd /opt/k8s/certs/
[root@k8s-master01 certs]# cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
  -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/opt/k8s/certs/ca-config.json \
  -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2019/04/25 17:39:22 [INFO] generate received request
2019/04/25 17:39:22 [INFO] received CSR
2019/04/25 17:39:22 [INFO] generating key: rsa-2048
2019/04/25 17:39:22 [INFO] encoded CSR
2019/04/25 17:39:22 [INFO] signed certificate with serial number 265052874363255358468035370835573343349230196562
2019/04/25 17:39:22 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
```
```shell
[root@k8s-master01 certs]# ll kube-proxy*
-rw-r--r-- 1 root root 1029 Apr 25 17:39 kube-proxy.csr
-rw-r--r-- 1 root root  302 Apr 25 17:37 kube-proxy-csr.json
-rw------- 1 root root 1675 Apr 25 17:39 kube-proxy-key.pem
-rw-r--r-- 1 root root 1428 Apr 25 17:39 kube-proxy.pem
```
```shell
## Distribute the certificate and key to the nodes
[root@k8s-master01 certs]# ansible k8s-node -m copy -a 'src=/opt/k8s/certs/kube-proxy-key.pem dest=/etc/kubernetes/ssl/'
[root@k8s-master01 certs]# ansible k8s-node -m copy -a 'src=/opt/k8s/certs/kube-proxy.pem dest=/etc/kubernetes/ssl/'
```
Next, create the kubeconfig file that the kube-proxy component uses to connect to the apiserver.
```shell
## Set the cluster parameters
[root@k8s-master01 ~]# kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-proxy.kubeconfig
Cluster "kubernetes" set.
## Set the client credentials
[root@k8s-master01 ~]# kubectl config set-credentials system:kube-proxy \
  --client-certificate=/opt/k8s/certs/kube-proxy.pem \
  --embed-certs=true \
  --client-key=/opt/k8s/certs/kube-proxy-key.pem \
  --kubeconfig=kube-proxy.kubeconfig
User "system:kube-proxy" set.
## Set the cluster context
[root@k8s-master01 ~]# kubectl config set-context system:kube-proxy@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
Context "system:kube-proxy@kubernetes" created.
## Set the default context
[root@k8s-master01 ~]# kubectl config use-context system:kube-proxy@kubernetes --kubeconfig=kube-proxy.kubeconfig
Switched to context "system:kube-proxy@kubernetes".
## Distribute the kubeconfig to the nodes
[root@k8s-master01 ~]# ansible k8s-node -m copy -a 'src=/root/kube-proxy.kubeconfig dest=/etc/kubernetes/config/'
```
```shell
[root@k8s-master01 ~]# vim /opt/k8s/cfg/kube-proxy.conf
###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS=" --bind-address=0.0.0.0 \
  --cleanup-ipvs=true \
  --cluster-cidr=10.254.0.0/16 \
  --hostname-override=k8s-node01 \
  --healthz-bind-address=0.0.0.0 \
  --healthz-port=10256 \
  --masquerade-all=true \
  --proxy-mode=ipvs \
  --ipvs-min-sync-period=5s \
  --ipvs-sync-period=5s \
  --ipvs-scheduler=wrr \
  --kubeconfig=/etc/kubernetes/config/kube-proxy.kubeconfig \
  --logtostderr=true \
  --v=2"

## Distribute the parameter file
### Change the hostname-override field to each node's own hostname first
[root@k8s-master01 ~]# ansible k8s-node -m copy -a 'src=/opt/k8s/cfg/kube-proxy.conf dest=/etc/kubernetes/config/'
```
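Since the file hard-codes `--hostname-override=k8s-node01`, pushing it verbatim would be wrong on every other node. A minimal sketch of rendering a per-node copy before pushing it (the node names here are assumptions; substitute your own inventory):

```shell
## Hypothetical helper: render one kube-proxy.conf per node so that
## --hostname-override matches each node's own hostname
for node in k8s-node01 k8s-node02 k8s-node03; do
    sed "s/--hostname-override=[^ ]*/--hostname-override=${node}/" \
        /opt/k8s/cfg/kube-proxy.conf > /tmp/kube-proxy.conf.${node}
    # then push each rendered file individually, e.g.:
    # ansible "${node}" -m copy -a "src=/tmp/kube-proxy.conf.${node} dest=/etc/kubernetes/config/kube-proxy.conf"
done
```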
```shell
[root@k8s-master01 ~]# vim /opt/k8s/unit/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config/kube-proxy.conf
ExecStart=/usr/local/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

## Distribute to the nodes
[root@k8s-master01 ~]# ansible k8s-node -m copy -a 'src=/opt/k8s/unit/kube-proxy.service dest=/usr/lib/systemd/system/'
## Start the service
[root@k8s-master01 ~]# ansible k8s-node -m shell -a 'systemctl daemon-reload'
[root@k8s-master01 ~]# ansible k8s-node -m shell -a 'systemctl enable kube-proxy'
[root@k8s-master01 ~]# ansible k8s-node -m shell -a 'systemctl start kube-proxy'
```
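Before inspecting IPVS it is worth confirming the service actually came up everywhere. A quick check using the same ansible pattern as above (the health endpoint is the one configured by `--healthz-bind-address`/`--healthz-port`):

```shell
## Verify kube-proxy is active and healthy on every node
[root@k8s-master01 ~]# ansible k8s-node -m shell -a 'systemctl is-active kube-proxy'
[root@k8s-master01 ~]# ansible k8s-node -m shell -a 'curl -s http://127.0.0.1:10256/healthz'
```

If a node reports failure, `journalctl -u kube-proxy` on that node usually shows whether the kubeconfig, certificates, or IPVS modules are the problem.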
Check the LVS state. You can see that an IPVS virtual service has been created: requests to 10.254.0.1:443 are forwarded to port 6443 on the three masters, 6443 being the kube-apiserver's secure port.
```shell
[root@k8s-node01 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 wrr
  -> 10.10.0.18:6443              Masq    1      0          0
  -> 10.10.0.19:6443              Masq    1      0          0
  -> 10.10.0.20:6443              Masq    1      0          0
```