This document deploys a Kubernetes cluster on the following master hosts:
k8s-master1: 10.0.0.21
k8s-master2: 10.0.0.22
k8s-master3: 10.0.0.23

Configuring the Kubernetes master cluster

A Kubernetes master node runs the following components:
kube-apiserver
kube-scheduler
kube-controller-manager

In this deployment the three components run on the same machines.
kube-scheduler, kube-controller-manager and kube-apiserver are tightly coupled in function;
only one kube-scheduler and one kube-controller-manager process may be active at a time, so when multiple instances run, a leader is chosen by election.

I. Install haproxy and keepalived

1. Install the packages on all three masters

yum install -y keepalived haproxy

2. Configure haproxy on all three masters to proxy the api-server
# mv /etc/haproxy/haproxy.cfg{,.bak} -v
# vim /etc/haproxy/haproxy.cfg 
global
    log /dev/log    local0
    log /dev/log    local1 notice
    chroot /var/lib/haproxy
    stats socket /var/run/haproxy-admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    nbproc 1

defaults
    log     global
    timeout connect 5000
    timeout client  10m
    timeout server  10m

listen  admin_stats
    bind 0.0.0.0:10080
    mode http
    log 127.0.0.1 local0 err
    stats refresh 30s
    stats uri /status
    stats realm welcome login\ Haproxy
    stats auth admin:123456
    stats hide-version
    stats admin if TRUE

listen kube-master
    bind 0.0.0.0:8443
    mode tcp
    option tcplog
    balance roundrobin
    server k8s-master1 10.0.0.21:6443 check inter 2000 fall 2 rise 2 weight 1
    server k8s-master2 10.0.0.22:6443 check inter 2000 fall 2 rise 2 weight 1
    server k8s-master3 10.0.0.23:6443 check inter 2000 fall 2 rise 2 weight 1
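
The configuration can be syntax-checked before starting the service; -c only parses the file and reports problems:

# haproxy -c -f /etc/haproxy/haproxy.cfg
Configuration file is valid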
systemctl enable haproxy
systemctl start haproxy
systemctl status haproxy
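
Once haproxy is up, you can also confirm the stats page answers, using the admin_stats credentials configured above:

# curl -s -u admin:123456 http://127.0.0.1:10080/status | head -3
(an HTML status report should be returned)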


3. Configure keepalived on all three masters

# mv /etc/keepalived/keepalived.conf{,.bak} -v
‘/etc/keepalived/keepalived.conf’ -> ‘/etc/keepalived/keepalived.conf.bak’

# vim /etc/keepalived/keepalived.conf 
global_defs {
    router_id k8s-master1             # router ID; best set to the local hostname, used to identify this instance
}

vrrp_script check-haproxy {
    script "killall -0 haproxy"
    interval 3
}

vrrp_instance VI-kube-master {
    state BACKUP
    nopreempt    # non-preempting; set only on the BACKUP node with the highest priority
    priority 120
    dont_track_primary
    interface eth0
    virtual_router_id 68
    advert_int 3
    track_script {
        check-haproxy
    }
    virtual_ipaddress {
        10.0.0.252    # the VIP; the api-server is reached through this address
    }
}

killall -0 haproxy checks whether the haproxy process on the node is still alive.
router_id and virtual_router_id identify the keepalived instances belonging to this HA group; if you run several keepalived HA groups, each group must use distinct values.
On the other two (backup) nodes, remove nopreempt and set priority to 110 and 100 respectively.

systemctl enable keepalived
systemctl start keepalived
systemctl status keepalived
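
With keepalived running on all three masters, exactly one of them should hold the VIP. A minimal check, assuming the interface is eth0 and the VIP 10.0.0.252 from the configuration above:

# ip addr show eth0 | grep -w 10.0.0.252    # prints the VIP on exactly one master
# nc -zv 10.0.0.252 8443                    # haproxy should accept the TCP connection
Note: until kube-apiserver is deployed, connections through 8443 are accepted but then reset, which is expected.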

II. Deploy the kubectl command-line tool

kubectl is the command-line management tool for a Kubernetes cluster; this part covers installing and configuring it.
By default kubectl reads the kube-apiserver address, certificate and user information from ~/.kube/config; without that file, kubectl commands will fail.
~/.kube/config only needs to be generated once; it can then be copied to the other masters.
 
1. Download kubectl

wget https://dl.k8s.io/v1.14/kubernetes-server-linux-amd64.tar.gz
tar -xf kubernetes-server-linux-amd64.tar.gz -C /usr/local/
cd /usr/local/kubernetes/server/bin
cp kube-apiserver kubeadm kube-controller-manager kubectl kube-scheduler /usr/local/bin -av
cd
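
A quick sanity check that the binaries are in place and on PATH (output abbreviated):

# kubectl version --client
Client Version: version.Info{Major:"1", Minor:"14", ...}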

2. Create the certificate signing request

cat > /tmp/certs/admin-csr.json <<EOF
{
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "O": "system:masters",
      "OU": "k8s Security",
      "L": "ChengDU",
      "ST": "SiChuan",
      "C": "CN"
    }
  ],
  "CN": "admin",
  "hosts": []
}
EOF



O is system:masters; when kube-apiserver receives this certificate it sets the request's Group to system:masters;
the predefined ClusterRoleBinding cluster-admin binds Group system:masters to the cluster-admin Role, which grants access to all APIs;
this certificate is only used by kubectl as a client certificate, so the hosts field is empty.

Generate the certificate and private key:

cd /tmp/certs
cfssl gencert -ca=/etc/k8s/ssl/ca.pem   \
-ca-key=/etc/k8s/ssl/ca-key.pem   \
-config=/etc/k8s/ssl/ca-config.json   \
-profile=kubernetes admin-csr.json | cfssljson -bare admin

2019/06/27 13:42:36 [INFO] generate received request
2019/06/27 13:42:36 [INFO] received CSR
2019/06/27 13:42:36 [INFO] generating key: rsa-2048
2019/06/27 13:42:37 [INFO] encoded CSR
2019/06/27 13:42:37 [INFO] signed certificate with serial number 11636747093070887055866227336130776563713097270
2019/06/27 13:42:37 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
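
Optionally inspect the resulting certificate to confirm the subject fields; for example with openssl (output will look roughly like the following):

# openssl x509 -in admin.pem -noout -subject
subject= /C=CN/ST=SiChuan/L=ChengDu/O=system:masters/OU=k8s Security/CN=admin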

3. Create the ~/.kube/config file (copy admin.pem and admin-key.pem to /etc/kubernetes/cert/ first, since the commands below reference them there)


kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=https://10.0.0.252:8443 \
  --kubeconfig=kubectl.kubeconfig

# Set client authentication parameters
kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/cert/admin.pem \
  --client-key=/etc/kubernetes/cert/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=kubectl.kubeconfig

# Set context parameters
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=kubectl.kubeconfig
  
# Set the default context
kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig
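
Before distributing the file, it can be inspected; the embedded certificates are shown as REDACTED/DATA+OMITTED:

# kubectl config view --kubeconfig=kubectl.kubeconfig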


4. Distribute the ~/.kube/config file


cp kubectl.kubeconfig ~/.kube/config -av
ssh root@k8s-master3 'mkdir -pv ~/.kube'
ssh root@k8s-master2 'mkdir -pv ~/.kube'
scp kubectl.kubeconfig k8s-master2:~/.kube/config
scp kubectl.kubeconfig k8s-master3:~/.kube/config

III. Deploy the api-server

1. Create the certificate signing request for kube-apiserver:
# cat > /tmp/certs/kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.0.0.21",
    "10.0.0.22",
    "10.0.0.23",
    "10.0.0.252",
    "10.254.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.shuyuan",
    "kubernetes.default.svc.zmjcd.cc"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
	 "names": [
    {
      "O": "system:masters",
      "OU": "k8s Security",
      "L": "ChengDU",
      "ST": "SiChuan",
      "C": "CN"
    }
  ]
  
}
EOF



The hosts field lists the IPs and domain names authorized to use this certificate; here that is the VIP, the apiserver node IPs, and the kubernetes service IP and domain names.
A domain name must not end with a dot (e.g. kubernetes.default.svc.cluster.local. is invalid), otherwise parsing fails with: x509: cannot parse dnsName "kubernetes.default.svc.cluster.local.".
If you use a cluster domain other than cluster.local, e.g. bqding.com, change the last two domain entries to kubernetes.default.svc.bqding and kubernetes.default.svc.bqding.com (here the domain is zmjcd.cc).
The IP entries are, in order, the master node IPs followed by the load balancer's internal and public IPs.


Generate the certificate and private key:

cfssl gencert -ca=/etc/k8s/ssl/ca.pem \
  -ca-key=/etc/k8s/ssl/ca-key.pem \
  -config=/etc/k8s/ssl/ca-config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
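
Optionally confirm that every intended IP and domain name made it into the certificate's Subject Alternative Names:

# openssl x509 -in kubernetes.pem -noout -text | grep -A1 'Subject Alternative Name'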
2. Copy the generated certificate and private key to the master nodes:

 cp kubernetes*.pem /etc/kubernetes/cert/ -av
 scp kubernetes*.pem k8s-master2:/etc/kubernetes/cert/
 scp kubernetes*.pem k8s-master3:/etc/kubernetes/cert/

3. Create the encryption config file
# cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: $(head -c 32 /dev/urandom | base64)
      - identity: {}
EOF

4. Distribute the encryption config file to the master nodes (copy the generated file itself rather than re-running the heredoc, so every master shares the same key)
cp encryption-config.yaml /etc/kubernetes/cert/ -av
scp encryption-config.yaml k8s-master2:/etc/kubernetes/cert/
scp encryption-config.yaml k8s-master3:/etc/kubernetes/cert/


5. Create the kube-apiserver systemd unit file
# cat > /etc/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
  --anonymous-auth=false \
  --experimental-encryption-provider-config=/etc/kubernetes/cert/encryption-config.yaml \
  --advertise-address=10.0.0.21 \
  --bind-address=10.0.0.21 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.254.0.0/16 \
  --service-node-port-range=30000-32700 \
  --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \
  --client-ca-file=/etc/kubernetes/cert/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \
  --kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \
  --service-account-key-file=/etc/kubernetes/cert/ca-key.pem \
  --etcd-cafile=/etc/kubernetes/cert/ca.pem \
  --etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \
  --etcd-servers=https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

--experimental-encryption-provider-config: enables encryption of secrets at rest;
--authorization-mode=Node,RBAC: enables the Node and RBAC authorization modes, rejecting unauthorized requests;
--enable-admission-plugins: enables the ServiceAccount and NodeRestriction admission plugins (among others);
--service-account-key-file: the public key used to verify ServiceAccount tokens; it pairs with the private key passed to kube-controller-manager via --service-account-private-key-file;
--tls-*-file: the certificate, private key and CA file used by the apiserver. --client-ca-file verifies the certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, etc.);
--kubelet-client-certificate, --kubelet-client-key: when set, the apiserver accesses the kubelet APIs over https; the user named in the client certificate (kubernetes, per the kubernetes*.pem certificates above) needs RBAC rules defined, otherwise calls to the kubelet API are rejected as unauthorized (see step 10 below);
--bind-address: must not be 127.0.0.1, otherwise the secure port 6443 is unreachable from other hosts;
--insecure-port=0: disables the insecure port (8080);
--service-cluster-ip-range: the Service cluster IP range;
--service-node-port-range: the NodePort port range;
--runtime-config=api/all=true: enables all API versions, e.g. autoscaling/v2alpha1;
--enable-bootstrap-token-auth: enables token authentication for kubelet bootstrap;
--apiserver-count=3: the number of apiservers running in the cluster; unlike kube-scheduler and kube-controller-manager, all apiserver instances are active at once rather than electing a leader;
--advertise-address and --bind-address are shown for k8s-master1 (10.0.0.21) and must be changed to the local IP on each master.


6. Distribute the kube-apiserver.service file to the other masters
 scp /etc/systemd/system/kube-apiserver.service k8s-master2:/etc/systemd/system/kube-apiserver.service
 scp /etc/systemd/system/kube-apiserver.service k8s-master3:/etc/systemd/system/kube-apiserver.service
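
The copied unit file still carries k8s-master1's addresses. A sketch of adjusting the two node-specific flags in place (assuming 10.0.0.21 appears only in --advertise-address and --bind-address, as in the file above):

ssh k8s-master2 "sed -i 's/--advertise-address=10.0.0.21/--advertise-address=10.0.0.22/; s/--bind-address=10.0.0.21/--bind-address=10.0.0.22/' /etc/systemd/system/kube-apiserver.service"
ssh k8s-master3 "sed -i 's/--advertise-address=10.0.0.21/--advertise-address=10.0.0.23/; s/--bind-address=10.0.0.21/--bind-address=10.0.0.23/' /etc/systemd/system/kube-apiserver.service"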

7. Create the log directory
mkdir -pv /var/log/kubernetes

8. Start the api-server service
 systemctl daemon-reload
 systemctl enable kube-apiserver
 systemctl restart kube-apiserver
9. Check the api-server and cluster status
# netstat -ptln | grep kube-apiserve
tcp        0      0 10.0.0.21:6443          0.0.0.0:*               LISTEN      22348/kube-apiserve

# kubectl cluster-info
Kubernetes master is running at https://10.0.0.252:8443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
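
With the apiserver running, encryption at rest (step 3 above) can also be verified: write a secret, then read it directly from etcd; the stored value should begin with k8s:enc:aescbc:v1: rather than plaintext. A sketch, assuming etcdctl is available on a master and reusing the apiserver's etcd client certificates:

# kubectl create secret generic enc-test --from-literal=foo=bar
# ETCDCTL_API=3 etcdctl --endpoints=https://10.0.0.11:2379 \
    --cacert=/etc/kubernetes/cert/ca.pem \
    --cert=/etc/kubernetes/cert/kubernetes.pem \
    --key=/etc/kubernetes/cert/kubernetes-key.pem \
    get /registry/secrets/default/enc-test
# kubectl delete secret enc-test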

10. Grant the kubernetes certificate access to the kubelet API
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

IV. Deploy kube-controller-manager

The cluster runs 3 instances of this component; after startup a leader is chosen by election and the other instances block. If the leader becomes unavailable, the remaining instances elect a new one, keeping the service available.
To secure communication, we first generate an x509 certificate and private key. kube-controller-manager uses the certificate in two cases:
when communicating with kube-apiserver's secure port;
when serving prometheus-format metrics on its secure port (https, 10257).
1. Create the kube-controller-manager certificate signing request:
# cat > kube-controller-manager-csr.json << EOF
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "10.0.0.21",
      "10.0.0.22",
      "10.0.0.23",
      "k8s-master1.zmjcd.cc",
      "k8s-master2.zmjcd.cc",
      "k8s-master3.zmjcd.cc",
      "k8s-master1",
      "k8s-master2",
      "k8s-master3"
    ],
    "names": [
    {
      "O": "system:kube-controller-manager",
      "OU": "k8s Security",
      "L": "ChengDU",
      "ST": "SiChuan",
      "C": "CN"
    }
  ]
}

EOF
The hosts list contains all kube-controller-manager node IPs and hostnames;
CN is system:kube-controller-manager;
O is system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs.

Generate the certificate and private key:

cfssl gencert -ca=/etc/k8s/ssl/ca.pem \
  -ca-key=/etc/k8s/ssl/ca-key.pem \
  -config=/etc/k8s/ssl/ca-config.json \
  -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
  
2. Distribute the generated certificate and private key to all master nodes

# scp kube-controller-manager*.pem k8s-master1:/etc/kubernetes/cert/
# scp kube-controller-manager*.pem k8s-master2:/etc/kubernetes/cert/
# scp kube-controller-manager*.pem k8s-master3:/etc/kubernetes/cert/

3. Create and distribute the kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=https://10.0.0.252:8443 \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=/etc/kubernetes/cert/kube-controller-manager.pem \
  --client-key=/etc/kubernetes/cert/kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context system:kube-controller-manager \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

Distribute kube-controller-manager.kubeconfig to all master nodes:

 scp kube-controller-manager.kubeconfig k8s-master1:/etc/kubernetes/cert/
 scp kube-controller-manager.kubeconfig k8s-master2:/etc/kubernetes/cert/
 scp kube-controller-manager.kubeconfig k8s-master3:/etc/kubernetes/cert/

4. Create and distribute the kube-controller-manager systemd unit file

# cat > /etc/systemd/system/kube-controller-manager.service  << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
  --address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/cert/kube-controller-manager.kubeconfig \
  --authentication-kubeconfig=/etc/kubernetes/cert/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.254.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \
  --experimental-cluster-signing-duration=8760h \
  --root-ca-file=/etc/kubernetes/cert/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-use-rest-clients=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

--address: the insecure http endpoint (port 10252, /metrics) binds to 127.0.0.1, so it is only reachable locally; the secure https endpoint defaults to port 10257 on all interfaces (see the netstat output below);
--kubeconfig: path to the kubeconfig file kube-controller-manager uses to connect to and authenticate against kube-apiserver;
--cluster-signing-*-file: the CA certificate and key used to sign certificates created through TLS Bootstrap;
--experimental-cluster-signing-duration: validity period of TLS Bootstrap certificates;
--root-ca-file: the CA certificate placed into container ServiceAccounts, used to verify the kube-apiserver certificate;
--service-account-private-key-file: the private key used to sign ServiceAccount tokens; it must pair with the public key passed to kube-apiserver via --service-account-key-file;
--service-cluster-ip-range: the Service cluster IP range; must match the same flag on kube-apiserver;
--leader-elect=true: enables leader election; the elected leader does the work while the other instances block;
--feature-gates=RotateKubeletServerCertificate=true: enables automatic rotation of kubelet server certificates;
--controllers=*,bootstrapsigner,tokencleaner: the list of controllers to enable; tokencleaner cleans up expired bootstrap tokens;
--horizontal-pod-autoscaler-*: custom-metrics-related flags, supporting autoscaling/v2alpha1;
--tls-cert-file, --tls-private-key-file: the server certificate and key used when serving metrics over https;
--use-service-account-credentials=true: each controller uses its own ServiceAccount credentials when calling the apiserver, so RBAC can be applied per controller.
Distribute the kube-controller-manager systemd unit file:

# scp /etc/systemd/system/kube-controller-manager.service k8s-master2:/etc/systemd/system/kube-controller-manager.service
# scp /etc/systemd/system/kube-controller-manager.service k8s-master3:/etc/systemd/system/kube-controller-manager.service
5. Start the kube-controller-manager service

 systemctl daemon-reload
 systemctl enable kube-controller-manager
 systemctl start kube-controller-manager
 systemctl status kube-controller-manager
 

6. Check the kube-controller-manager service
# netstat -lnpt|grep kube-controll
tcp        0      0 127.0.0.1:10252         0.0.0.0:*               LISTEN      4779/kube-controlle 
tcp6       0      0 :::10257                :::*                    LISTEN      4779/kube-controlle 
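
The local insecure endpoint on 10252 doubles as a quick health check:

# curl -s http://127.0.0.1:10252/healthz
ok
# curl -s http://127.0.0.1:10252/metrics | head -3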


7. Check the current kube-controller-manager leader
# kubectl get endpoints kube-controller-manager --namespace=kube-system  -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master1_ab1fd92f-7779-48ff-a244-a2c50aa7f6bf","leaseDurationSeconds":15,"acquireTime":"2019-06-27T09:34:23Z","renewTime":"2019-06-27T09:35:23Z","leaderTransitions":0}'
  creationTimestamp: "2019-06-27T09:34:23Z"
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "3453"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: 0cdeb0f6-f8bb-4898-b559-d32f528e22c3
 
As shown, the current leader is the k8s-master1 node.

V. Deploy kube-scheduler

The cluster runs 3 instances of this component; after startup a leader is chosen by election and the other instances block. If the leader becomes unavailable, the remaining instances elect a new one, keeping the service available.
To secure communication, we first generate an x509 certificate and private key. kube-scheduler uses the certificate when communicating with kube-apiserver's secure port. (Its metrics endpoint in this setup is plain http on port 10251; see below.)

1. Create the kube-scheduler certificate signing request

# cat > kube-scheduler-csr.json << EOF
{
    "CN": "system:kube-scheduler",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "10.0.0.21",
      "10.0.0.22",
      "10.0.0.23",
	  "k8s-master1.zmjcd.cc",
	  "k8s-master2.zmjcd.cc",
	  "k8s-master3.zmjcd.cc",
	  "k8s-master1",
	  "k8s-master2",
	  "k8s-master3"
    ],
 "names": [
    {
      "O": "system:kube-scheduler",
      "OU": "k8s Security",
      "L": "ChengDU",
      "ST": "SiChuan",
      "C": "CN"
    }
  ]
}
EOF


The hosts list contains all kube-scheduler node IPs and hostnames;
CN is system:kube-scheduler;
O is system:kube-scheduler; the built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs.




Generate the certificate and private key:

cfssl gencert -ca=/etc/k8s/ssl/ca.pem \
  -ca-key=/etc/k8s/ssl/ca-key.pem \
  -config=/etc/k8s/ssl/ca-config.json \
  -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

# scp kube-scheduler*.pem k8s-master1:/etc/kubernetes/cert/
# scp kube-scheduler*.pem k8s-master2:/etc/kubernetes/cert/
# scp kube-scheduler*.pem k8s-master3:/etc/kubernetes/cert/

2. Create and distribute the kube-scheduler.kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=https://10.0.0.252:8443 \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
  --client-certificate=/etc/kubernetes/cert/kube-scheduler.pem \
  --client-key=/etc/kubernetes/cert/kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context system:kube-scheduler \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
The certificate and private key created in the previous step, together with the kube-apiserver address, are embedded in the kubeconfig file.


Distribute the kubeconfig to all master nodes:
scp kube-scheduler.kubeconfig k8s-master1:/etc/kubernetes/cert/
scp kube-scheduler.kubeconfig k8s-master2:/etc/kubernetes/cert/
scp kube-scheduler.kubeconfig k8s-master3:/etc/kubernetes/cert/


3. Create and distribute the kube-scheduler systemd unit file
# cat > /etc/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
  --address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/cert/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

--address: the http /metrics endpoint listens on 127.0.0.1:10251; this kube-scheduler configuration does not serve these requests over https;
--kubeconfig: path to the kubeconfig file kube-scheduler uses to connect to and authenticate against kube-apiserver;
--leader-elect=true: enables leader election; the elected leader does the work while the other instances block.

Distribute the systemd unit file to all master nodes:

 scp /etc/systemd/system/kube-scheduler.service k8s-master2:/etc/systemd/system/kube-scheduler.service
 scp /etc/systemd/system/kube-scheduler.service k8s-master3:/etc/systemd/system/kube-scheduler.service

4. Start the kube-scheduler service

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler


5. Check the ports kube-scheduler listens on

# netstat -lnpt|grep kube-sche
tcp        0      0 127.0.0.1:10251         0.0.0.0:*               LISTEN      17921/kube-schedule
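
As with the controller manager, the local http endpoint gives a quick health signal:

# curl -s http://127.0.0.1:10251/healthz
ok
# curl -s http://127.0.0.1:10251/metrics | head -3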

6. Check the current kube-scheduler leader
# kubectl get endpoints kube-scheduler --namespace=kube-system  -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master1_db4b2dac-5a2f-4e71-a125-e311464fb77a","leaseDurationSeconds":15,"acquireTime":"2019-06-27T09:54:31Z","renewTime":"2019-06-27T09:55:37Z","leaderTransitions":0}'
  creationTimestamp: "2019-06-27T09:54:31Z"
  name: kube-scheduler
  namespace: kube-system
  resourceVersion: "4461"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
  uid: c76ed8b4-5ba3-4248-a0c7-23615cbbdd36

As shown, the current leader is the k8s-master1 node.

VI. Verify on all master nodes that everything recovers correctly

Stop the services on a node, confirm the cluster stays available (the VIP moves and a new leader is elected), then start them again:
systemctl stop kube-scheduler 
systemctl stop kube-controller-manager
systemctl stop kube-apiserver
systemctl stop docker
systemctl stop flanneld

systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
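
To exercise the HA path end to end, stop keepalived (or haproxy) on whichever node currently holds the VIP, then confirm the address moves and kubectl still answers through the VIP. A sketch, assuming k8s-master1 holds 10.0.0.252:

# systemctl stop keepalived              # on the current VIP holder
# ip addr show eth0 | grep -w 10.0.0.252 # run on the other masters; one of them should now hold the VIP
# kubectl get cs                         # still served via https://10.0.0.252:8443
# systemctl start keepalived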