The three K8s master components:

  • kube-apiserver
  • kube-controller-manager
  • kube-scheduler

The kubernetes master node runs the following components: kube-apiserver, kube-scheduler, and kube-controller-manager. kube-scheduler and kube-controller-manager can run in cluster mode: leader election picks one working process while the remaining processes block, which is what makes the master usable in a high-availability deployment.
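In an HA cluster you can see which replica currently holds the lock: the leader records itself in the `control-plane.alpha.kubernetes.io/leader` annotation on an Endpoints object in kube-system (for example via `kubectl -n kube-system get endpoints kube-scheduler -o yaml`). A minimal sketch that extracts holderIdentity from a hypothetical sample of that annotation value:

```shell
# Hypothetical sample of the leader-election annotation JSON; in a real
# cluster this comes from the kube-scheduler Endpoints object.
# holderIdentity names the elected (working) process.
LEADER='{"holderIdentity":"k8s.master_7a6ef3c1","leaseDurationSeconds":15,"renewTime":"2019-05-28T16:06:23Z"}'
echo "$LEADER" | sed -E 's/.*"holderIdentity":"([^"]+)".*/\1/'
# prints: k8s.master_7a6ef3c1
```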

GitHub repository

https://github.com/kubernetes/kubernetes

Basic flow diagram and basic function diagram (figures omitted)

Installation steps

  • Download the files
  • Generate the certificates
  • Create the TLS Bootstrapping Token
  • Deploy the kube-apiserver component
  • Deploy the kube-scheduler component
  • Deploy the kube-controller-manager component
  • Verify the services

Download and extract the files

wget https://dl.k8s.io/v1.13.6/kubernetes-server-linux-amd64.tar.gz
[root@k8s ~]# mkdir /opt/kubernetes/{bin,cfg,ssl} -p
[root@k8s ~]# tar zxf kubernetes-server-linux-amd64.tar.gz 
[root@k8s ~]# cd kubernetes/server/bin/
[root@k8s bin]# cp kube-scheduler kube-apiserver kube-controller-manager /opt/kubernetes/bin/
[root@k8s bin]# cp kubectl /usr/bin/
[root@k8s bin]# 
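Before extracting, it is worth verifying the tarball against the sha256 checksum published in the v1.13.6 release notes (the official checksum value is not reproduced here). A sketch of the `sha256sum -c` workflow, demonstrated on a placeholder file so the commands run anywhere:

```shell
# Placeholder file standing in for the real download; substitute the actual
# tarball and the official checksum from the release notes.
printf 'demo' > /tmp/kubernetes-server-linux-amd64.tar.gz
sha256sum /tmp/kubernetes-server-linux-amd64.tar.gz > /tmp/expected.sha256
sha256sum -c /tmp/expected.sha256
# prints: /tmp/kubernetes-server-linux-amd64.tar.gz: OK
```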

Create the Kubernetes CA certificate

  • Create the CA config file
cd  /opt/kubernetes/ssl/
cat << EOF | tee /opt/kubernetes/ssl/ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
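The 87600h expiry used above is simply ten years, long enough that the CA and the certificates it signs will not expire during a normal cluster lifetime:

```shell
# 87600 hours / 24 hours per day / 365 days per year = 10 years
echo $(( 87600 / 24 / 365 ))   # prints: 10
```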


cat << EOF | tee /opt/kubernetes/ssl/ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
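cfssl reports a fairly terse error on malformed JSON, so linting the CSR files before running it can save a round trip. A sketch assuming python3 is installed, shown on a scratch file; it works the same way on ca-config.json and ca-csr.json:

```shell
# Scratch stand-in for a CSR file; point json.tool at the real files
# under /opt/kubernetes/ssl/ in practice.
printf '{"CN":"kubernetes"}' > /tmp/csr-lint.json
python3 -m json.tool /tmp/csr-lint.json > /dev/null && echo "valid JSON"
```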
  • Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
[root@k8s ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2019/05/28 14:47:18 [INFO] generating a new CA key and certificate from CSR
2019/05/28 14:47:18 [INFO] generate received request
2019/05/28 14:47:18 [INFO] received CSR
2019/05/28 14:47:18 [INFO] generating key: rsa-2048
2019/05/28 14:47:18 [INFO] encoded CSR
2019/05/28 14:47:19 [INFO] signed certificate with serial number 34219464473634319112180195944445301722929678647
[root@k8s ssl]# 

  • Create the apiserver certificate
cat << EOF | tee server-csr.json
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "10.0.52.13",
      "10.0.52.7",
      "10.0.52.8",
      "10.0.52.9",
      "10.0.52.10",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

[root@k8s ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
2019/05/28 15:03:31 [INFO] generate received request
2019/05/28 15:03:31 [INFO] received CSR
2019/05/28 15:03:31 [INFO] generating key: rsa-2048
2019/05/28 15:03:31 [INFO] encoded CSR
2019/05/28 15:03:31 [INFO] signed certificate with serial number 114040551556369232239873744650692828468613738631
2019/05/28 15:03:31 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s ssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server.csr  server-csr.json  server-key.pem  server.pem
[root@k8s ssl]# 

In the hosts field of server-csr.json, the entries "10.0.52.13", "10.0.52.7", "10.0.52.8", "10.0.52.9", and "10.0.52.10" are site-specific IPs that you set yourself; the remaining entries are built-in service names and do not need to change. The list must include your k8s master's IP, and in an HA setup it must include every master IP plus the load balancer IP, otherwise clients will be unable to connect to the apiserver.
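To confirm which hosts a finished certificate actually covers, you can list its SANs with openssl (against the real file: `openssl x509 -in /opt/kubernetes/ssl/server.pem -noout -ext subjectAltName`). Sketched below on a throwaway self-signed certificate so it is runnable anywhere; assumes OpenSSL 1.1.1+ for the -addext and -ext options:

```shell
# Throwaway cert carrying a SAN list similar to server-csr.json
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=kubernetes" \
  -addext "subjectAltName=IP:10.0.52.13,DNS:kubernetes.default.svc.cluster.local" \
  -keyout /tmp/san-key.pem -out /tmp/san-cert.pem 2>/dev/null
# Print the SANs the certificate actually contains
openssl x509 -in /tmp/san-cert.pem -noout -ext subjectAltName
```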

Create the TLS Bootstrapping Token

[root@k8s ssl]# BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
[root@k8s ssl]# cat << EOF | tee /opt/kubernetes/cfg/token.csv
> ${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
> EOF
7d558bb3a5206cf78f881de7d7b82ca6,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@k8s ssl]# cat /opt/kubernetes/cfg/token.csv
7d558bb3a5206cf78f881de7d7b82ca6,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@k8s ssl]# 
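The token is just 16 random bytes rendered as 32 hexadecimal characters. Regenerating one here (for illustration only; this is not the token written to token.csv above) and checking its shape:

```shell
# 16 random bytes -> hex words -> spaces stripped = 32 hex characters
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "$TOKEN" | grep -Eq '^[0-9a-f]{32}$' && echo "token format ok"
```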

Deploy the kube-apiserver component

1. Create the kube-apiserver configuration file

In the apiserver configuration, set etcd-servers to the etcd endpoints, and set bind-address and advertise-address to the current master node's IP. For token-auth-file and the various .pem files, point each flag at the corresponding file path.

cat << EOF | tee /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=https://10.0.52.13:2379,https://10.0.52.14:2379,https://10.0.52.6:2379 \\
--bind-address=10.0.52.13 \\
--secure-port=6443 \\
--advertise-address=10.0.52.13 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
2. Create the kube-apiserver systemd unit file
cat << EOF | tee /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
3. Start the service
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver

[root@k8s ssl]# ps -ef |grep kube-apiserver
root     19404     1 89 15:50 ?        00:00:09 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://10.0.52.13:2379,https://10.0.52.14:2379,https://10.0.52.6:2379 --bind-address=10.0.52.13 --secure-port=6443 --advertise-address=10.0.52.13 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
root     19418 19122  0 15:50 pts/1    00:00:00 grep --color=auto kube-apiserver
[root@k8s ssl]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-05-28 15:50:21 CST; 26s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 19404 (kube-apiserver)
   Memory: 221.2M
   CGroup: /system.slice/kube-apiserver.service
           └─19404 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://10.0.52.13:2379,https://10.0.52.14:2379,https://10.0.52.6:2379 --bind-address=10.0.52.13 --secure-port=6443 --advertise-address=10.0.52.13 --allow-privileged=true --...

May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.057378   19404 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.709711ms) 200 [kube-apiserver/v1.13.6 (linux/amd64) kubernetes/abdda3f 10.0.52.13:41238]
May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.076300   19404 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.984796ms) 201 [kube-apiserver/v1.13.6 (linux/amd64) kubernetes/abdda3f 10.0.52.13:41238]
May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.076874   19404 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.095073   19404 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.887241ms) 404 [kube-apiserver/v1.13.6 (linux/amd64)...f 10.0.52.13:41238]
May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.097100   19404 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.654384ms) 200 [kube-apiserver/v1.13.6 (linux/amd64) kubernetes/abdda3f 10.0.52.13:41238]
May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.115586   19404 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.390436ms) 201 [kube-apiserver/v1.13.6 (linux/amd64) kubernetes/abdda3f 10.0.52.13:41238]
May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.115766   19404 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.134609   19404 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.458696ms) 404 [kube-apiserver/v1.13.6 (linux/amd64) ...f 10.0.52.13:41238]
May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.136356   19404 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.420447ms) 200 [kube-apiserver/v1.13.6 (linux/amd64) kubernetes/abdda3f 10.0.52.13:41238]
May 28 15:50:32 k8s.master kube-apiserver[19404]: I0528 15:50:32.155628   19404 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.433057ms) 201 [kube-apiserver/v1.13.6 (linux/amd64) kubernetes/abdda3f 10.0.52.13:41238]
Hint: Some lines were ellipsized, use -l to show in full.

[root@k8s ssl]# ps -ef |grep -v grep |grep kube-apiserver 
root     19404     1  1 15:50 ?        00:00:25 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://10.0.52.13:2379,https://10.0.52.14:2379,https://10.0.52.6:2379 --bind-address=10.0.52.13 --secure-port=6443 --advertise-address=10.0.52.13 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
[root@k8s ssl]# 

[root@k8s ssl]# netstat -tulpn |grep kube-apiserve
tcp        0      0 10.0.52.13:6443         0.0.0.0:*               LISTEN      19404/kube-apiserve 
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      19404/kube-apiserve 
[root@k8s ssl]# 


Deploy the kube-scheduler component

1. Create the kube-scheduler configuration file

--master: the address kube-scheduler uses to connect to kube-apiserver. --leader-elect=true: enables leader election for cluster mode; the node elected leader does the work while the other nodes block.

cat << EOF | tee /opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=127.0.0.1:8080 \\
--leader-elect"
EOF
2. Create the kube-scheduler systemd unit file
cat << EOF | tee /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
3. Start & verify
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
[root@k8s ssl]# systemctl status kube-scheduler.service
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-05-28 16:06:21 CST; 9s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 19524 (kube-scheduler)
   Memory: 10.8M
   CGroup: /system.slice/kube-scheduler.service
           └─19524 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect

May 28 16:06:22 k8s.master kube-scheduler[19524]: I0528 16:06:22.942604   19524 shared_informer.go:123] caches populated
May 28 16:06:23 k8s.master kube-scheduler[19524]: I0528 16:06:23.042738   19524 shared_informer.go:123] caches populated
May 28 16:06:23 k8s.master kube-scheduler[19524]: I0528 16:06:23.142882   19524 shared_informer.go:123] caches populated
May 28 16:06:23 k8s.master kube-scheduler[19524]: I0528 16:06:23.243024   19524 shared_informer.go:123] caches populated
May 28 16:06:23 k8s.master kube-scheduler[19524]: I0528 16:06:23.243057   19524 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
May 28 16:06:23 k8s.master kube-scheduler[19524]: I0528 16:06:23.343173   19524 shared_informer.go:123] caches populated
May 28 16:06:23 k8s.master kube-scheduler[19524]: I0528 16:06:23.343195   19524 controller_utils.go:1034] Caches are synced for scheduler controller
May 28 16:06:23 k8s.master kube-scheduler[19524]: I0528 16:06:23.343249   19524 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-scheduler...
May 28 16:06:23 k8s.master kube-scheduler[19524]: I0528 16:06:23.351601   19524 leaderelection.go:214] successfully acquired lease kube-system/kube-scheduler
May 28 16:06:23 k8s.master kube-scheduler[19524]: I0528 16:06:23.451916   19524 shared_informer.go:123] caches populated

Deploy the kube-controller-manager component

1. Create the kube-controller-manager configuration file
cat << EOF | tee /opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=127.0.0.1:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF
2. Create the kube-controller-manager systemd unit file
cat << EOF | tee /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
3. Start & verify
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager

[root@k8s ssl]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-05-28 16:18:52 CST; 10s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 19606 (kube-controller)
   Memory: 31.9M
   CGroup: /system.slice/kube-controller-manager.service
           └─19606 /opt/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.0.0.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/opt/kubernetes/ssl/ca...

May 28 16:18:55 k8s.master kube-controller-manager[19606]: I0528 16:18:55.140091   19606 controller_utils.go:1034] Caches are synced for garbage collector controller
May 28 16:18:55 k8s.master kube-controller-manager[19606]: I0528 16:18:55.140098   19606 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
May 28 16:18:55 k8s.master kube-controller-manager[19606]: I0528 16:18:55.168470   19606 request.go:530] Throttling request took 1.399047743s, request: GET:http://127.0.0.1:8080/apis/apiextensions.k8s.io/v1beta1?timeout=32s
May 28 16:18:55 k8s.master kube-controller-manager[19606]: I0528 16:18:55.169456   19606 resource_quota_controller.go:427] syncing resource quota controller with updated resources from discovery: map[/v1, Resource=replicationcontrollers:{} extensio...beta1, Resource=eve
May 28 16:18:55 k8s.master kube-controller-manager[19606]: I0528 16:18:55.169593   19606 resource_quota_monitor.go:180] QuotaMonitor unable to use a shared informer for resource "extensions/v1beta1, Resource=networkpolicies": no informer found for ...rce=networkpolicies
May 28 16:18:55 k8s.master kube-controller-manager[19606]: I0528 16:18:55.169632   19606 resource_quota_monitor.go:243] quota synced monitors; added 0, kept 29, removed 0
May 28 16:18:55 k8s.master kube-controller-manager[19606]: E0528 16:18:55.169647   19606 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable ...ce=networkpolicies"
May 28 16:18:55 k8s.master kube-controller-manager[19606]: I0528 16:18:55.225106   19606 shared_informer.go:123] caches populated
May 28 16:18:55 k8s.master kube-controller-manager[19606]: I0528 16:18:55.225138   19606 controller_utils.go:1034] Caches are synced for garbage collector controller
May 28 16:18:55 k8s.master kube-controller-manager[19606]: I0528 16:18:55.225146   19606 garbagecollector.go:245] synced garbage collector
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s ssl]# 

Verify the master service status

[root@k8s ssl]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
[root@k8s ssl]# 

If you see output like the above, the master components were installed successfully!
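The success criterion in that table can also be checked mechanically: every row's STATUS column should read Healthy. An offline illustration using the output above as an embedded sample:

```shell
# Sample of the `kubectl get cs` output from above
cat << 'EOF' > /tmp/cs.txt
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
EOF
# Count rows whose STATUS is not Healthy; 0 means the master is up.
awk 'NR > 1 && $2 != "Healthy"' /tmp/cs.txt | wc -l   # prints: 0
```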
