1. Basic Environment Preparation

1.1. Server Planning

Host      IP              Role        Components
kube102   192.168.0.102   k8s-master  etcd, kube-apiserver, kube-controller-manager, kube-scheduler
kube101   192.168.0.101   k8s-node    etcd, kubelet, docker, kube-proxy
kube100   192.168.0.100   k8s-node    etcd, kubelet, docker, kube-proxy
kube99    192.168.0.99    k8s-node    etcd, kubelet, docker, kube-proxy

1.2. Disable the Firewall

systemctl stop firewalld.service 
systemctl disable firewalld.service 
Note: the first command stops the firewall now; the second keeps it from starting at boot (permanently disabled).

1.3. Disable SELinux

vim /etc/selinux/config 
Change the line to: SELINUX=disabled
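
        Editing the config file only takes effect after a reboot; to switch SELinux off for the running system as well:

setenforce 0     # permissive mode immediately
getenforce       # verify: Permissive now, Disabled after a reboot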

1.4. Edit /etc/hostname

vim /etc/hostname
Change it to: kube99
Note: on each machine, set the hostname matching its IP suffix.
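
        Alternatively, hostnamectl writes the file and applies the new name in one step:

sudo hostnamectl set-hostname kube99   # use each machine's own name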

1.5. Add the kube User

        Create the user

adduser kube
passwd kube     
chmod u+w /etc/sudoers

        Grant the kube user sudo privileges

vim /etc/sudoers   # add the following line for the kube user
kube    ALL=(ALL)       NOPASSWD: ALL
chmod u-w /etc/sudoers

1.6. Edit /etc/hosts

vim /etc/hosts
192.168.0.102  kube102
192.168.0.101  kube101
192.168.0.100  kube100
192.168.0.99   kube99

1.7. Configure Passwordless SSH Login

        Generate a key pair

ssh-keygen -t rsa

        Distribute the public key

ssh-copy-id  -i ~/.ssh/id_rsa.pub kube101
ssh-copy-id  -i ~/.ssh/id_rsa.pub kube100
ssh-copy-id  -i ~/.ssh/id_rsa.pub kube99

       Perform all of the steps above on every machine.
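
        A quick check that passwordless login works (assuming the kube user on the remote host):

ssh kube@kube101 hostname   # should print kube101 without a password prompt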

2. Install Docker

2.1. Remove Old Versions

yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-selinux \
                  docker-engine-selinux \
                  docker-engine

2.2. Install yum-utils

yum install yum-utils 

2.3. Install Other Dependencies

yum install -y yum-utils device-mapper-persistent-data lvm2

2.4. Add the Docker Repository

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

2.5. Download the Docker Packages

     Create a directory

mkdir -p /data/mydepot

     Download the packages:

yumdownloader docker-ce --resolve --destdir=/data/mydepot

2.6. Run the Install Command

cd /data/mydepot
yum install *.rpm

2.7. Install Docker on the Other Machines

      Copy the packages in that directory to the other servers

scp -r /data/mydepot   kube@kube99:~/
scp -r /data/mydepot   kube@kube100:~/
scp -r /data/mydepot   kube@kube101:~/

       Then run on each of the other machines (the packages were copied into the kube user's home directory):

cd ~/mydepot
sudo yum install *.rpm

2.8. Start Docker

      Start it on every node machine

sudo systemctl start docker
sudo systemctl enable docker

2.9. Test Docker

sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete 
Digest: sha256:c3b4ada4687bbaa170745b3e4dd8ac3f194ca95b2d0518b417fb47e5879d9b5f
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

3. Generate etcd Certificates

3.1. Create Directories

sudo mkdir /k8s/etcd/{bin,cfg,ssl} -p
sudo chmod 777 /k8s/
cd /k8s/etcd/ssl

3.2. Download and Install cfssl

wget  https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget  https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64  cfssljson_linux-amd64  cfssl_linux-amd64
sudo cp  cfssl-certinfo_linux-amd64  /usr/local/bin/cfssl-certinfo
sudo cp cfssljson_linux-amd64  /usr/local/bin/cfssljson
sudo cp cfssl_linux-amd64  /usr/local/bin/cfssl 
cd /usr/local/bin && sudo chmod +x cfssl cfssljson cfssl-certinfo
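
        Verify the tools are on the PATH:

cfssl version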

3.3. etcd CA Configuration

cd /k8s/etcd/ssl/
cat << EOF |sudo  tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "etcd": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

3.4. etcd CA Certificate Signing Request

cd /k8s/etcd/ssl/
cat << EOF |sudo  tee ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

3.5. etcd Server Certificate Signing Request

cd /k8s/etcd/ssl/
cat << EOF | sudo  tee server-csr.json
{
    "CN": "etcd",
    "hosts": [
    "127.0.0.1",
    "localhost",
    "192.168.0.102",
    "192.168.0.101",
    "192.168.0.100",
    "192.168.0.99"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
    }]
}
EOF

3.6. Generate the etcd CA Certificate and Key (Initialize the CA)

cfssl gencert -initca ca-csr.json | cfssljson -bare ca 

3.7. Generate the Server Certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd server-csr.json | cfssljson -bare server
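
        The ssl directory should now contain ca.pem, ca-key.pem, server.pem and server-key.pem; cfssl-certinfo can inspect the generated certificate:

cfssl-certinfo -cert server.pem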

3.8. Distribute the Certificates

scp -r /k8s/etcd/ssl kube@kube99:/k8s/etcd/ssl
scp -r /k8s/etcd/ssl kube@kube100:/k8s/etcd/ssl
scp -r /k8s/etcd/ssl kube@kube101:/k8s/etcd/ssl
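
        scp can only copy into directories that already exist, so create the target directories on the other machines first (this relies on the NOPASSWD sudo set up in section 1.5):

ssh kube@kube99 "sudo mkdir -p /k8s/etcd/{bin,cfg,ssl} && sudo chmod 777 /k8s"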

4. Install etcd

4.1. Download the etcd Package

        Download

wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz

        Extract, copy etcd and etcdctl to /k8s/etcd/bin, and make them executable

tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
cp etcd etcdctl /k8s/etcd/bin/
cd /k8s/etcd/bin/
sudo chmod +x *

        Add /k8s/etcd/bin to the PATH

export PATH=/k8s/etcd/bin/:$PATH
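
        The export only lasts for the current shell; to persist it across logins (assuming a bash login shell), append it to ~/.bash_profile:

echo 'export PATH=/k8s/etcd/bin/:$PATH' >> ~/.bash_profile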

4.2. Create the Main etcd Configuration File

vim /k8s/etcd/cfg/etcd.conf 

#[Member]
# node name
ETCD_NAME="etcd01"
# data directory
ETCD_DATA_DIR="/data2/etcd"
# address for listening to other etcd members
ETCD_LISTEN_PEER_URLS="https://192.168.0.102:2380"
# addresses for listening to clients
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.102:2379,http://127.0.0.1:2379"
#[Clustering]
# peer address advertised to the other etcd members
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.102:2380"
# client address advertised to clients
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.102:2379"
# initial cluster membership
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.0.102:2380,etcd02=https://192.168.0.101:2380,etcd03=https://192.168.0.100:2380,etcd04=https://192.168.0.99:2380"
# initial cluster token
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
# initial cluster state; "new" means a brand-new cluster
ETCD_INITIAL_CLUSTER_STATE="new"
#[Security]
ETCD_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
#ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
#ETCD_PEER_CLIENT_CERT_AUTH="true"

4.3. Create the Data Directory

        Remove any old data first; otherwise etcd will not pick up the latest configuration on startup

sudo rm -rf /data2/etcd 

        Create the directory

sudo mkdir /data2/etcd -p

4.4. Start the Service and Check Its Status

         Before starting the etcd service, apply the same configuration on the other nodes, adjusting ETCD_NAME and the IP addresses for each. Once that is done, create the systemd unit below, then run the following on every machine
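
        systemctl needs a unit file for etcd, which is not shown above. Here is a minimal sketch of /usr/lib/systemd/system/etcd.service, assuming the binary and configuration paths used in this guide (etcd reads the ETCD_* variables directly from the EnvironmentFile):

sudo vim /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/k8s/etcd/cfg/etcd.conf
ExecStart=/k8s/etcd/bin/etcd
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target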

sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
sudo systemctl status etcd

        A status of running means the service started correctly

4.5. Check the etcd Cluster

/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://192.168.0.102:2379,https://192.168.0.101:2379,https://192.168.0.100:2379,https://192.168.0.99:2379" cluster-health
member e859fd91c72423b is healthy: got healthy result from https://192.168.0.99:2379
member 2be632abc5c80051 is healthy: got healthy result from https://192.168.0.102:2379
member 3966f8f8613ac7f2 is healthy: got healthy result from https://192.168.0.100:2379
member 424e4200315b9d7f is healthy: got healthy result from https://192.168.0.101:2379
cluster is healthy

5. Install Flannel

        Flannel is a network fabric for Kubernetes designed by the CoreOS team. In short, it gives the Docker containers created on the cluster's different node hosts virtual IP addresses that are unique across the whole cluster. In the default Docker configuration, each node's Docker daemon assigns container IPs independently: containers on the same node can reach one another, but containers on different nodes cannot communicate. Flannel re-plans how IP addresses are used by all nodes in the cluster, so that containers on different nodes receive non-overlapping addresses that appear to be "on the same internal network" and can communicate directly over those internal IPs.
       Flannel uses etcd to store its configuration data and subnet allocations. When the flannel daemon starts, it first retrieves the configuration and the list of subnets in use, picks an available subnet, and tries to register it. etcd also stores the host IP corresponding to each subnet. flannel uses etcd's watch mechanism to monitor changes to all entries under /coreos.com/network/subnets (the configured prefix) and maintains a routing table from them. For performance, flannel optimizes the universal TAP/TUN device, proxying IP fragmentation between the TUN device and UDP.

        Architecture diagram (image omitted)

5.1. Register a Network Range for Flannel

/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://192.168.0.102:2379,https://192.168.0.101:2379,https://192.168.0.100:2379,https://192.168.0.99:2379" set /k8s/network/config  '{ "Network": "10.254.0.0/16", "Backend": {"Type": "vxlan"}}'
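
        To verify the key was written, read it back (same TLS options as above):

/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://192.168.0.102:2379" get /k8s/network/config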

5.2. Create the k8s Installation Directories

        Create the installation directories:

sudo mkdir /k8s/kubernetes/{bin,cfg,ssl} -p

        Configure the environment variable

export PATH=/k8s/kubernetes/bin/:$PATH 

5.3. Download Flannel

        Download the package from https://github.com/coreos/flannel/releases

wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz

         Extract and move the binaries to /k8s/kubernetes/bin

tar -zxvf flannel-v0.11.0-linux-amd64.tar.gz
mv flanneld  /k8s/kubernetes/bin/
mv mk-docker-opts.sh /k8s/kubernetes/bin/

5.4. Configure Flannel

vim /k8s/kubernetes/cfg/flanneld.conf

FLANNEL_ETCD_ENDPOINTS="https://192.168.0.102:2379,https://192.168.0.101:2379,https://192.168.0.100:2379,https://192.168.0.99:2379"
FLANNEL_ETCD_PREFIX="/k8s/network"
FLANNEL_OPTIONS="-etcd-cafile=/k8s/etcd/ssl/ca.pem -etcd-certfile=/k8s/etcd/ssl/server.pem -etcd-keyfile=/k8s/etcd/ssl/server-key.pem"

5.5. Create the flanneld Service

sudo vim /usr/lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/k8s/kubernetes/cfg/flanneld.conf
#ExecStart=/k8s/kubernetes/bin/flanneld  -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS}  $FLANNEL_OPTIONS
#ExecStart=/k8s/kubernetes/bin/flanneld --ip-masq  $FLANNEL_OPTIONS
ExecStart=/k8s/kubernetes/bin/flanneld    -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS}   -etcd-prefix=${FLANNEL_ETCD_PREFIX}   $FLANNEL_OPTIONS
ExecStartPost=/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env 
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

        Start Flannel

sudo systemctl daemon-reload
sudo systemctl enable flanneld
sudo systemctl start flanneld
sudo systemctl status flanneld

        A status of running means it started correctly
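
        You can also confirm that flannel created its VXLAN interface and wrote the subnet file (the interface name flannel.1 assumes the vxlan backend registered in section 5.1):

ip addr show flannel.1
cat /run/flannel/subnet.env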

5.6. Pass the Flannel Network Range to Docker

        Once Flannel is installed and started, it generates /run/flannel/subnet.env with contents like:

cat /run/flannel/subnet.env 
DOCKER_OPT_BIP="--bip=10.254.91.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.254.91.1/24 --ip-masq=true --mtu=1450"

        Stop the docker service

sudo systemctl stop docker

         Modify the Docker configuration: edit /usr/lib/systemd/system/docker.service and, under the [Service] section, add the EnvironmentFile line and change the existing ExecStart as follows (--graph also moves Docker's storage directory):

EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS --graph /data2/docker

# /etc/docker/daemon.json may also contain a Docker storage setting; if it does, keep it consistent with this one, otherwise the value in daemon.json takes precedence.

         Start docker

sudo systemctl daemon-reload
sudo systemctl start docker

         Check that the configuration took effect: use ifconfig to confirm that docker0 is on the same subnet as flannel

ifconfig


docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.254.91.1  netmask 255.255.255.0  broadcast 10.254.91.255
        inet6 fe80::42:92ff:fee0:a5e9  prefixlen 64  scopeid 0x20<link>
        ether 02:42:92:e0:a5:e9  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2  bytes 180 (180.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

        The inet address above matches the subnet from the DOCKER_OPT_BIP entry in /run/flannel/subnet.env.

5.7. Repeat on the Other Machines

        The steps are the same as above; repeat them on every machine.

6. Install the K8s Master Node (kube102)

6.1. Create the Kubernetes Certificates

         Kubernetes CA configuration

cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

       Kubernetes CA certificate signing request

cat << EOF | tee ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

        Kubernetes api-server certificate signing request

cat << EOF | tee server-csr.json
{
    "CN": "kubernetes",
    "hosts": [
      "127.0.0.1",
      "10.254.0.1",
      "192.168.0.102",
      "192.168.0.101",
      "192.168.0.100",
      "192.168.0.99"
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

Note: 10.254.0.1 is the first IP of the network range we wrote into etcd when deploying flannel; leaving it out will cause errors.

       Create the kube-proxy certificate signing request

cat << EOF | tee kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

        Generate the CA certificate and key

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

        Generate the api-server certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

        Generate the kube-proxy certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
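
        The configuration files below expect these certificates under /k8s/kubernetes/ssl, so copy them there (assuming you generated them in a scratch directory; the glob patterns below are illustrative):

cp ca*.pem server*.pem kube-proxy*.pem /k8s/kubernetes/ssl/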

6.2. Download the Binary Package

        Download the package

wget https://dl.k8s.io/v1.14.1/kubernetes-server-linux-amd64.tar.gz

        Extract, move the binaries to /k8s/kubernetes/bin, and make them executable

tar xvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-scheduler kube-apiserver kube-controller-manager kubectl /k8s/kubernetes/bin/
cd /k8s/kubernetes/bin/
sudo chmod +x *

6.3. Deploy kube-apiserver

        Create the TLS bootstrapping token

# generate a random token
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat << EOF | tee /k8s/kubernetes/cfg/token.csv
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

         Create the kube-apiserver configuration file

vim /k8s/kubernetes/cfg/kube-apiserver 

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.0.102:2379,https://192.168.0.101:2379,https://192.168.0.100:2379,https://192.168.0.99:2379 \
--bind-address=192.168.0.102 \
--secure-port=6443 \
--advertise-address=192.168.0.102 \
--allow-privileged=true \
--service-cluster-ip-range=10.254.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/k8s/kubernetes/ssl/server.pem  \
--tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem \
--kubelet-client-certificate=/k8s/kubernetes/ssl/server.pem \
--kubelet-client-key=/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/k8s/etcd/ssl/server-key.pem"

            Create the apiserver service file

vim /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

       Reload systemd and start the service

sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver
sudo systemctl start kube-apiserver
sudo systemctl status kube-apiserver

6.4. Deploy kube-scheduler

        Create the kube-scheduler configuration file

vim  /k8s/kubernetes/cfg/kube-scheduler 

KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"

        Create the kube-scheduler service file

vim /usr/lib/systemd/system/kube-scheduler.service 

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

        Start the kube-scheduler service

sudo systemctl daemon-reload
sudo systemctl enable  kube-scheduler
sudo systemctl start kube-scheduler
sudo systemctl status kube-scheduler

6.5. Deploy kube-controller-manager

        Create the kube-controller-manager configuration file

vim /k8s/kubernetes/cfg/kube-controller-manager

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.254.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem"

         Create the kube-controller-manager service file

vim /usr/lib/systemd/system/kube-controller-manager.service 

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

          Start the kube-controller-manager service

sudo systemctl daemon-reload
sudo systemctl enable  kube-controller-manager
sudo systemctl start kube-controller-manager
sudo systemctl status kube-controller-manager

        Verify that the kubernetes master services are healthy

kubectl get cs
NAME                  STATUS     MESSAGE             ERROR
scheduler             Healthy    ok                  
controller-manager   Healthy    ok                  
etcd-2                Healthy    {"health":"true"}   
etcd-3                Healthy    {"health":"true"}   
etcd-0                Healthy    {"health":"true"}   
etcd-1                Healthy    {"health":"true"}

7. Install the K8s Node Components (kube101, kube100, kube99)

7.1. Distribute Certificates and Configuration

        Distribute the certificates

scp -r /k8s/kubernetes/ssl/* kube@kube101:/k8s/kubernetes/ssl/
scp -r /k8s/kubernetes/ssl/* kube@kube100:/k8s/kubernetes/ssl/
scp -r /k8s/kubernetes/ssl/* kube@kube99:/k8s/kubernetes/ssl/

         Distribute the configuration

scp -r /k8s/kubernetes/cfg/* kube@kube99:/k8s/kubernetes/cfg/
scp -r /k8s/kubernetes/cfg/* kube@kube100:/k8s/kubernetes/cfg/
scp -r /k8s/kubernetes/cfg/* kube@kube101:/k8s/kubernetes/cfg/

        Download the binary package

wget https://dl.k8s.io/v1.14.1/kubernetes-node-linux-amd64.tar.gz

        Extract and distribute the binaries

tar -zxvf kubernetes-node-linux-amd64.tar.gz
cd kubernetes/node/bin
scp -r kubectl kubelet kube-proxy kube@kube99:/k8s/kubernetes/bin/
scp -r kubectl kubelet kube-proxy kube@kube100:/k8s/kubernetes/bin/
scp -r kubectl kubelet kube-proxy kube@kube101:/k8s/kubernetes/bin/

7.2. Deploy kubelet

       Create the kubelet bootstrap and kube-proxy kubeconfig files via a script

vim /k8s/kubernetes/cfg/environment.sh

#!/bin/bash
# create the kubelet bootstrapping kubeconfig
# BOOTSTRAP_TOKEN is the value set on the master; see token.csv
BOOTSTRAP_TOKEN=af2b5abb59b0ad642cd35d48edaebf41
# apiserver address
KUBE_APISERVER="https://192.168.0.102:6443"
# set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------
# create the kube-proxy kubeconfig

kubectl config set-cluster kubernetes \
  --certificate-authority=/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=/k8s/kubernetes/ssl/kube-proxy.pem \
  --client-key=/k8s/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

        Run the script on every node machine

cd /k8s/kubernetes/cfg/   # the script writes its output to the current directory, so run it from here
sh environment.sh
 
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" modified.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" modified.
Switched to context "default".

        Check the generated bootstrap.kubeconfig and kube-proxy.kubeconfig files

[kube@kube99 /k8s/kubernetes/cfg] ls
bootstrap.kubeconfig  environment.sh  flanneld.conf  kube-proxy.kubeconfig  token.csv

        Create the kubelet parameter configuration template

vim /k8s/kubernetes/cfg/kubelet.config  

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.0.101                         # the node's own IP
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.254.0.10"]                                 
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

        Create the kubelet options file (set --hostname-override to the node's own IP)

vim /k8s/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.0.101 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

        Create the kubelet service file

sudo vim /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kubelet
ExecStart=/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

         Bind the kubelet-bootstrap user to the system:node-bootstrapper cluster role. kubectl connects to localhost:8080 by default, so this can be run on the master.

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

        Start the kubelet service

sudo systemctl daemon-reload
sudo systemctl enable  kubelet
sudo systemctl start kubelet
sudo systemctl status kubelet

        Repeat this on every node

7.3. Approve the kubelet CSR Requests on the Master

        Run kubectl get csr on the master

kubectl get csr

NAME                                                       AGE    REQUESTOR           CONDITION
node-csr-0M5oFrs8nN03HJLXk3acgadmzQ5QUrXQUNQw44lGMnA   111s   kubelet-bootstrap   Pending
node-csr-K8SkSwzOC-3KOi6MCdvmqYFR5ZBXCwX1cOwX3zsSYg4   103s   kubelet-bootstrap   Pending
node-csr-q6sCN7crKXD9ZIsZEfsNHWXB-bU3MuUwYSbLLONtmpg   108s   kubelet-bootstrap   Pending

 

       Approve a CSR manually

kubectl certificate approve node-csr-0M5oFrs8nN03HJLXk3acgadmzQ5QUrXQUNQw44lGMnA

certificatesigningrequest.certificates.k8s.io/node-csr-0M5oFrs8nN03HJLXk3acgadmzQ5QUrXQUNQw44lGMnA approved

       Verify the approval

kubectl get csr

NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-0M5oFrs8nN03HJLXk3acgadmzQ5QUrXQUNQw44lGMnA   5m42s   kubelet-bootstrap   Approved,Issued
node-csr-K8SkSwzOC-3KOi6MCdvmqYFR5ZBXCwX1cOwX3zsSYg4   5m34s   kubelet-bootstrap   Pending
node-csr-q6sCN7crKXD9ZIsZEfsNHWXB-bU3MuUwYSbLLONtmpg   5m39s   kubelet-bootstrap   Pending

         The CONDITION of node-csr-0M5oFrs8nN03HJLXk3acgadmzQ5QUrXQUNQw44lGMnA has changed from Pending to Approved,Issued, which means it was accepted. Approve the remaining two requests the same way, then check the node list

kubectl get node

NAME             STATUS   ROLES    AGE     VERSION
192.168.0.100   Ready    <none>   16s     v1.14.1
192.168.0.101   Ready    <none>   4m36s   v1.14.1
192.168.0.99    Ready    <none>   30s     v1.14.1

7.4. Deploy kube-proxy

        Create the kube-proxy configuration file (set --bind-address and --hostname-override to the node's own IP)

vim /k8s/kubernetes/cfg/kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--bind-address=192.168.0.101 \
--hostname-override=192.168.0.101 \
--cluster-cidr=10.254.0.0/16 \
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"

        Create the kube-proxy service file

sudo vim /usr/lib/systemd/system/kube-proxy.service 
 
[Unit]
Description=Kubernetes Proxy
After=network.target
 
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-proxy
ExecStart=/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

        Start the kube-proxy service

sudo systemctl daemon-reload 
sudo systemctl enable kube-proxy 
sudo systemctl start kube-proxy
sudo systemctl status  kube-proxy

        Install kube-proxy on every node machine

8. Deploy the Management UI

8.1. Deploy the Dashboard

          Edit the dashboard manifest

vim kubernetes-dashboard.yaml

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secrets ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kube-system
type: Opaque
data:
  csrf: ""

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
  verbs: ["get", "update", "delete"]
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---

# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/k8sth/kubernetes-dashboard-amd64:v1.10.0
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard

        Deploy it

kubectl create -f kubernetes-dashboard.yaml

       Check the deployment

kubectl get pods -n kube-system

NAME                                      READY    STATUS     RESTARTS   AGE
kubernetes-dashboard-7565cb87f7-5ftnj     1/1      Running    0           2m

8.2. Create an Admin User

        Create the admin-user.yaml manifest

vim admin-user.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

        Create the admin-user

kubectl create -f admin-user.yaml

        Get the admin user's token

kubectl describe  secret admin-user --namespace=kube-system

Name:         admin-user-token-s7cdz
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 384a8a03-df54-11e9-9409-1866dae6f3a4

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1359 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lzxczm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXM3Y2R6Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWfadfNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2U23df56tYWNjb3VudC51aWQiOiIzODRhOGEwMy1kZjU0LTExZTktOTQwOS0xODY2ZGFlNmYzYTQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.CdT4-srru0zKj5uxBLPKa59TZRGuh2e24DnkrcW9879P_4pTlihjKqaBe_1_1ianb-CDAQTpJnzSL2gqTp8ohGKEVs5FbX_trD_PKbvGaegItjeO9J3r_W7uXdIKz7Gr917hztUDvqNjKcDj3XLyGUJ8ZcQp4edABK_gESCEfMNCZQ8iMsvLD_nt0fKAbNzbVgxl1d6sodDlP6HbVfHe5dYJe49iekBtoHIX4L2tsoDAHW8oqUegQmjbaLg_F7vKAwK9ueIrQOVskRG4IE4TtugTbKSQAyCzhM_7y0OoH87k6eyG0tqBN43O9brTZCZ9gRjbKf31nlh2WMIjJGyP8A
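
        If describing the secret by name prefix ever fails, look up the generated secret name first:

kubectl -n kube-system get secret | grep admin-user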

8.3. Access the Dashboard

        Find the IP of the node running the dashboard

kubectl get pods -n kube-system -o wide |grep dashboard

kubernetes-dashboard-7565cb87f7-5ftnj   1/1     Running   0          43m   10.254.9.2   192.168.0.99    <none>           <none>

        Open Firefox and go to https://192.168.0.99:30001, choose Token, and paste the token obtained in the previous step.

9. Deploy KubeDNS

9.1. Download the Manifests

git clone https://github.com/rootsongjc/kubernetes-handbook.git
cd kubernetes-handbook/manifests/kubedns/

9.2. Configure the Manifests

        Replace the image addresses in kubedns-controller.yaml

Image:
harbor-001.jimmysong.io/library/k8s-dns-kube-dns-amd64:1.14.1
Change to:
sapcc/k8s-dns-kube-dns-amd64:1.14.1

Image:
harbor-001.jimmysong.io/library/k8s-dns-sidecar-amd64:v1.14.1
Change to:
sapcc/k8s-dns-sidecar-amd64:v1.14.1

Image:
harbor-001.jimmysong.io/library/k8s-dns-dnsmasq-nanny-amd64:v1.14.1
Change to:
sapcc/k8s-dns-dnsmasq-nanny-amd64:v1.14.1

        Change the clusterIP value in kubedns-svc.yaml

clusterIP: 10.254.0.10    # must match clusterDNS: ["10.254.0.10"] in kubelet.config

9.3. Deploy KubeDNS

      Apply the manifests

cd kubernetes-handbook/manifests/kubedns/
kubectl create -f .

       Check the deployment

kubectl get pods -n kube-system -o wide | grep dns

kube-dns-849bdf6664-xds7h               3/3     Running   0          121m   10.254.1.2   192.168.0.101   <none>           <none>
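
        To confirm that cluster DNS actually resolves service names, a quick check with a throwaway busybox pod (the busybox:1.28 tag is an assumption; its nslookup works reliably):

kubectl run -it --rm --restart=Never dns-test --image=busybox:1.28 -- nslookup kubernetes.default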

Closing Remarks

        That completes the binary installation of Kubernetes. If you run into any problems, join QQ group 526855734.
