Two ways to deploy a production Kubernetes cluster

There are currently two main ways to deploy a Kubernetes cluster in production:
  • kubeadm

kubeadm is a Kubernetes deployment tool. It provides kubeadm init and kubeadm join for quickly standing up a cluster.

Official docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

  • Binary packages

Download the release binaries from GitHub and deploy each component by hand to assemble the Kubernetes cluster.

kubeadm lowers the barrier to entry, but it hides many details, which makes problems hard to troubleshoot. If you want something easier to control, deploying from binary packages is recommended: manual deployment is more work, but you learn how the components fit together along the way, which also helps with later maintenance. (A minimal kubeadm sketch follows purely for comparison; the rest of this article uses the binary-package route.)
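
For comparison only, bootstrapping with kubeadm usually comes down to two commands. This is a hedged sketch: the advertise address matches the master planned below, while the pod CIDR, token, and hash are illustrative placeholders rather than values used in this deployment.

# On the master: initialize the control plane
kubeadm init --apiserver-advertise-address=20.0.0.41 --pod-network-cidr=10.244.0.0/16

# On each node: join the cluster with the token printed by `kubeadm init`
kubeadm join 20.0.0.41:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>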

Kubernetes architecture overview

(architecture diagram)

Installation requirements

Before starting, the machines used for the Kubernetes cluster must meet the following requirements:

  • One or more machines running CentOS 7.x x86_64;
  • Hardware: at least 2 GB of RAM, at least 2 CPUs, and at least 30 GB of disk;
  • Full network connectivity between all machines in the cluster;
  • Internet access, in order to pull images;
  • Swap disabled.
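
A quick way to verify these requirements on each machine (standard CentOS 7 commands; the thresholds simply restate the list above):

nproc                        # CPU count, should be >= 2
free -h                      # total memory, should be >= 2G
df -h /                      # root filesystem, should have >= 30G
swapon -s                    # should print nothing once swap is disabled
cat /etc/redhat-release      # should be CentOS 7.x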

Environment preparation

Single-master server plan:

Role     IP           Components to install
master   20.0.0.41    kube-apiserver kube-controller-manager kube-scheduler etcd
node1    20.0.0.42    kubelet kube-proxy docker flannel etcd
node2    20.0.0.43    kubelet kube-proxy docker flannel etcd

Part 1: Deploying the etcd cluster

1.1: Create a working directory for the binary packages

The firewall, SELinux enforcement, and swap have been disabled on all VMs:

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent
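
The SELinux step mentioned above is not shown in the transcript; on CentOS 7 it is typically disabled like this:

# Disable SELinux
setenforce 0                                                           # temporary
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config   # permanent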

[root@master ~]# mkdir k8s              'create the directory'
[root@master ~]# cd k8s/
[root@master k8s]# rz -E      
rz waiting to receive.
[root@master k8s]# ls
etcd-cert.sh  etcd.sh                    # dragged in from the host machine
etcd-cert.sh: certificate script  etcd.sh: service script

[root@master k8s]# mkdir etc-cert        '//create the certificate directory'
[root@master k8s]# ls
etc-cert  etcd-cert.sh  etcd.sh
[root@master k8s]# cd etc-cert/

# Download the cfssl certificate tooling
cfssl includes a command-line tool and an HTTP API service for signing, verifying, and bundling TLS certificates. It is written in Go.

[root@localhost k8s]# vim cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo

# I already downloaded these three tools locally, so I just copy them into the directory and use them directly
[root@master etc-cert]# rz -E
rz waiting to receive.
[root@master etc-cert]# ls
cfssl  cfssl-certinfo  cfssljson

cfssl: the command-line tool and HTTP API service for signing, verifying, and bundling TLS certificates
cfssljson: takes the JSON output from cfssl and multirootca and writes the certificates, keys, CSRs, and bundles to disk


# Move the three tools to /usr/local/bin so they are on the system PATH
[root@master etc-cert]# mv cfssl* /usr/local/bin/
[root@master etc-cert]# ls /usr/local/bin/
cfssl  cfssl-certinfo  cfssljson

# Add execute permission
[root@master bin]# chmod +x *
[root@master bin]# ls
cfssl  cfssl-certinfo  cfssljson

1.2: Create the SSL certificates

CFSSL can act as an internal certificate authority for obtaining and managing certificates.

Running a CA requires a CA certificate and the corresponding CA private key. Anyone who knows the private key can act as the CA and issue certificates, so protecting the private key is critical.

1.21: Configure the self-signed CA for etcd
'//write the following content to ca-config.json'
[root@master bin]# cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"     
        ]  
      } 
    }         
  }
}
EOF


# default: the default signing policy; here the certificate lifetime is 87600h (10 years)
# www: the name of this profile; it is referenced later with -profile=www when signing the etcd certificates
# signing: the certificate can be used to sign other certificates (the generated ca.pem has CA=TRUE)
# server auth: a server can use a certificate signed by this CA to authenticate itself to clients
# client auth: a client can use a certificate signed by this CA to authenticate itself to servers
# expiry: expiration time for this profile; if omitted, the default value applies


# Move ca-config.json into the etc-cert certificate directory
[root@master bin]# mv ca-config.json /root/k8s/etc-cert
[root@master bin]# cd /root/k8s/etc-cert/
[root@master etc-cert]# ls
ca-config.json
1.22: Issue the etcd HTTPS certificates with the self-signed CA
  • Create the CA certificate signing request file
[root@master etc-cert]# cat > ca-csr.json <<EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF


[root@master etc-cert]# ls
ca-config.json  ca-csr.json

Field reference:

CN: Common Name. Browsers use this field to check whether a site is legitimate, so it is usually the domain name; here it names the CA. Very important.
key: the algorithm and key size used to generate the certificate
names: additional attributes
C: Country
ST: State or province
L: Locality (city)

Note: the hosts field of the server certificate request (see 1.24 below) must list the cluster-internal IP of every etcd node; none may be missing. To make later expansion easier, you can also add a few spare IPs.

1.23: Generate the CA certificate, CA private key, and CSR (certificate signing request)
  • Generate ca-key.pem and ca.pem
[root@master etc-cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2020/09/28 16:27:03 [INFO] generating a new CA key and certificate from CSR
2020/09/28 16:27:03 [INFO] generate received request
2020/09/28 16:27:03 [INFO] received CSR
2020/09/28 16:27:03 [INFO] generating key: rsa-2048
2020/09/28 16:27:03 [INFO] encoded CSR
2020/09/28 16:27:03 [INFO] signed certificate with serial number 549599223717947770363553280189357605844100100541

# This command generates the files the CA needs to run: ca-key.pem (private key) and ca.pem (certificate), plus ca.csr (certificate signing request) for cross-signing or re-signing


[root@master etc-cert]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem

ca.pem               # CA certificate
ca-key.pem           # CA private key
1.24: Create the etcd server certificate
  • Generate the certificate used for communication between etcd members [note the IP addresses]
'//create the server-side signing request'
[root@master etc-cert]# cat > server-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
    "20.0.0.41",
    "20.0.0.42",
    "20.0.0.43"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF

[root@master etc-cert]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server-csr.json

# Generate the etcd communication certificate, used for mutual authentication between etcd members; this produces server.pem and server-key.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

[root@master etc-cert]# ls
ca-config.json  ca-csr.json  ca.pem      server-csr.json  server.pem
ca.csr          ca-key.pem   server.csr  server-key.pem

1.3: Configure the etcd configuration files

# Move the two packages into the k8s directory
etcd-v3.3.10-linux-amd64.tar.gz   kubernetes-server-linux-amd64.tar.gz

[root@master k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz    '//extract'
[root@master k8s]# ls etcd-v3.3.10-linux-amd64
Documentation  etcdctl            README.md
etcd           README-etcdctl.md  READMEv2-etcdctl.md

# Create directories for the etcd config files, binaries, and certificates
[root@master k8s]# mkdir /opt/etcd/{cfg,bin,ssl} -p
[root@master k8s]# ls /opt/etcd/
bin  cfg  ssl

'//put the binaries in the bin directory'
[root@master k8s]# mv etcd-v3.3.10-linux-amd64/etcd /opt/etcd/bin/
[root@master k8s]# mv etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/
[root@master k8s]# ls /opt/etcd/bin/
etcd  etcdctl

'//copy all the certificates into the ssl directory'
[root@master k8s]# cp etc-cert/*.pem /opt/etcd/ssl/
[root@master k8s]# ls /opt/etcd/ssl/
ca-key.pem  ca.pem  server-key.pem  server.pem
  • On the master, run the script, passing the local member name and address; the script then blocks, waiting for the other members to join
'//running etcd.sh generates the systemd unit and the config file'  '//2380: cluster-internal peer port; 2379: client port each etcd member serves'
[root@master k8s]# bash etcd.sh etcd01 20.0.0.41 etcd02=https://20.0.0.42:2380,etcd03=https://20.0.0.43:2380
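
The etcd.sh script itself is not reproduced in this article. A hedged sketch of what it does, consistent with the generated configuration and the process flags shown below (paths and variable names are assumptions), is:

#!/bin/bash
# etcd.sh <name> <ip> <peer list>  -- writes the etcd config file and systemd unit, then starts etcd
ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3
WORK_DIR=/opt/etcd

cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="${ETCD_NAME}=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
EnvironmentFile=$WORK_DIR/cfg/etcd
ExecStart=$WORK_DIR/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=$WORK_DIR/ssl/server.pem \
--key-file=$WORK_DIR/ssl/server-key.pem \
--peer-cert-file=$WORK_DIR/ssl/server.pem \
--peer-key-file=$WORK_DIR/ssl/server-key.pem \
--trusted-ca-file=$WORK_DIR/ssl/ca.pem \
--peer-trusted-ca-file=$WORK_DIR/ssl/ca.pem
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd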
  • Open another session and you will see that the etcd process is already running
[root@master ~]# ps -ef | grep etcd
root      88738  87599  0 16:57 pts/3    00:00:00 bash etcd.sh etcd01 20.0.0.41 etcd02=https://20.0.0.42:2380,etcd03=https://20.0.0.43:2380
root      88785  88738  0 16:57 pts/3    00:00:00 systemctl restart etcd
root      88791      1  0 16:57 ?        00:00:00 /opt/etcd/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://20.0.0.41:2380 --listen-client-urls=https://20.0.0.41:2379,http://127.0.0.1:2379 --advertise-client-urls=https://20.0.0.41:2379 --initial-advertise-peer-urls=https://20.0.0.41:2380 --initial-cluster=etcd01=https://20.0.0.41:2380,etcd02=https://20.0.0.42:2380,etcd03=https://20.0.0.43:2380 --initial-cluster-token=etcd-cluster --initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
root      88867  88824  0 16:59 pts/1    00:00:00 grep --color=auto etcd
1.31: Copy the certificates and systemd unit to the two node machines
[root@master k8s]# scp -r /opt/etcd/ root@20.0.0.42:/opt/
[root@master k8s]# scp -r /opt/etcd/ root@20.0.0.43:/opt/

# Copy the systemd unit to the other nodes
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service root@20.0.0.42:/usr/lib/systemd/system/
root@20.0.0.42's password:        '//enter the password'
etcd.service                                100%  923   342.3KB/s   00:00    
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service root@20.0.0.43:/usr/lib/systemd/system/
root@20.0.0.43's password: 
etcd.service                                100%  923     1.1MB/s   00:00 

1.4: Deploy etcd on the node machines

1.41: Modify the etcd configuration file on node1/node2
  • Change the member name and IP addresses accordingly
[root@node1 ~]# vim /opt/etcd/cfg/etcd

#[Member]
ETCD_NAME="etcd02"       '//修改名称'
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://20.0.0.42:2380"     '//修改此处为当前服务器IP'
ETCD_LISTEN_CLIENT_URLS="https://20.0.0.42:2379"   '//修改此处为当前服务器IP'

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://20.0.0.42:2380"   '//修改此处为当前服务器IP'
ETCD_ADVERTISE_CLIENT_URLS="https://20.0.0.42:2379"         '//修改此处为当前服务器IP'
ETCD_INITIAL_CLUSTER="etcd01=https://20.0.0.41:2380,etcd02=https://20.0.0.42:2380,etcd03=https://20.0.0.43:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

# Modify node2
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://20.0.0.43:2380"      '//change to this server's IP'
ETCD_LISTEN_CLIENT_URLS="https://20.0.0.43:2379"    '//change to this server's IP'

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://20.0.0.43:2380"   '//change to this server's IP'
ETCD_ADVERTISE_CLIENT_URLS="https://20.0.0.43:2379"         '//change to this server's IP'
ETCD_INITIAL_CLUSTER="etcd01=https://20.0.0.41:2380,etcd02=https://20.0.0.42:2380,etcd03=https://20.0.0.43:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
  • ETCD_NAME: member name, unique within the cluster
  • ETCD_DATA_DIR: data directory
  • ETCD_LISTEN_PEER_URLS: listen address for cluster (peer) traffic
  • ETCD_LISTEN_CLIENT_URLS: listen address for client traffic
  • ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
  • ETCD_ADVERTISE_CLIENT_URLS: client address advertised to clients
  • ETCD_INITIAL_CLUSTER: addresses of all cluster members
  • ETCD_INITIAL_CLUSTER_TOKEN: cluster token
  • ETCD_INITIAL_CLUSTER_STATE: state when joining; new for a new cluster, existing to join an already running cluster
1.42: Start the etcd service
[root@node1 system]# systemctl start etcd

// Check the status
[root@node1 system]# systemctl status etcd.service 
 etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
   Active: active (running) since  2020-09-28 17:16:13 CST; 3min 4s ago
 Main PID: 11408 (etcd)
    Tasks: 13
   CGroup: /system.slice/etcd.service
           └─11408 /opt/etcd/bin/etcd --name=etcd02 --data-dir=/var/lib/etcd/default.etcd -
1.43: Check cluster health
[root@master k8s]# /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem \
--cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://20.0.0.41:2379,https://20.0.0.42:2379,https://20.0.0.43:2379" \
cluster-health
member 66dc3290a78a5583 is healthy: got healthy result from https://20.0.0.41:2379
member 7a5ad6a582dccc31 is healthy: got healthy result from https://20.0.0.43:2379
member bfea6e48f5ac445a is healthy: got healthy result from https://20.0.0.42:2379
cluster is healthy

If you see the output above, the cluster has been deployed successfully. If there is a problem, the first step is to check the logs: /var/log/messages or journalctl -u etcd.

Part 2: Deploying the Docker engine

Install the Docker engine on node1 and node2.

For details, see https://blog.csdn.net/weixin_47151643/article/details/108695943

Part 3: Deploying the Flannel network

Flannel networking background

  • Overlay Network: a virtualized network layered on top of the underlying (underlay) network, in which hosts are connected by virtual links
  • VXLAN: encapsulates the original packet in UDP, using the underlay network's IP/MAC as the outer headers, transmits it over the Ethernet, and at the destination the tunnel endpoint decapsulates it and delivers the data to the target address
  • Flannel: one kind of overlay network; it likewise wraps the original packet inside another packet for routing and forwarding, and currently supports UDP, VXLAN, AWS VPC, GCE routing, and other forwarding backends


3.1: Write the cluster Pod network configuration into etcd

[root@master k8s]# cd /opt/etcd/ssl/

[root@master ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://20.0.0.41:2379,https://20.0.0.42:2379,https://20.0.0.43:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}


'//check the written content from a node'
[root@node1 ssl]# pwd       '//this must be run from the certificate directory'
/opt/etcd/ssl

[root@node1 ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://20.0.0.41:2379,https://20.0.0.42:2379,https://20.0.0.43:2379" get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}

'//anything written to the cluster can be read back from any member'
  • Copy the flannel package to all node machines (flannel only needs to be deployed on the nodes)
# Copy the flannel package into each node's home directory
[root@node1 ~]# ls
anaconda-ks.cfg                     initial-setup-ks.cfg  模板  图片  下载  桌面
flannel-v0.10.0-linux-amd64.tar.gz  公共                  视频  文档  音乐

# Extract on every node   '//only node1 is shown here'
[root@node1 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz 
flanneld               '//the flanneld binary'
mk-docker-opts.sh     '//helper script that generates docker network options'
README.md

# On both node machines, create the k8s working directories and move the two files into place
[root@node1 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p     '//create config, binary, and certificate directories'
[root@node1 ~]# ls /opt/kubernetes/
bin  cfg  ssl                                               '//the directories exist but are still empty'
[root@node1 bin]# mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/   '//move the flannel binaries into the bin directory'
  • Create the flannel.sh script on both node machines

  • The script creates the config file and the systemd unit; the port used, 2379, is the client port served by the etcd members

[root@node1 ~]# vim flannel.sh

#!/bin/bash

ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"

EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
  • Run the script to bring up the flannel network
[root@node1 ~]# bash flannel.sh https://20.0.0.41:2379,https://20.0.0.42:2379,https://20.0.0.43:2379

Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.                    '//confirm the flanneld service started correctly'
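
Once flanneld is running, each node registers its subnet lease in etcd. As an optional check (same etcdctl v2 syntax as used above), you can list the allocated subnets from the master:

[root@master ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://20.0.0.41:2379,https://20.0.0.42:2379,https://20.0.0.43:2379" ls /coreos.com/network/subnets
# one /coreos.com/network/subnets/172.17.x.0-24 entry should appear per node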

3.2: Connect docker to flannel

  • This lets containers on different nodes reach each other through flannel
[root@node1 ~]# vim /usr/lib/systemd/system/docker.service 

# In the [Service] section:
EnvironmentFile=/run/flannel/subnet.env       '//environment file written by flannel (add this line)'
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock    '//add $DOCKER_NETWORK_OPTIONS'


  • Check the subnet flannel assigned to docker
'//node1'
[root@node1 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.91.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
// Note: bip sets the bridge subnet docker starts with
DOCKER_NETWORK_OPTIONS=" --bip=172.17.91.1/24 --ip-masq=false --mtu=1450"

'//node2'
[root@node2 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.5.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.5.1/24 --ip-masq=false --mtu=1450"
  • Restart the services
[root@node1 ~]# systemctl daemon-reload 
[root@node1 ~]# systemctl restart docker
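
As an optional check, confirm that dockerd actually picked up the flannel-provided options after the restart (the --bip value should match this node's subnet.env):

[root@node1 ~]# ps -ef | grep dockerd | grep -v grep
# the dockerd command line should now contain --bip=172.17.91.1/24 --ip-masq=false --mtu=1450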
  • Inspect the flannel network
'//node2'    docker0 is bridged to flannel
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.5.1  netmask 255.255.255.0  broadcast 172.17.5.255
        ether 02:42:79:2f:71:7a  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.5.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::281b:a8ff:fe80:435d  prefixlen 64  scopeid 0x20<link>
        ether 2a:1b:a8:80:43:5d  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 37 overruns 0  carrier 0  collisions 0

'//node1'
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.91.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::2479:53ff:fefa:2fda  prefixlen 64  scopeid 0x20<link>
        ether 26:79:53:fa:2f:da  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 37 overruns 0  carrier 0  collisions 0
        
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.91.1  netmask 255.255.255.0  broadcast 172.17.91.255
        ether 02:42:e1:b0:2f:dc  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
  • Test that containers on the two nodes can ping each other across docker0, proving that flannel is routing the traffic
[root@node1 ~]# docker run -it centos:7 /bin/bash            '//create and run a container on both nodes'
[root@7ad1370c9e11 /]#  yum install net-tools -y            '//install net-tools in both containers'

# Check the container's address
[root@7ad1370c9e11 /]# ifconfig                 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.91.2  netmask 255.255.255.0  broadcast 172.17.91.255
 .............output omitted..............

# node2 container's IP
[root@90fb52e39615 /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.5.2  netmask 255.255.255.0  broadcast 172.17.5.255
'//the node1 container's IP is 172.17.91.2 and the node2 container's IP is 172.17.5.2'

# Test connectivity between the containers
[root@7ad1370c9e11 /]# ping 172.17.5.2                  '//the node1 container pings the node2 container'
PING 172.17.5.2 (172.17.5.2) 56(84) bytes of data.
64 bytes from 172.17.5.2: icmp_seq=1 ttl=62 time=0.845 ms
64 bytes from 172.17.5.2: icmp_seq=2 ttl=62 time=0.984 ms

[root@90fb52e39615 /]# ping 172.17.91.2
PING 172.17.91.2 (172.17.91.2) 56(84) bytes of data.
64 bytes from 172.17.91.2: icmp_seq=1 ttl=62 time=0.502 ms
64 bytes from 172.17.91.2: icmp_seq=2 ttl=62 time=0.420 ms
'//containers on different nodes can reach each other: the flannel network is deployed successfully'

Part 4: Deploying the Kubernetes master components

The Kubernetes master node runs the following components:
kube-apiserver
kube-scheduler
kube-controller-manager
kube-scheduler and kube-controller-manager can run in clustered mode: leader election picks one working process while the other instances stay blocked on standby.
  • The kubelet bootstrap flow for the nodes works as follows: on the master we bind the kubelet-bootstrap user to the cluster role, then set up the certificate plumbing so that the nodes can be detected by and successfully connect to the master.

4.1: Create the Kubernetes certificates

# Generate the apiserver certificates
[root@master k8s]# rz -E
rz waiting to receive.
[root@master k8s]# ls
etc-cert      etcd-v3.3.10-linux-amd64              master.zip
etcd-cert.sh  etcd-v3.3.10-linux-amd64.tar.gz
etcd.sh       kubernetes-server-linux-amd64.tar.gz

[root@master k8s]# mkdir master                             
[root@master k8s]# mv master.zip master
[root@master k8s]# cd master/
[root@master master]# ls
master.zip 
[root@master master]# unzip master.zip                         '//unzip'
[root@master master]# chmod +x controller-manager.sh           '//add execute permission'
[root@master master]# ls
apiserver.sh  controller-manager.sh  master.zip  scheduler.sh

[root@master master]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p    '//create the k8s working directories'
[root@master master]# ls /opt/kubernetes/
bin  cfg  ssl

# Create the certificate directory
[root@master k8s]# mkdir k8s-cert

# Edit the certificate script. Note: the '//...' annotations in the hosts list below are explanatory and must not appear in the actual JSON.
[root@master k8s-cert]# vim k8s-cert.sh 

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "20.0.0.41",           '//master1'
      "20.0.0.44",           '//master2'
      "20.0.0.100",          '//VIP反向代理地址'
      "20.0.0.49",           '//nginx代理master'
      "20.0.0.50",           '//nginx代理backup'
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

# Generate the Kubernetes certificates
[root@master k8s-cert]#  bash k8s-cert.sh

# List the generated files
[root@master k8s-cert]# ls
admin.csr       ca-config.json  ca.pem               kube-proxy-key.pem  server-key.pem
admin-csr.json  ca.csr          k8s-cert.sh          kube-proxy.pem      server.pem
admin-key.pem   ca-csr.json     kube-proxy.csr       server.csr
admin.pem       ca-key.pem      kube-proxy-csr.json  server-csr.json

'//8 pem files: the CA, kube-proxy, admin, and server certificates, each with its key'
[root@master k8s-cert]#  ls *pem
admin-key.pem  ca-key.pem  kube-proxy-key.pem  server-key.pem
admin.pem      ca.pem      kube-proxy.pem      server.pem
  • Copy the certificates just generated into the Kubernetes certificate directory
[root@master k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/
[root@master k8s-cert]# ls /opt/kubernetes/ssl/
ca-key.pem  ca.pem  server-key.pem  server.pem

4.2: Configure the Kubernetes components

4.21: Create the TLS bootstrapping token

Enable the TLS Bootstrapping mechanism.

Once the master apiserver has TLS authentication enabled, the kubelet and kube-proxy on the nodes must use valid CA-signed certificates to communicate with kube-apiserver. With many nodes, issuing these client certificates by hand is a lot of work and makes scaling the cluster more complex. To simplify this, Kubernetes introduced TLS bootstrapping, which issues client certificates automatically: the kubelet connects to the apiserver as a low-privileged user and requests a certificate, and the apiserver signs the kubelet's certificate dynamically. This approach is strongly recommended on the nodes; currently it is used mainly for the kubelet, while kube-proxy still uses a certificate we issue ourselves.

TLS bootstrapping workflow:

(workflow diagram omitted)

Create the token file referenced in the configuration above:

[root@master k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz 

'//copy the key binaries'
/root/k8s/kubernetes/server/bin

[root@master bin]# cp kube-apiserver kubectl  kube-controller-manager kube-scheduler /opt/kubernetes/bin/
[root@master k8s]# ls /opt/kubernetes/bin/
kube-apiserver  kube-controller-manager  kubectl  kube-scheduler

# Generate the token.csv token file

Format: token,user name,UID,"user group"
You can also generate your own token and substitute it:

[root@master k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '     # generates a random token
f869a2c4ca95aebd952d7e180a52e4b6

[root@master k8s]# cd /opt/kubernetes/cfg/
[root@master cfg]# vim token.csv

'//add the following line'
f869a2c4ca95aebd952d7e180a52e4b6,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

kubelet-bootstrap: the user name; it authorizes the node machines to join the k8s cluster
10001: the UID
4.22: Create the apiserver configuration
  • Start the apiserver
# Prerequisite: the token, certificates, and binaries are all in place

[root@master cfg]# cd /root/k8s/master/
[root@master master]# bash apiserver.sh 20.0.0.41 https://20.0.0.41:2379,https://20.0.0.42:2379,https://20.0.0.43:2379

'//check that the process started successfully'
ps aux | grep kube
....output omitted......

'//listening ports'    6443 is the HTTPS port, 8080 is the HTTP port
[root@master master]# netstat -ntap | grep 6443
tcp        0      0 20.0.0.41:6443          0.0.0.0:*               LISTEN      29143/kube-apiserve 
tcp        0      0 20.0.0.41:6443          20.0.0.41:54456         ESTABLISHED 29143/kube-apiserve 
tcp        0      0 20.0.0.41:54456         20.0.0.41:6443          ESTABLISHED 29143/kube-apiserve 
[root@master master]# netstat -ntap | grep 8080
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      29143/kube-apiserve 
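
apiserver.sh is not reproduced here. As a hedged sketch, the configuration it typically writes to /opt/kubernetes/cfg/kube-apiserver (flag values mirror this cluster's addresses; the exact file contents are an assumption) looks roughly like:

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://20.0.0.41:2379,https://20.0.0.42:2379,https://20.0.0.43:2379 \
--bind-address=20.0.0.41 \
--secure-port=6443 \
--advertise-address=20.0.0.41 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

The script then points a systemd unit's ExecStart at /opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS and starts the service.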
4.23: Configure controller-manager and scheduler
  • Start the controller-manager and scheduler services
[root@master master]# ./scheduler.sh 127.0.0.1   '//start the scheduler service'
[root@master master]# ps aux | grep ku            '//check the processes'

[root@master master]# ./controller-manager.sh 127.0.0.1   '//start the controller-manager'
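
scheduler.sh and controller-manager.sh follow the same pattern as apiserver.sh; a hedged sketch of the configs they generate (both talk to the apiserver's local insecure port 8080, which is why 127.0.0.1 is passed as the argument) is:

# /opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"

# /opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"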
4.24: Check the master node's status
[root@master master]#  /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   

Part 5: Deploying the node components

The Kubernetes worker nodes run the following components:
docker
kubelet
kube-proxy

5.1: The kubelet component

kubelet runs on every worker node. It receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs.

On startup the kubelet automatically registers its node information with kube-apiserver, and the built-in cAdvisor collects and monitors the node's resource usage. For security, this document only opens the secure HTTPS port, authenticating and authorizing every request and rejecting unauthorized access (for example from apiserver or heapster).

5.2: Configure the node machines

# Copy kubelet and kube-proxy to the node machines
[root@master bin]# pwd
/root/k8s/kubernetes/server/bin

[root@master bin]# scp kubelet kube-proxy root@20.0.0.42:/opt/kubernetes/bin/    '//copy to the node'
[root@master bin]# scp kubelet kube-proxy root@20.0.0.43:/opt/kubernetes/bin/

# Check on the nodes
[root@node1 ~]# ls /opt/kubernetes/bin/
flanneld  kubelet  kube-proxy  mk-docker-opts.sh

[root@node2 ~]# ls /opt/kubernetes/bin/
flanneld  kubelet  kube-proxy  mk-docker-opts.sh

# On node1

[root@node1 ~]# unzip node.zip     # the package is in the home directory; unzipping node.zip yields kubelet.sh and proxy.sh (a sketch of what they generate follows)
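
kubelet.sh and proxy.sh are not reproduced here. As a hedged sketch, the kubelet configuration they generate under /opt/kubernetes/cfg (the file names match the ones edited in 5.5 below; the exact values, including the pause image, are assumptions) looks roughly like:

# /opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=20.0.0.42 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

# /opt/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 20.0.0.42
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false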
  • Create the kubeconfig files
[root@master k8s]# mkdir kubeconfig
[root@master k8s]# cd kubeconfig/

[root@master kubeconfig]# rz -E    '//copy kubeconfig.sh into the directory'
rz waiting to receive.
[root@master kubeconfig]# ls
kubeconfig.sh
'//rename kubeconfig.sh to kubeconfig'
[root@master kubeconfig]# mv kubeconfig.sh kubeconfig
[root@master kubeconfig]# vim kubeconfig 


----------------Delete the following section from kubeconfig----------------------------------------------
# Create the TLS Bootstrapping Token
#BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
BOOTSTRAP_TOKEN=0fb61c46f8991b718eb38d27b605b008

cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

-----------------------------------------Keep the original content below----------------------------------------------
APISERVER=$1
SSL_DIR=$2

# Create the kubelet bootstrapping kubeconfig 
export KUBE_APISERVER="https://$APISERVER:6443"

# Set the cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set the client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=f869a2c4ca95aebd952d7e180a52e4b6 \
  --kubeconfig=bootstrap.kubeconfig

# Set the context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the kube-proxy kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Running this script (bash kubeconfig <apiserver IP> <certificate directory>, as shown in 5.23 below) produces the bootstrap.kubeconfig and kube-proxy.kubeconfig files.

5.21: Get the token
[root@master ~]# vim /opt/kubernetes/cfg/token.csv     '//copy the token from this file'
f869a2c4ca95aebd952d7e180a52e4b6

[root@master kubernetes]# vim /root/k8s/kubeconfig/kubeconfig 
 
 --token=f869a2c4ca95aebd952d7e180a52e4b6 \


5.22: Set the environment variable (it can also be written to /etc/profile)
  • This makes the kubectl command easier to use
[root@master kubernetes]# export PATH=$PATH:/opt/kubernetes/bin/
5.23: Check the cluster status
  • All components have started successfully; use the kubectl tool to check the current cluster component status:
[root@master kubernetes]# kubectl get cs           # check cluster component status
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   

The output above shows that the master node components are running normally.

# Generate the kubeconfig files
[root@master kubernetes]# cd /root/k8s/kubeconfig/
[root@master kubeconfig]# bash kubeconfig 20.0.0.41 /root/k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".

[root@master kubeconfig]# ls
bootstrap.kubeconfig  kubeconfig  kube-proxy.kubeconfig

5.3: Copy the kubeconfig files to the node machines

[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@20.0.0.42:/opt/kubernetes/cfg/

[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@20.0.0.43:/opt/kubernetes/cfg/

5.4: Bind the kubelet-bootstrap user to the system cluster role

'//run on the master'
[root@master kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

'//start the kubelet service on the node'
[root@node1 ~]# bash kubelet.sh 20.0.0.42
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

# Check that the kubelet service started
[root@node1 ~]# ps aux | grep kubelet.sh 
root      55319  0.0  0.0 112724  1000 pts/2    S+   20:06   0:00 grep --color=auto kubelet.sh
  • On the master, check node1's certificate request
# View the kubelet certificate signing requests
[root@master kubeconfig]# kubectl get csr
'//Pending: waiting for the cluster to issue a certificate to this node'
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-JXGijlZkjcf7BOA_9vDGiOor2RgWpvBjsM_HBZShvOo   108s   kubelet-bootstrap   Pending
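
The approval step itself is not captured in this transcript; it is the same kubectl certificate approve used for node2 in 5.6 below, applied to the CSR name shown above:

[root@master kubeconfig]# kubectl certificate approve node-csr-JXGijlZkjcf7BOA_9vDGiOor2RgWpvBjsM_HBZShvOo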

'//check the request again: it has now been approved'
[root@master kubeconfig]# kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-JXGijlZkjcf7BOA_9vDGiOor2RgWpvBjsM_HBZShvOo   6m2s   kubelet-bootstrap   Approved,Issued    (the node has been allowed to join the cluster)

'//list the cluster nodes: node1 has joined successfully'
[root@master kubeconfig]# kubectl get node
NAME        STATUS   ROLES    AGE     VERSION
20.0.0.42   Ready    <none>   2m11s   v1.12.3

'//start the kube-proxy service'

[root@node1 ~]# bash proxy.sh 20.0.0.42
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

'//check the status'
[root@node1 ~]# systemctl status kube-proxy.service 
 kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since  2020-09-29 20:17:22 CST; 11s ago
 Main PID: 57648 (kube-proxy)
    Tasks: 0
   Memory: 8.8M

5.5: Deploying node2

  • Copy node1's /opt/kubernetes directory to node2 and then adjust it
[root@node1 ~]# scp -r /opt/kubernetes/ root@20.0.0.43:/opt

'//check on node2'
[root@node2 ~]# cd /opt/kubernetes/
[root@node2 kubernetes]# ls
bin  cfg  ssl

# Delete node1's certificates from ssl; node2 will request its own certificates
[root@node2 kubernetes]# cd ssl/
[root@node2 ssl]# rm -rf *


# Copy the kubelet and kube-proxy systemd units to node2
scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@20.0.0.43:/usr/lib/systemd/system/
  • Modify the three configuration files: kubelet, kubelet.config, and kube-proxy
# Modify kubelet
[root@node2 ssl]# cd ../cfg/
[root@node2 cfg]# vim kubelet

--hostname-override=20.0.0.43 \     '//change to this node's IP'

# Modify kubelet.config (change the node address in it to 20.0.0.43 as well)
[root@node2 cfg]# vim kubelet.config

# Modify kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=20.0.0.43 \     '//change to node2's IP'
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

5.6: Start the kubelet and kube-proxy services

[root@node2 cfg]# systemctl start kubelet.service
[root@node2 cfg]# systemctl enable kubelet.service

[root@node2 cfg]# systemctl start kube-proxy.service 
[root@node2 cfg]# systemctl enable kube-proxy.service 

# On the master, check the certificate requests
[root@master kubeconfig]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-OLYuAknTjux4ooBkqHc8s6Qiqw2WXrCRgth24ut8QVk   43m   kubelet-bootstrap   Pending
node-csr-pqUXvdRk8OCkBfB1N9UxezoSzcSpp_yDyDK8I2tzRNk   64m   kubelet-bootstrap   Approved,Issued
[root@master kubeconfig]# kubectl certificate approve node-csr-OLYuAknTjux4ooBkqHc8s6Qiqw2WXrCRgth24ut8QVk
certificatesigningrequest.certificates.k8s.io/node-csr-OLYuAknTjux4ooBkqHc8s6Qiqw2WXrCRgth24ut8QVk approved

// List the nodes in the cluster
[root@master kubeconfig]#  kubectl get node
NAME        STATUS   ROLES    AGE    VERSION
20.0.0.42   Ready    <none>   146m   v1.12.3
20.0.0.43   Ready    <none>   51s    v1.12.3

At this point the single-master cluster deployment is complete. The next article will scale it out to a multi-master cluster.
