I. System Environment Configuration

Hostname     IP              Role
master001    172.31.0.112    API_Server Scheduler Controller_manager flannel ntpd
node001      172.31.0.113    etcd flannel ntpd kubelet kube-proxy
node002      172.31.0.114    etcd flannel ntpd kubelet kube-proxy
node003      172.31.0.115    etcd flannel ntpd kubelet kube-proxy

Upgrade the system kernel

Upgrade the CentOS kernel on every node of the Kubernetes cluster:

## Import the ELRepo public key
# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
## Install the latest ELRepo release package
# yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
## List the available kernel packages
# yum list available --disablerepo=* --enablerepo=elrepo-kernel
Loaded plugins: fastestmirror

Loading mirror speeds from cached hostfile

* elrepo-kernel: mirror-hk.koddos.net

Available Packages

kernel-lt-devel.x86_64 4.4.226-1.el7.elrepo elrepo-kernel

kernel-lt-doc.noarch 4.4.226-1.el7.elrepo elrepo-kernel

kernel-lt-headers.x86_64 4.4.226-1.el7.elrepo elrepo-kernel

kernel-lt-tools.x86_64 4.4.226-1.el7.elrepo elrepo-kernel

kernel-lt-tools-libs.x86_64 4.4.226-1.el7.elrepo elrepo-kernel

kernel-lt-tools-libs-devel.x86_64 4.4.226-1.el7.elrepo elrepo-kernel

kernel-ml.x86_64 5.7.1-1.el7.elrepo elrepo-kernel

kernel-ml-devel.x86_64 5.7.1-1.el7.elrepo elrepo-kernel

kernel-ml-doc.noarch 5.7.1-1.el7.elrepo elrepo-kernel

kernel-ml-headers.x86_64 5.7.1-1.el7.elrepo elrepo-kernel

kernel-ml-tools.x86_64 5.7.1-1.el7.elrepo elrepo-kernel

kernel-ml-tools-libs.x86_64 5.7.1-1.el7.elrepo elrepo-kernel

kernel-ml-tools-libs-devel.x86_64 5.7.1-1.el7.elrepo elrepo-kernel

perf.x86_64 5.7.1-1.el7.elrepo elrepo-kernel

python-perf.x86_64 5.7.1-1.el7.elrepo elrepo-kernel

## Install the specified kernel version:
# yum install -y kernel-lt-4.4.226-1.el7.elrepo --enablerepo=elrepo-kernel
## List the kernels available on the system
# cat /boot/grub2/grub.cfg | grep menuentry
menuentry 'CentOS Linux (4.4.226-1.el7.elrepo.x86_64) 7 (Core)' --class centos ...(omitted)
menuentry 'CentOS Linux (3.10.0-862.el7.x86_64) 7 (Core)' --class centos ...(omitted)
## Set the new kernel as the default boot entry
# grub2-set-default "CentOS Linux (4.4.226-1.el7.elrepo.x86_64) 7 (Core)"
## Check the default boot entry
# grub2-editenv list
saved_entry=CentOS Linux (4.4.226-1.el7.elrepo.x86_64) 7 (Core)

 

Reboot the machine!

Note: the kernel is upgraded because Kubernetes 1.18 and later use a newer IPVS implementation that needs kernel support. This setup runs CentOS with kernel 3.10, whose IPVS module is outdated and lacks the dependencies required by the new Kubernetes IPVS code.
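After the reboot it is worth confirming that the node is actually running the new kernel and that the IPVS modules load cleanly. A minimal sanity check (a sketch; module names depend on the kernel you installed):

# uname -r    # should now report 4.4.226-1.el7.elrepo.x86_64
# modprobe ip_vs
# lsmod | grep -E "ip_vs|nf_conntrack"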

 

 

1. Set SELinux to disabled mode
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

 

2. Stop the system iptables and firewalld.service services

# yum install -y ipvsadm ipset conntrack socat jq sysstat iptables iptables-services libseccomp

 

① Stop the services and disable them from starting at boot

systemctl stop firewalld

systemctl disable firewalld

systemctl stop iptables.service

systemctl disable iptables.service

② Flush the firewall rules

iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat

iptables -P FORWARD ACCEPT

 

3. Disable swap

vim /etc/fstab

#/dev/mapper/centos-swap swap swap defaults 0 0

Comment out the swap entry.

Or: sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

 

4. Enable the bridge netfilter hooks for iptables/ip6tables; on most systems this is already enabled by default.

First, load the required kernel modules on all nodes, including IPVS support.

# socat is required by the kubectl port-forward command

Run: modprobe br_netfilter   # load the module

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- br_netfilter
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && \
bash /etc/sysconfig/modules/ipvs.modules && \
lsmod | grep -E "ip_vs|nf_conntrack_ipv4"

 

Check:

# ll -sh /proc/sys/net/bridge

total 0

0 -rw-r--r-- 1 root root 0 Mar 12 10:44 bridge-nf-call-arptables

0 -rw-r--r-- 1 root root 0 Mar 12 10:44 bridge-nf-call-ip6tables

0 -rw-r--r-- 1 root root 0 Mar 12 10:44 bridge-nf-call-iptables

0 -rw-r--r-- 1 root root 0 Mar 12 10:44 bridge-nf-filter-pppoe-tagged

0 -rw-r--r-- 1 root root 0 Mar 12 10:44 bridge-nf-filter-vlan-tagged

0 -rw-r--r-- 1 root root 0 Mar 12 10:44 bridge-nf-pass-vlan-input-dev

 

Method 1:

echo "1" > /proc/sys/net/bridge/bridge-nf-call-ip6tables

echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables

Method 2:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system

 

5. Cluster environment planning

Since this is a test environment, only local hosts-file resolution is used instead of a DNS service. Add the following to /etc/hosts:

172.31.0.112 master001

172.31.0.113 node001

172.31.0.114 node002

172.31.0.115 node003

172.31.0.113 etcd-node1

172.31.0.114 etcd-node2

172.31.0.115 etcd-node3

 

echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536" >> /etc/security/limits.conf
echo "* hard nproc 65536" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf

 

 

Disable dnsmasq (optional)

If dnsmasq is enabled on the Linux system (e.g. in a GUI environment), the system DNS server is set to 127.0.0.1, which prevents Docker containers from resolving domain names, so it must be stopped:

service dnsmasq stop

systemctl disable dnsmasq

 

 

6. NTP service. Not covered in detail here; the ntpd server is deployed on master001 and the other nodes sync from it.

Set the time zone

cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

yum -y install ntp

Configure the NTP server

vim /etc/ntp.conf

restrict 172.31.0.0 mask 255.255.0.0 nomodify notrap

Save, exit, and restart the service

systemctl restart ntpd.service

systemctl enable ntpd.service

 

Configure the NTP clients

vim /etc/ntp.conf

server 172.31.0.112   # or: server master001

Save, exit, and restart the service

systemctl restart ntpd.service

systemctl enable ntpd.service

Or use a cron job instead:

*/5 * * * * root ntpdate master001 > /dev/null 2>&1
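To confirm that a client is actually syncing from master001, a quick check on any node (a sketch; the peer list will differ per host):

# ntpq -p                  # master001 should appear in the peer list
# ntpdate -q master001     # query-only: shows the offset without changing the clock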

 

 

 

II. Configure the docker-ce Repository

 

Official repo:
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Or

Aliyun mirror:
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

 

Run yum clean all to clear the local yum cache.

Check that the docker-ce repo has been added.

Verify the repo is usable, then sync it to all the other nodes.
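A minimal way to check the repo and push it to the other nodes (a sketch; it assumes yum-config-manager wrote the file to /etc/yum.repos.d/docker-ce.repo, which is its default behaviour):

# yum repolist | grep docker-ce
# for n in node001 node002 node003; do scp /etc/yum.repos.d/docker-ce.repo $n:/etc/yum.repos.d/; done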

 

 

 

 

III. Install docker-ce

 

Note: Docker must be installed on every node (the master node as well as the node machines)!

1. Install

yum -y install docker-ce

 

2. Start the service

systemctl start docker.service

systemctl enable docker.service

 

3. Check the version

# docker version

Client: Docker Engine - Community

Version: 19.03.8

API version: 1.40

Go version: go1.12.17

Git commit: afacb8b

Built: Wed Mar 11 01:27:04 2020

OS/Arch: linux/amd64

Experimental: false

 

Server: Docker Engine - Community

Engine:

Version: 19.03.8

API version: 1.40 (minimum version 1.12)

Go version: go1.12.17

Git commit: afacb8b

Built: Wed Mar 11 01:25:42 2020

OS/Arch: linux/amd64

Experimental: false

containerd:

Version: 1.2.13

GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429

runc:

Version: 1.0.0-rc10

GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd

docker-init:

Version: 0.18.0

GitCommit: fec3683

 

 

 

IV. Prepare the deployment directories and create the certificates.

 

1. Create the directories for the application files:

[root@master001 opt]# mkdir -p /opt/kubernetes/{cfg,bin,ssl,log}

[root@node001 opt]# mkdir -p /opt/kubernetes/{cfg,bin,ssl,log}

[root@node002 opt]# mkdir -p /opt/kubernetes/{cfg,bin,ssl,log}

[root@node003 opt]# mkdir -p /opt/kubernetes/{cfg,bin,ssl,log}

 

[root@master001 opt]# ssh-copy-id node001

[root@master001 opt]# ssh-copy-id node002

[root@master001 opt]# ssh-copy-id node003

 

2. Download the Kubernetes package:

We use version v1.18.2:

https://dl.k8s.io/v1.18.2/kubernetes-server-linux-amd64.tar.gz

 

# The server, client, and node binaries are all extracted into the kubernetes directory:

[root@master001 src]# tar -zxvf kubernetes-server-linux-amd64.tar.gz

[root@master001 kubernetes]# ls

addons kubernetes-src.tar.gz LICENSES server

 

 

# Add the kubernetes environment variables on every node

vim /etc/profile

export KUBERNETES_HOME=/opt/kubernetes

export PATH=$KUBERNETES_HOME/bin:$PATH
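After editing /etc/profile, reload it and confirm the directory is on PATH (a quick sanity check):

# source /etc/profile
# echo $PATH | tr ':' '\n' | grep kubernetes    # should print /opt/kubernetes/bin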

 

3. Create and distribute the CA certificate

Since Kubernetes 1.8, the components of the cluster use TLS certificates to encrypt their communication, and every Kubernetes cluster needs its own independent CA. The CA can be built with easyrsa, openssl, or cfssl. cfssl is used here; it is currently the most widely used option and relatively simple to configure, since everything certificate-related is described in JSON. The cfssl version used here is 1.2.

(1) Install CFSSL (it only needs to be installed on the master host!)

curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl

curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson

curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo

chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo

 

(2) Create the JSON config file used to generate the CA files

[root@master001 src]# mkdir ssl

[root@master001 src]# cd ssl/

[root@master001 ssl]# pwd

/usr/local/src/ssl

 

# vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}

 

(3) Create the JSON config file for the CA certificate signing request (CSR)

vim ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

 

(4) Generate the CA certificate (ca.pem) and key (ca-key.pem)

# Generate the certificate and key

[root@master001 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca

2020/05/06 17:22:29 [INFO] generating a new CA key and certificate from CSR

2020/05/06 17:22:29 [INFO] generate received request

2020/05/06 17:22:29 [INFO] received CSR

2020/05/06 17:22:29 [INFO] generating key: rsa-2048

2020/05/06 17:22:30 [INFO] encoded CSR

2020/05/06 17:22:30 [INFO] signed certificate with serial number 641010558117528867312481899616046442222031173099

 

[root@master001 ssl]# ll

total 20

-rw-r--r-- 1 root root 290 May 6 17:19 ca-config.json

-rw-r--r-- 1 root root 1005 May 6 17:22 ca.csr

-rw-r--r-- 1 root root 210 May 6 17:22 ca-csr.json

-rw------- 1 root root 1675 May 6 17:22 ca-key.pem

-rw-r--r-- 1 root root 1363 May 6 17:22 ca.pem

 

(5) Distribute the certificates

[root@master001 ssl]# cp ca.csr ca.pem ca-key.pem ca-config.json /opt/kubernetes/ssl

[root@master001 ssl]# scp ca.csr ca.pem ca-key.pem ca-config.json node001:/opt/kubernetes/ssl

[root@master001 ssl]# scp ca.csr ca.pem ca-key.pem ca-config.json node002:/opt/kubernetes/ssl

[root@master001 ssl]# scp ca.csr ca.pem ca-key.pem ca-config.json node003:/opt/kubernetes/ssl
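To double-check what was distributed, the cfssl-certinfo tool installed earlier can dump the CA certificate on any node (a quick sanity check; output omitted):

# cfssl-certinfo -cert /opt/kubernetes/ssl/ca.pem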

 

 

 

V. Install etcd

All persistent state is stored in etcd as key/value pairs. Similar to ZooKeeper, etcd provides distributed coordination. The Kubernetes components are called stateless precisely because they keep their data in etcd. Since etcd supports clustering, it is deployed on three hosts here.

(1) Prepare the etcd package

wget https://github.com/coreos/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz

 

[root@master001 src]# tar -zxvf etcd-v3.3.10-linux-amd64.tar.gz

[root@master001 src]# cd etcd-v3.3.10-linux-amd64

[root@master001 etcd-v3.3.10-linux-amd64]# ls

Documentation etcd etcdctl README-etcdctl.md README.md READMEv2-etcdctl.md

# Two binaries matter here: etcd and etcdctl; etcdctl is the command-line client for etcd

[root@master001 etcd-v3.3.10-linux-amd64]# cp etcd etcdctl /opt/kubernetes/bin/

[root@master001 etcd-v3.3.10-linux-amd64]# scp etcd etcdctl node001:/opt/kubernetes/bin/

[root@master001 etcd-v3.3.10-linux-amd64]# scp etcd etcdctl node002:/opt/kubernetes/bin/
[root@master001 etcd-v3.3.10-linux-amd64]# scp etcd etcdctl node003:/opt/kubernetes/bin/

 

(2) Create the etcd certificate signing request

[root@master001 ]# cd /usr/local/src/ssl/

vim etcd-csr.json

# The IPs listed under "hosts" are the addresses of the etcd cluster members
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "172.31.0.113",
    "172.31.0.114",
    "172.31.0.115"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

 

(3) Generate the etcd certificate and private key

[root@master001 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \

-ca-key=/opt/kubernetes/ssl/ca-key.pem \

-config=/opt/kubernetes/ssl/ca-config.json \

-profile=kubernetes etcd-csr.json | cfssljson -bare etcd

2020/05/06 17:40:49 [INFO] generate received request

2020/05/06 17:40:49 [INFO] received CSR

2020/05/06 17:40:49 [INFO] generating key: rsa-2048

2020/05/06 17:40:50 [INFO] encoded CSR

2020/05/06 17:40:50 [INFO] signed certificate with serial number 100309427538013646669998055350817020207855832803

2020/05/06 17:40:50 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for

websites. For more information see the Baseline Requirements for the Issuance and Management

of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);

specifically, section 10.2.3 ("Information Requirements").

 

[root@master001 ssl]# ll -sh etcd*

4.0K -rw-r--r-- 1 root root 1.1K May 6 17:40 etcd.csr

4.0K -rw-r--r-- 1 root root 284 May 6 17:38 etcd-csr.json

4.0K -rw------- 1 root root 1.7K May 6 17:40 etcd-key.pem

4.0K -rw-r--r-- 1 root root 1.5K May 6 17:40 etcd.pem

 

(4) Copy the certificates into /opt/kubernetes/ssl

[root@master001 ssl]# cp etcd*.pem /opt/kubernetes/ssl/

[root@master001 ssl]# scp etcd*.pem node001:/opt/kubernetes/ssl/

[root@master001 ssl]# scp etcd*.pem node002:/opt/kubernetes/ssl/

[root@master001 ssl]# scp etcd*.pem node003:/opt/kubernetes/ssl/

(5) Create the etcd configuration file

vim /opt/kubernetes/cfg/etcd.conf

#[member]

ETCD_NAME="etcd-node1"   # etcd member name; ETCD_NAME must be different on every node

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"   # etcd data directory

ETCD_LISTEN_PEER_URLS="https://172.31.0.113:2380"   # peer listen URL; change on each node

ETCD_LISTEN_CLIENT_URLS="https://172.31.0.113:2379,https://127.0.0.1:2379"

# client (external) listen URLs; change on each node

#[cluster]

ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.31.0.113:2380"

ETCD_INITIAL_CLUSTER="etcd-node1=https://172.31.0.113:2380,etcd-node2=https://172.31.0.114:2380,etcd-node3=https://172.31.0.115:2380"   # full cluster member list

ETCD_INITIAL_CLUSTER_STATE="new"

ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"

ETCD_ADVERTISE_CLIENT_URLS="https://172.31.0.113:2379"

#[security]

CLIENT_CERT_AUTH="true"

ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"

ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"

ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"

PEER_CLIENT_CERT_AUTH="true"

ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"

ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"

ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"

 

 

Note: the etcd data directory is not created automatically, so create it on all etcd nodes here and then start etcd.

[root@node001 src]# mkdir -p /var/lib/etcd/default.etcd

[root@node002 src]# mkdir -p /var/lib/etcd/default.etcd

[root@node003 src]# mkdir -p /var/lib/etcd/default.etcd

 

(6) Create the etcd systemd service

[root@node001 src]# vim /etc/systemd/system/etcd.service

[Unit]

Description=Etcd Server

After=network.target

 

[Service]


WorkingDirectory=/var/lib/etcd

EnvironmentFile=-/opt/kubernetes/cfg/etcd.conf

ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /opt/kubernetes/bin/etcd"

Type=notify

Restart=on-failure

LimitNOFILE=65536

 

[Install]

WantedBy=multi-user.target

 

Copy the etcd.service file to the other two nodes:

[root@node001 src]# scp /etc/systemd/system/etcd.service node002:/etc/systemd/system/

[root@node001 src]# scp /etc/systemd/system/etcd.service node003:/etc/systemd/system/

 

(7) Reload systemd and manage the service

Reload the unit files: systemctl daemon-reload

Start:                  systemctl start etcd

Status:                 systemctl status etcd

Enable at boot:         systemctl enable etcd

 

(8) Verify the etcd cluster:

[root@node001 cfg]# netstat -tulnp |grep etcd   # check on every node that ports 2379 and 2380 are listening

tcp 0 0 172.31.0.113:2379 0.0.0.0:* LISTEN 6122/etcd

tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 6122/etcd

tcp 0 0 172.31.0.113:2380 0.0.0.0:* LISTEN 6122/etcd

 

[root@master001 ssl]# etcdctl --endpoints=https://172.31.0.113:2379 \

--ca-file=/opt/kubernetes/ssl/ca.pem \

--cert-file=/opt/kubernetes/ssl/etcd.pem \

--key-file=/opt/kubernetes/ssl/etcd-key.pem cluster-health

member 66251b1fc2333a8a is healthy: got healthy result from https://172.31.0.114:2379

member 91f46226c0c40c66 is healthy: got healthy result from https://172.31.0.115:2379

member ff2f70247467ad9e is healthy: got healthy result from https://172.31.0.113:2379

cluster is healthy   # the etcd cluster is healthy!

 

Notes:

flannel v0.11 does not work with etcd v3.4.3; it works with etcd v3.3.10.

etcd has two API versions, v2 and v3; they take different command parameters and return different results.

If flannel v0.11 is used with etcd v3.4.3, the commands change (flannel stores its own subnet information in etcd, so it must be able to connect to etcd and write the predefined subnet), and the key can in fact be written. But when flannel starts it fails with: Couldn't fetch network config: client: response is invalid json. The endpoint is probably not valid etcd cluster endpoint.

That error is the result of a flannel/etcd version mismatch.

 

Notes on etcd 3.4 (this is why a version of 3.4 or newer is not used here):

  • In etcd 3.4, ETCDCTL_API=3 etcdctl and etcd --enable-v2=false are the defaults. To use the v2 API you must set the ETCDCTL_API environment variable when running etcdctl, e.g. ETCDCTL_API=2 etcdctl (see the example below).
  • etcd 3.4 reads its parameters from environment variables automatically, so a parameter already present in the EnvironmentFile must not be repeated in the ExecStart flags; configuring both triggers errors like: etcd: conflicting environment variable "ETCD_NAME" is shadowed by corresponding command-line flag (either unset environment variable or disable flag)
  • flannel talks to etcd through the v2 API, while Kubernetes uses the v3 API
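For illustration, against an etcd 3.4 cluster the same health check would have to pick the API version explicitly (a hedged sketch; this deployment stays on etcd 3.3.10, where the v2 API is still the etcdctl default):

## v2-style query (the API flannel relies on)
ETCDCTL_API=2 etcdctl --endpoints=https://172.31.0.113:2379 \
  --ca-file=/opt/kubernetes/ssl/ca.pem \
  --cert-file=/opt/kubernetes/ssl/etcd.pem \
  --key-file=/opt/kubernetes/ssl/etcd-key.pem cluster-health

## v3-style query (the API Kubernetes uses; note the different flag names)
ETCDCTL_API=3 etcdctl --endpoints=https://172.31.0.113:2379 \
  --cacert=/opt/kubernetes/ssl/ca.pem \
  --cert=/opt/kubernetes/ssl/etcd.pem \
  --key=/opt/kubernetes/ssl/etcd-key.pem endpoint health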

 

 

 

VI. Master node binary deployment.

 

1. Deploy the Kubernetes API Server.

(1) Prepare the package.

wget https://dl.k8s.io/v1.18.2/kubernetes-server-linux-amd64.tar.gz

[root@master001 kubernetes]# cp server/bin/kube-apiserver /opt/kubernetes/bin/

[root@master001 kubernetes]# cp server/bin/kube-controller-manager /opt/kubernetes/bin/

[root@master001 kubernetes]# cp server/bin/kube-scheduler /opt/kubernetes/bin/

Note: these binaries only need to be copied on master001

 

(2) Create the JSON config file for the CSR

[root@master001 cfg]# cd /usr/local/src/ssl/

[root@master001 ssl]# vim kubernetes-csr.json

# 172.31.0.112 is the master's IP; 10.1.0.1 is the cluster IP that the kubernetes service gets from the 10.1.0.0/16 service range
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "172.31.0.112",
    "10.1.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

 

(3) Generate the kubernetes certificate and private key

[root@master001 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \

-ca-key=/opt/kubernetes/ssl/ca-key.pem \

-config=/opt/kubernetes/ssl/ca-config.json \

-profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

2020/05/06 20:17:48 [INFO] generate received request

2020/05/06 20:17:48 [INFO] received CSR

2020/05/06 20:17:48 [INFO] generating key: rsa-2048

2020/05/06 20:17:49 [INFO] encoded CSR

2020/05/06 20:17:49 [INFO] signed certificate with serial number 685043223916614112359651660989710923919210347911

2020/05/06 20:17:49 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for

websites. For more information see the Baseline Requirements for the Issuance and Management

of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);

specifically, section 10.2.3 ("Information Requirements").

 

[root@master001 ssl]# cp kubernetes*.pem /opt/kubernetes/ssl/

[root@master001 ssl]# scp kubernetes*.pem node001:/opt/kubernetes/ssl/

[root@master001 ssl]# scp kubernetes*.pem node002:/opt/kubernetes/ssl/

[root@master001 ssl]# scp kubernetes*.pem node003:/opt/kubernetes/ssl/

 

(4) Create the client token file used by kube-apiserver

[root@master001 ssl]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '

898e7424a68e83a796c6a0f4188879d3

[root@master001 ssl]# vim /opt/kubernetes/ssl/bootstrap-token.csv

898e7424a68e83a796c6a0f4188879d3,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

 

(5) Create the basic username/password authentication file

[root@master001 ssl]# vim /opt/kubernetes/ssl/basic-auth.csv

admin,admin,1

readonly,readonly,2

 

(6) Deploy the Kubernetes API Server

[root@master001 ssl]# vim /usr/lib/systemd/system/kube-apiserver.service

[Unit]

Description=Kubernetes API Server

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=network.target

After=etcd.service

 

[Service]

ExecStart=/opt/kubernetes/bin/kube-apiserver \

--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \

--bind-address=172.31.0.112 \

--insecure-bind-address=127.0.0.1 \

--authorization-mode=Node,RBAC \

--runtime-config=rbac.authorization.k8s.io/v1 \

--kubelet-https=true \

--anonymous-auth=false \

--basic-auth-file=/opt/kubernetes/ssl/basic-auth.csv \

--enable-bootstrap-token-auth \

--token-auth-file=/opt/kubernetes/ssl/bootstrap-token.csv \

--service-cluster-ip-range=10.1.0.0/16 \

--service-node-port-range=20000-40000 \

--tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \

--tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \

--client-ca-file=/opt/kubernetes/ssl/ca.pem \

--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \

--etcd-cafile=/opt/kubernetes/ssl/ca.pem \

--etcd-certfile=/opt/kubernetes/ssl/kubernetes.pem \

--etcd-keyfile=/opt/kubernetes/ssl/kubernetes-key.pem \

--etcd-servers=https://172.31.0.113:2379,https://172.31.0.114:2379,https://172.31.0.115:2379 \

--enable-swagger-ui=true \

--allow-privileged=true \

--audit-log-maxage=30 \

--audit-log-maxbackup=3 \

--audit-log-maxsize=100 \

--audit-log-path=/opt/kubernetes/log/api-audit.log \

--event-ttl=1h \

--v=2 \

--logtostderr=false \

--log-dir=/opt/kubernetes/log

Restart=on-failure

RestartSec=5

Type=notify

LimitNOFILE=65536

 

[Install]

WantedBy=multi-user.target

Save and exit!

 

(7) Start the API Server service

[root@master001 ssl]# systemctl daemon-reload

[root@master001 ssl]# systemctl start kube-apiserver

[root@master001 ssl]# systemctl enable kube-apiserver

[root@master001 ssl]# netstat -nultp |grep "kube-apiserver"

tcp 0 0 172.31.0.112:6443 0.0.0.0:* LISTEN 7081/kube-apiserver

tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 7081/kube-apiserver

The listening ports show that the api-server listens on 6443 and also on local port 8080; the local 8080 port is the one kube-scheduler and kube-controller-manager connect to.
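A quick liveness probe against the server (a sketch; the secure port needs client certificates, so the local insecure port is the simplest check at this stage):

# curl http://127.0.0.1:8080/healthz     # should return "ok"
# curl http://127.0.0.1:8080/version     # prints the apiserver build info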

 

 

2. Deploy the Controller Manager service.

The controller-manager consists of a set of controllers; it watches the cluster state through the apiserver and keeps the cluster in the desired state.

 

(1) Create the Controller Manager service unit

vim /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/kubernetes/bin/kube-controller-manager \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --allocate-node-cidrs=true \
  --service-cluster-ip-range=10.1.0.0/16 \
  --cluster-cidr=10.2.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --root-ca-file=/opt/kubernetes/ssl/ca.pem \
  --leader-elect=true \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

 

(2) Start the Controller Manager

systemctl daemon-reload

systemctl enable kube-controller-manager

systemctl start kube-controller-manager

systemctl status kube-controller-manager

 

[root@master001 ssl]# netstat -tulnp |grep kube-controlle

tcp 0 0 127.0.0.1:10252 0.0.0.0:* LISTEN 7576/kube-controlle

tcp6 0 0 :::10257 :::* LISTEN 7576/kube-controlle

 

The listening ports show that kube-controller-manager listens on local port 10252; it cannot be reached directly from outside and is accessed through the api-server.

 

 

3. Deploy the Kubernetes Scheduler

  • The scheduler assigns Pods to the cluster's node machines
  • It watches kube-apiserver for Pods that have not yet been assigned a Node
  • It assigns nodes to those Pods according to the scheduling policy

 

(1) Create the Scheduler service unit

vim /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/kubernetes/bin/kube-scheduler \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --leader-elect=true \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

 

(2) Start the Scheduler service

systemctl daemon-reload

systemctl enable kube-scheduler

systemctl start kube-scheduler

systemctl status kube-scheduler

 

[root@master001 ssl]# netstat -tulnp |grep kube-scheduler

tcp 0 0 127.0.0.1:10251 0.0.0.0:* LISTEN 7637/kube-scheduler

tcp6 0 0 :::10259 :::* LISTEN 7637/kube-scheduler

 

Likewise, kube-scheduler listens only on local port 10251; it cannot be reached directly from outside and is also accessed through the api-server.

 

 

4. Deploy the kubectl command-line tool

kubectl is used for day-to-day management of the cluster. To manage Kubernetes it must talk to the Kubernetes components, and that requires a certificate. kubectl is set up separately here because it needs its own client certificate, whereas kube-apiserver, kube-controller-manager, and kube-scheduler were simply started as services above; the latter two reach the apiserver over the local insecure port and did not need client certificates of their own.

(1) Prepare the binary

[root@master001 src]# cp kubernetes/server/bin/kubectl /opt/kubernetes/bin/

 

(2) Create the admin certificate signing request

[root@master001 kubernetes]# cd /usr/local/src/ssl/

[root@master001 ssl]# vim admin-csr.json

{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

 

(3) Generate the admin certificate and private key

[root@master001 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \

-ca-key=/opt/kubernetes/ssl/ca-key.pem \

-config=/opt/kubernetes/ssl/ca-config.json \

-profile=kubernetes admin-csr.json | cfssljson -bare admin

2020/05/07 10:41:15 [INFO] generate received request

2020/05/07 10:41:15 [INFO] received CSR

2020/05/07 10:41:15 [INFO] generating key: rsa-2048

2020/05/07 10:41:16 [INFO] encoded CSR

2020/05/07 10:41:16 [INFO] signed certificate with serial number 543502041881597444088255711460536075856227458945

2020/05/07 10:41:16 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for

websites. For more information see the Baseline Requirements for the Issuance and Management

of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);

specifically, section 10.2.3 ("Information Requirements").

 

[root@master01 ssl]# ll -sh admin*

4.0K -rw-r--r-- 1 root root 1013 May 7 10:41 admin.csr

4.0K -rw-r--r-- 1 root root 231 May 7 10:40 admin-csr.json

4.0K -rw------- 1 root root 1.7K May 7 10:41 admin-key.pem

4.0K -rw-r--r-- 1 root root 1.4K May 7 10:41 admin.pem

 

[root@master001 ssl]# cp admin*.pem /opt/kubernetes/ssl/

 

(4) Set the cluster parameters

[root@master001 ssl]# kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://172.31.0.112:6443

Cluster "kubernetes" set.

 

(5) Set the client authentication parameters

[root@master001 ssl]# kubectl config set-credentials admin \
  --client-certificate=/opt/kubernetes/ssl/admin.pem \
  --embed-certs=true \
  --client-key=/opt/kubernetes/ssl/admin-key.pem

User "admin" set.

 

(6) Set the context parameters

[root@master001 ssl]# kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin

Context "kubernetes" created.

 

(7) Use the default context

[root@master001 ssl]# kubectl config use-context kubernetes

Switched to context "kubernetes".

 

Note: steps (4) through (7) generate the config file under the home directory (/root/.kube/config). kubectl uses this file to talk to the API server, which also means that if kubectl is needed on another node, this file must be copied to that node.
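For example, to use kubectl from node001 as well (a sketch; it assumes the kubectl binary has already been copied into /opt/kubernetes/bin on that node):

[root@master001 ~]# ssh node001 "mkdir -p /root/.kube"
[root@master001 ~]# scp /root/.kube/config node001:/root/.kube/config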

 

[root@master01 ssl]# cat /root/.kube/config

apiVersion: v1

clusters:

- cluster:

certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR3akNDQXFxZ0F3SUJBZ0lVQS9DRjNCZFE3dFJ4clN1TW53YkpxSndISFdnd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1p6RUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0ZOb1lXNW5hR0Zw

..................

server: https://172.31.0.112:6443

name: kubernetes

contexts:

- context:

cluster: kubernetes

user: admin

name: kubernetes

current-context: kubernetes

kind: Config

preferences: {}

..................

 

 

(8) Verify that kubectl works

[root@master001 ssl]# kubectl get cs

NAME STATUS MESSAGE ERROR

scheduler Healthy ok

controller-manager Healthy ok

etcd-1 Healthy {"health":"true"}

etcd-0 Healthy {"health":"true"}

etcd-2 Healthy {"health":"true"}

 

 

 

 

VII. Node binary deployment.

 

1. Deploy kubelet

(1) Prepare the binaries.

[root@master001 bin]# scp kubelet kube-proxy /opt/kubernetes/bin/

[root@master001 bin]# scp kubelet kube-proxy node001:/opt/kubernetes/bin/

[root@master001 bin]# scp kubelet kube-proxy node002:/opt/kubernetes/bin/

[root@master001 bin]# scp kubelet kube-proxy node003:/opt/kubernetes/bin/

Note: copy them from the extracted kubernetes-server-linux-amd64.tar.gz on the master node to the node machines.

 

(2) Create the role binding.

Note: create it on the master node

When kubelet starts it sends a TLS bootstrap request to kube-apiserver, so the bootstrap token has to be bound to the corresponding role first; only then does the kubelet-bootstrap user have permission to create such requests.

[root@master001 ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

clusterrolebinding "kubelet-bootstrap" created

 

(3) Create the kubelet bootstrapping kubeconfig file and set the cluster parameters.

Note: create it on the master node

[root@master001 ~]# cd /usr/local/src/ssl/

[root@master001 ssl]# kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://172.31.0.112:6443 \
  --kubeconfig=bootstrap.kubeconfig

Cluster "kubernetes" set.

 

(4) Set the client authentication parameters.

Note: run on the master node

Look up the bootstrap token in the client token file created for kube-apiserver earlier:

[root@master001 ssl]# cat /opt/kubernetes/ssl/bootstrap-token.csv

898e7424a68e83a796c6a0f4188879d3,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

 

[root@master001 ssl]# kubectl config set-credentials kubelet-bootstrap \
  --token=898e7424a68e83a796c6a0f4188879d3 \
  --kubeconfig=bootstrap.kubeconfig

User "kubelet-bootstrap" set.

 

(5) Set the context parameters

Note: run on the master node

[root@master001 ssl]# kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

Context "default" created.

Note: bootstrap.kubeconfig is the file created in step (4) above; it is in the current directory, /usr/local/src/ssl!

 

(6) Use the default context

Note: run on the master node

[root@master001 ssl]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

Switched to context "default".

[root@master001 ssl]# cp bootstrap.kubeconfig /opt/kubernetes/cfg/

[root@master001 ssl]# scp bootstrap.kubeconfig node001:/opt/kubernetes/cfg/

[root@master001 ssl]# scp bootstrap.kubeconfig node002:/opt/kubernetes/cfg/

[root@master001 ssl]# scp bootstrap.kubeconfig node003:/opt/kubernetes/cfg/

 

 

 

2. Deploy kubelet: set up CNI support

Note: all of the following steps must be performed on every Node.

(1) Configure CNI

[root@node001 ~]# mkdir -p /etc/cni/net.d

[root@node001 ~]# vim /etc/cni/net.d/10-flannel.conflist

{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}

Save and exit!

 

(2) Create the kubelet data directory

[root@node001 ~]# mkdir /var/lib/kubelet

 

(3) Create the kubelet service unit

[root@node001 ~]# vim /usr/lib/systemd/system/kubelet.service

# Note: change --address and --hostname-override to the actual IP of each node.
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \
  --address=172.31.0.113 \
  --hostname-override=172.31.0.113 \
  --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 \
  --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
  --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
  --cert-dir=/opt/kubernetes/ssl \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/kubernetes/bin/cni \
  --cluster-dns=10.1.0.2 \
  --cluster-domain=cluster.local. \
  --hairpin-mode hairpin-veth \
  --fail-swap-on=false \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice
Restart=on-failure
RestartSec=5

 

[Install]

WantedBy=multi-user.target

 

Save and exit!

Note: kubelet.kubeconfig is generated automatically once the service has started successfully.

Aliyun mirror for the pause image: registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.0
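If the nodes cannot pull mirrorgooglecontainers/pause-amd64:3.0 directly, one workaround (a sketch using the Aliyun mirror named above) is to pull the mirrored image and retag it locally on every node:

# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.0
# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.0 mirrorgooglecontainers/pause-amd64:3.0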

 

 

(4) Start kubelet

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

 

 

(5) Check the CSR requests.

Note: run on the master.

[root@master001 bin]# kubectl get csr

NAME AGE SIGNERNAME REQUESTOR CONDITION

node-csr-ETvvxync12zx8KOXR4C6hk5B-oDza3FrP86IMuzq9PM 69s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending

node-csr-exgyQM7l4bVV7IWd45_YHdcys5W_Rh-zWf19uPckjHU 2m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending

node-csr-u_gnS21i81yRkoMVafVWUJSAvaaPXGCSvbblYqdJRa4 3m59s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending

 

(6) Approve the kubelet TLS certificate requests.

Note: run on the master.

[root@master001 bin]# kubectl get csr|grep 'Pending' | awk 'NR>0{print $1}'| xargs kubectl certificate approve

certificatesigningrequest.certificates.k8s.io/node-csr-ETvvxync12zx8KOXR4C6hk5B-oDza3FrP86IMuzq9PM approved

certificatesigningrequest.certificates.k8s.io/node-csr-exgyQM7l4bVV7IWd45_YHdcys5W_Rh-zWf19uPckjHU approved

certificatesigningrequest.certificates.k8s.io/node-csr-u_gnS21i81yRkoMVafVWUJSAvaaPXGCSvbblYqdJRa4 approved

 

[root@master001 bin]# kubectl get csr

NAME AGE SIGNERNAME REQUESTOR CONDITION

node-csr-ETvvxync12zx8KOXR4C6hk5B-oDza3FrP86IMuzq9PM 27m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Approved,Issued

node-csr-exgyQM7l4bVV7IWd45_YHdcys5W_Rh-zWf19uPckjHU 28m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Approved,Issued

node-csr-u_gnS21i81yRkoMVafVWUJSAvaaPXGCSvbblYqdJRa4 30m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Approved,Issued

 

[root@master001 bin]# kubectl get nodes

NAME STATUS ROLES AGE VERSION

172.31.0.113 NotReady <none> 8m7s v1.18.2

172.31.0.114 NotReady <none> 8m7s v1.18.2

172.31.0.115 NotReady <none> 8m7s v1.18.2

All the nodes have successfully joined the cluster.

They show NotReady (not ready) because no network plugin has been configured yet; once the network plugin is in place they become Ready.

Running systemctl status kubelet on a node shows the same cause: the network plugin (flannel) has not been deployed yet. Two quick ways to see the exact reason are shown below.
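For example, either of these shows why a node is NotReady (a sketch; look for the message about the missing CNI network configuration):

[root@master001 ~]# kubectl describe node 172.31.0.113 | grep -A2 -i ready
[root@node001 ~]# journalctl -u kubelet --no-pager | tail -n 20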

 

 

3. Deploy the Kubernetes Proxy

(1) Configure kube-proxy to use LVS (IPVS)

Note: run on each node.

[root@node001 ~]# yum install -y ipvsadm ipset conntrack

[root@node002 ~]# yum install -y ipvsadm ipset conntrack

[root@node003 ~]# yum install -y ipvsadm ipset conntrack

 

(2) Create the kube-proxy certificate request

Note: run on the master node.

[root@master001 ~]# cd /usr/local/src/ssl/

[root@master001 ssl]# vim kube-proxy-csr.json

{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

Save and exit!

 

(3) Generate the certificate

[root@master001 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \

-ca-key=/opt/kubernetes/ssl/ca-key.pem \

-config=/opt/kubernetes/ssl/ca-config.json \

-profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

2020/05/07 13:03:52 [INFO] generate received request

2020/05/07 13:03:52 [INFO] received CSR

2020/05/07 13:03:52 [INFO] generating key: rsa-2048

2020/05/07 13:03:52 [INFO] encoded CSR

2020/05/07 13:03:52 [INFO] signed certificate with serial number 688965016078159362647696629152821365848992245226

2020/05/07 13:03:52 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for

websites. For more information see the Baseline Requirements for the Issuance and Management

of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);

specifically, section 10.2.3 ("Information Requirements").

 

(4) Distribute the certificates to all Node machines

[root@master001 ssl]# cp kube-proxy*.pem /opt/kubernetes/ssl/

[root@master001 ssl]# scp kube-proxy*.pem node001:/opt/kubernetes/ssl/

[root@master001 ssl]# scp kube-proxy*.pem node002:/opt/kubernetes/ssl/

[root@master001 ssl]# scp kube-proxy*.pem node003:/opt/kubernetes/ssl/

 

(5) Create the kube-proxy kubeconfig file

[root@master001 ssl]# kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://172.31.0.112:6443 \
  --kubeconfig=kube-proxy.kubeconfig

Cluster "kubernetes" set.

 

[root@master001 ssl]# kubectl config set-credentials kube-proxy \
  --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
  --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

User "kube-proxy" set.

 

[root@master001 ssl]# kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

Context "default" created.

 

[root@master001 ssl]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Switched to context "default".

 

(6) Distribute the kubeconfig file

[root@master001 ssl]# cp kube-proxy.kubeconfig /opt/kubernetes/cfg/

[root@master001 ssl]# scp kube-proxy.kubeconfig node001:/opt/kubernetes/cfg/

[root@master001 ssl]# scp kube-proxy.kubeconfig node002:/opt/kubernetes/cfg/

[root@master001 ssl]# scp kube-proxy.kubeconfig node003:/opt/kubernetes/cfg/

 

(7) Create the kube-proxy service configuration

The working directory must be created on every node: mkdir /var/lib/kube-proxy

[root@master001 ssl]# mkdir /var/lib/kube-proxy

[root@node001 ~]# mkdir /var/lib/kube-proxy

[root@node002 ~]# mkdir /var/lib/kube-proxy

[root@node003 ~]# mkdir /var/lib/kube-proxy

 

[root@master001 ssl]# vim /usr/lib/systemd/system/kube-proxy.service

# Note: change --bind-address and --hostname-override to the actual IP of each node.
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \
  --bind-address=172.31.0.113 \
  --hostname-override=172.31.0.113 \
  --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig \
  --masquerade-all \
  --feature-gates=SupportIPVSProxyMode=true \
  --proxy-mode=ipvs \
  --ipvs-min-sync-period=5s \
  --ipvs-sync-period=5s \
  --ipvs-scheduler=rr \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

 

Save and exit!

[root@master001 ssl]# scp /usr/lib/systemd/system/kube-proxy.service node001:/usr/lib/systemd/system/kube-proxy.service

[root@master001 ssl]# scp /usr/lib/systemd/system/kube-proxy.service node002:/usr/lib/systemd/system/kube-proxy.service

[root@master001 ssl]# scp /usr/lib/systemd/system/kube-proxy.service node003:/usr/lib/systemd/system/kube-proxy.service

Important: the IP in the unit file must be changed to the actual IP of each node, e.g. with the sketch below.
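For example, after copying the unit file, the address can be adjusted in place on each node (a hedged sketch; it assumes the copied file still contains the 172.31.0.113 placeholder):

[root@master001 ssl]# ssh node002 "sed -i 's/172.31.0.113/172.31.0.114/g' /usr/lib/systemd/system/kube-proxy.service"
[root@master001 ssl]# ssh node003 "sed -i 's/172.31.0.113/172.31.0.115/g' /usr/lib/systemd/system/kube-proxy.service"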

 

(8) Start the Kubernetes Proxy

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy

Note: if kube-proxy fails to start with an IPVS-related error, this problem only appeared with Kubernetes 1.18. The related GitHub issues show others hitting it after upgrading to 1.18; the maintainers' analysis is that the newer Kubernetes IPVS code needs kernel support, and the stock CentOS 3.10 kernel ships an IPVS module that is too old and lacks the required dependencies. This is exactly why the kernel was upgraded at the beginning of this guide.

 

With the kernel upgraded, kube-proxy runs without errors, and the final state looks like the following.

 

Check the IPVS (LVS) state: a virtual server has been created that forwards traffic for 10.1.0.1:443 to 172.31.0.112:6443, and 6443 is the api-server port.

[root@node001 ~]# ipvsadm -Ln

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 10.1.0.1:443 rr

-> 172.31.0.112:6443 Masq 1 0 0

 

 

 

 

VIII. Flannel Network Deployment

(1) Create a certificate signing request for Flannel

Note: run on the master node.

[root@master001 ssl]# vim flanneld-csr.json

{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

 

(2) Generate the certificate

[root@master001 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
  -ca-key=/opt/kubernetes/ssl/ca-key.pem \
  -config=/opt/kubernetes/ssl/ca-config.json \
  -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld

2020/05/07 13:31:55 [INFO] generate received request

2020/05/07 13:31:55 [INFO] received CSR

2020/05/07 13:31:55 [INFO] generating key: rsa-2048

2020/05/07 13:31:56 [INFO] encoded CSR

2020/05/07 13:31:56 [INFO] signed certificate with serial number 568472282797363721493186255502425941110906425380

2020/05/07 13:31:56 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for

websites. For more information see the Baseline Requirements for the Issuance and Management

of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);

specifically, section 10.2.3 ("Information Requirements").

 

[root@master001 ssl]# ll -sh flannel*

4.0K -rw-r--r-- 1 root root 1001 May 7 13:31 flanneld.csr

4.0K -rw-r--r-- 1 root root 223 May 7 13:30 flanneld-csr.json

4.0K -rw------- 1 root root 1.7K May 7 13:31 flanneld-key.pem

4.0K -rw-r--r-- 1 root root 1.4K May 7 13:31 flanneld.pem

 

(3) Distribute the certificates

[root@master001 ssl]# cp flanneld*.pem /opt/kubernetes/ssl/

[root@master001 ssl]# scp flanneld*.pem node001:/opt/kubernetes/ssl/

[root@master001 ssl]# scp flanneld*.pem node002:/opt/kubernetes/ssl/

[root@master001 ssl]# scp flanneld*.pem node003:/opt/kubernetes/ssl/

 

 

(4) Download the Flannel package

https://github.com/coreos/flannel/releases/download/v0.12.0/flannel-v0.12.0-linux-amd64.tar.gz

[root@master001 src]# tar -zxvf flannel-v0.12.0-linux-amd64.tar.gz

flanneld

mk-docker-opts.sh

README.md

The archive contains just these three files.

[root@master001 src]# cp flanneld mk-docker-opts.sh /opt/kubernetes/bin/

[root@master001 src]# scp flanneld mk-docker-opts.sh node001:/opt/kubernetes/bin/

[root@master001 src]# scp flanneld mk-docker-opts.sh node002:/opt/kubernetes/bin/

[root@master001 src]# scp flanneld mk-docker-opts.sh node003:/opt/kubernetes/bin/

 

(5) Configure Flannel

[root@master001 ~]# vim /opt/kubernetes/cfg/flannel

FLANNEL_ETCD="-etcd-endpoints=https://172.31.0.113:2379,https://172.31.0.114:2379,https://172.31.0.115:2379"
FLANNEL_ETCD_KEY="-etcd-prefix=/kubernetes/network"
FLANNEL_ETCD_CAFILE="--etcd-cafile=/opt/kubernetes/ssl/ca.pem"
FLANNEL_ETCD_CERTFILE="--etcd-certfile=/opt/kubernetes/ssl/flanneld.pem"
FLANNEL_ETCD_KEYFILE="--etcd-keyfile=/opt/kubernetes/ssl/flanneld-key.pem"

 

Copy the configuration to the other nodes

[root@master001 ~]# scp /opt/kubernetes/cfg/flannel node001:/opt/kubernetes/cfg/

[root@master001 ~]# scp /opt/kubernetes/cfg/flannel node002:/opt/kubernetes/cfg/

[root@master001 ~]# scp /opt/kubernetes/cfg/flannel node003:/opt/kubernetes/cfg/

 

 

(6) Create the Flannel systemd service

[root@master001 ~]# vim /usr/lib/systemd/system/flanneld.service

[Unit]

Description=Flanneld overlay address etcd agent

After=network.target

Before=docker.service

 

[Service]

EnvironmentFile=-/opt/kubernetes/cfg/flannel

ExecStart=/opt/kubernetes/bin/flanneld ${FLANNEL_ETCD} ${FLANNEL_ETCD_KEY} ${FLANNEL_ETCD_CAFILE} ${FLANNEL_ETCD_CERTFILE} ${FLANNEL_ETCD_KEYFILE}

ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker

 

Type=notify

 

[Install]

WantedBy=multi-user.target

RequiredBy=docker.service

Save and exit!

 

Note: mk-docker-opts.sh is a helper that generates the flannel network options for Docker on the local machine; flanneld is the flannel daemon itself.
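mk-docker-opts.sh writes the allocated subnet into /run/flannel/docker (see the ExecStartPost line above). For Docker to actually place containers on the flannel subnet, that file has to be fed into docker.service; a hedged sketch of a drop-in that does this (not part of the flannel package; keep whatever flags your existing ExecStart line already has):

# cat /etc/systemd/system/docker.service.d/flannel.conf
[Service]
EnvironmentFile=-/run/flannel/docker
ExecStart=
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS

# systemctl daemon-reload && systemctl restart docker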

Copy the service unit to the other nodes

[root@master001 ~]# scp /usr/lib/systemd/system/flanneld.service node001:/usr/lib/systemd/system/flanneld.service

[root@master001 ~]# scp /usr/lib/systemd/system/flanneld.service node002:/usr/lib/systemd/system/flanneld.service

[root@master001 ~]# scp /usr/lib/systemd/system/flanneld.service node003:/usr/lib/systemd/system/flanneld.service

 

 

Flannel CNI integration

(1) Download the CNI plugins

https://github.com/containernetworking/plugins/releases

 

https://github.com/containernetworking/plugins/releases/download/v0.8.5/cni-plugins-linux-amd64-v0.8.5.tgz

 

[root@master001 ~]# mkdir /opt/kubernetes/bin/cni

[root@node001 ~]# mkdir /opt/kubernetes/bin/cni

[root@node002 ~]# mkdir /opt/kubernetes/bin/cni

[root@node003 ~]# mkdir /opt/kubernetes/bin/cni

[root@master001 src]# tar zxf cni-plugins-linux-amd64-v0.8.5.tgz -C /opt/kubernetes/bin/cni/

[root@master001 src]# ls /opt/kubernetes/bin/cni/

bandwidth bridge dhcp firewall flannel host-device host-local ipvlan loopback macvlan portmap ptp sbr static tuning vlan

[root@master001 src]# scp -r /opt/kubernetes/bin/cni/* node001:/opt/kubernetes/bin/cni/

[root@master001 src]# scp -r /opt/kubernetes/bin/cni/* node002:/opt/kubernetes/bin/cni/
[root@master001 src]# scp -r /opt/kubernetes/bin/cni/* node003:/opt/kubernetes/bin/cni/

 

(2) Create the etcd key

This step writes the Pod network configuration into etcd; flannel later reads it from there and allocates subnets out of it.

 

[root@master001 src]# /opt/kubernetes/bin/etcdctl \

--ca-file /opt/kubernetes/ssl/ca.pem \

--cert-file /opt/kubernetes/ssl/flanneld.pem \

--key-file /opt/kubernetes/ssl/flanneld-key.pem \

--no-sync -C https://172.31.0.113:2379,https://172.31.0.114:2379,https://172.31.0.115:2379 \

set /kubernetes/network/config '{ "Network": "10.2.0.0/16", "Backend": { "Type": "vxlan", "Directrouting": true }}'

Returns:

{ "Network": "10.2.0.0/16", "Backend": { "Type": "vxlan", "Directrouting": true }}

 

Verify:

[root@node001 src]# etcdctl --ca-file /opt/kubernetes/ssl/ca.pem --cert-file /opt/kubernetes/ssl/flanneld.pem --key-file /opt/kubernetes/ssl/flanneld-key.pem --endpoints="https://172.31.0.113:2379" get /kubernetes/network/config

Returns:

{ "Network": "10.2.0.0/16", "Backend": { "Type": "vxlan", "Directrouting": true }}

 

(3) Start flannel

[root@master001 ~]# systemctl daemon-reload
[root@master001 ~]# systemctl enable flanneld
[root@master001 ~]# chmod +x /opt/kubernetes/bin/*
[root@master001 ~]# systemctl start flanneld
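Once flanneld is running on a node, a quick check (a sketch; flannel.1 is the interface the VXLAN backend creates, and the assigned subnet will differ per node) is:

# ip a show flannel.1       # should carry an address from the 10.2.0.0/16 range
# cat /run/flannel/docker   # DOCKER_NETWORK_OPTIONS written by mk-docker-opts.sh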

 

 

(4) Create a Pod to test the cluster

Edit nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    nodePort: 30000
    port: 8080
    targetPort: 80

Save and exit!

 

Create:

[root@master01 yaml]# kubectl apply -f nginx-deployment.yaml

deployment.apps/nginx-deployment created

service/nginx-svc created

Check:

[root@master01 yaml]# kubectl get pods -o wide

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES

nginx-deployment-6b474476c4-5szx2 1/1 Running 0 8m8s 10.2.66.4 172.31.0.114 <none> <none>

nginx-deployment-6b474476c4-nflkk 1/1 Running 0 8m8s 10.2.50.2 172.31.0.113 <none> <none>

nginx-deployment-6b474476c4-xqs5c 1/1 Running 0 8m8s 10.2.24.4 172.31.0.115 <none> <none>

 

Test:

[root@master01 yaml]# curl 10.2.66.4

<!DOCTYPE html>

<html>

<head>

<title>Welcome to nginx!</title>

<style>

body {

width: 35em;

margin: 0 auto;

font-family: Tahoma, Verdana, Arial, sans-serif;

}

</style>

</head>

<body>

<h1>Welcome to nginx!</h1>

<p>If you see this page, the nginx web server is successfully installed and

working. Further configuration is required.</p>

 

<p>For online documentation and support please refer to

<a href="http://nginx.org/">nginx.org</a>.<br/>

Commercial support is available at

<a href="http://nginx.com/">nginx.com</a>.</p>

 

<p><em>Thank you for using nginx.</em></p>

</body>

</html>

 

[root@node01 ~]# curl 10.1.67.165:8080

<!DOCTYPE html>

<html>

<head>

<title>Welcome to nginx!</title>

<style>

body {

width: 35em;

margin: 0 auto;

font-family: Tahoma, Verdana, Arial, sans-serif;

}

</style>

</head>

<body>

<h1>Welcome to nginx!</h1>

<p>If you see this page, the nginx web server is successfully installed and

working. Further configuration is required.</p>

 

<p>For online documentation and support please refer to

<a href="http://nginx.org/">nginx.org</a>.<br/>

Commercial support is available at

<a href="http://nginx.com/">nginx.com</a>.</p>

 

<p><em>Thank you for using nginx.</em></p>

</body>

</html>

The cluster is functioning correctly!

 

IX. CoreDNS Deployment

Inside the cluster, besides reaching a Service via its Cluster IP, Kubernetes also provides the more convenient DNS-based access.

(1) Edit the coredns.yaml file

vim coredns.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local. in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
    spec:
      serviceAccountName: coredns
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      containers:
      - name: coredns
        image: coredns/coredns:1.0.6
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 10.1.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

Save and exit!

 

(2) Create the CoreDNS resources

[root@master001 coredns]# kubectl create -f coredns.yaml

serviceaccount/coredns created

clusterrole.rbac.authorization.k8s.io/system:coredns created

clusterrolebinding.rbac.authorization.k8s.io/system:coredns created

configmap/coredns created

deployment.apps/coredns created

service/coredns created

 

(3) Check the CoreDNS service

[root@master001 coredns]# kubectl get deployment -n kube-system

NAME READY UP-TO-DATE AVAILABLE AGE

coredns 2/2 2 2 88s

 

[root@master001 coredns]# kubectl get svc -n kube-system

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

coredns ClusterIP 10.1.0.2 <none> 53/UDP,53/TCP 90s

 

[root@master001 coredns]# kubectl get pod -n kube-system

NAME READY STATUS RESTARTS AGE

coredns-8585c9f5dd-74mq4 1/1 Running 0 93s

coredns-8585c9f5dd-shnz7 1/1 Running 0 93s

 

(4) Test DNS resolution from inside a Pod

[root@master001 CoreDNS]# kubectl run alpine --rm -ti --image=alpine -- /bin/sh

If you don't see a command prompt, try pressing enter.

/ # nslookup nginx-svc

Server: 10.1.0.2

Address: 10.1.0.2:53

 

Name: nginx-svc.default.svc.cluster.local

Address: 10.1.67.165

 

** server can't find nginx-svc.svc.cluster.local.: NXDOMAIN

** server can't find nginx-svc.cluster.local.: NXDOMAIN

** server can't find nginx-svc.svc.cluster.local.: NXDOMAIN

** server can't find nginx-svc.cluster.local.: NXDOMAIN

 

/ # wget nginx-svc:8080

Connecting to nginx-svc:8080 (10.1.255.217:8080)

saving to 'index.html'

index.html 100% |****************************************| 612 0:00:00 ETA

'index.html' saved

 

Resolution works as expected; the test passes!

 

 
