Preface: Back in August I tried a binary installation of Kubernetes, but I ran into a lot of problems, and my knowledge at the time wasn't broad enough, so the attempt fizzled out. Recently I've had a bit more free time, so I pulled myself together and started over. To my surprise the whole process went fairly smoothly; there were a few small hiccups, but they were resolved quickly. I'm recording my installation steps here so I can review them later.

Ways to install and deploy Kubernetes:

  • minikube: a single-node miniature K8s (for learning and previews only)
  • Binary installation (the production first choice; recommended for beginners)
  • Deployment with kubeadm, the K8s deployment tool that itself runs inside K8s (relatively simple; recommended for experienced users)

Kubernetes architecture

  • apiserver: the single entry point for resource operations; provides authentication, authorization, access control, API registration and discovery;
  • controller manager: maintains the cluster's state, e.g. fault detection, auto scaling, rolling updates;
  • scheduler: schedules resources, placing Pods onto machines according to the configured scheduling policy;
  • kube-proxy: provides in-cluster service discovery and load balancing for Services;
  • kubelet: manages Pods and the containers, images, volumes, etc. on them;
  • Container runtime: manages images and actually runs Pods and containers (CRI);
  • etcd: stores the state of the entire cluster;

Four groups of basic concepts

  • Pod / Pod controller
  • Name / Namespace
  • Label / Label selector
  • Service / Ingress
1. Pod / Pod controller

A Pod is the smallest unit that can run in K8s. One Pod can run multiple containers, and those containers share a network namespace.

A Pod controller is a template for starting Pods, used to guarantee that the Pods started in K8s run as expected. K8s provides many Pod controllers; commonly used ones include Deployment and DaemonSet.
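For illustration, a minimal Deployment manifest might look like this (a sketch: the nginx-demo name is a placeholder, and the image assumes the private registry built later in this guide):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 2                # the controller keeps two Pod replicas running
  selector:
    matchLabels:
      app: nginx-demo        # must match the Pod template's labels
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: harbor.od.com/public/nginx:v1.7.9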

2. Name / Namespace

Inside K8s, every kind of logical function is defined as a resource, so every resource has its own name. A resource definition carries information such as the API version (apiVersion), kind, metadata, spec, and status.

As the cluster grows, you need a way to isolate the various resources inside K8s; that is the namespace. Resources in different namespaces may share the same name, while resources within the same namespace must have unique names. The namespaces that exist by default are default, kube-system, and kube-public. Querying a specific resource in K8s requires specifying its namespace.
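For example, once the cluster is up, a namespace-scoped query looks like this (kubectl is set up near the end of this guide):

[root@k8s-node-21 ~]# kubectl get namespaces             // list all namespaces
[root@k8s-node-21 ~]# kubectl get pods -n kube-system    // -n picks the namespace to query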

3. Label / Label selector

Labels are K8s's signature management mechanism, making it easy to organize resource objects. One label can be attached to many resources and one resource can carry many labels: a many-to-many relationship. A label is a key=value pair.

After labeling resources, you can use label selectors to filter on specific labels.
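A quick sketch with kubectl (the env=dev label and the nginx-demo Pod are made-up examples):

[root@k8s-node-21 ~]# kubectl label pod nginx-demo env=dev   // attach a key=value label
[root@k8s-node-21 ~]# kubectl get pods -l env=dev            // filter with a label selector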

4. Service / Ingress

In the K8s world, every Pod is assigned its own IP address, but that address disappears when the Pod is destroyed. A Service can be seen as the external access point for a group of Pods providing the same service. Which Pods a Service targets is defined with a label selector.
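A minimal Service manifest selecting Pods by label might look like this (a sketch; the names follow the hypothetical Deployment example above):

apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  selector:
    app: nginx-demo        # targets all Pods carrying this label
  ports:
  - port: 80               # the Service's cluster-internal port
    targetPort: 80         # the container port traffic is forwarded to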

Ingress works at the application layer (layer 7) of the OSI model and exposes HTTP/HTTPS routes into the K8s cluster.

Three networks: the Service network (192.168.0.0/16), the Pod network (172.7.0.0/16), and the node network (192.168.23.0/24).

Lab environment

Infrastructure


| Hostname     | Role         | IP             | Purpose                                                                  |
| ------------ | ------------ | -------------- | ------------------------------------------------------------------------ |
| k8s-node-100 | ops node     | 192.168.23.100 | private Docker registry, certificate signing, etc.                       |
| k8s-node-11  | proxy node   | 192.168.23.11  | reverse proxy for Ingress and the apiserver                              |
| k8s-node-12  | proxy node   | 192.168.23.12  | reverse proxy for Ingress and the apiserver                              |
| k8s-node-22  | compute node | 192.168.23.22  | acts as both a master node and a worker node                             |
| k8s-node-21  | compute node | 192.168.23.21  | kubelet, kube-proxy, docker, etcd, kube-apiserver, kube-controller-manager, kube-scheduler; acts as both a master node and a worker node |

Key points for installing K8s from binaries

Get the basic infrastructure ready
  • CentOS 7.6, SELinux disabled, firewalld stopped, time synchronization (chronyd), base and epel yum repos configured.
Install and deploy a bind9 internal DNS system
Install and deploy a private Docker registry
Prepare the certificate-signing environment (cfssl)
Install and deploy the master node services (4)
  • etcd, apiserver, controller-manager, scheduler
Install and deploy the compute node services (2)
  • kubelet, kube-proxy

Hardware environment

  • 5 VMs, each with at least 2 CPUs and 2 GB of RAM

Software environment

Perform these steps on all nodes

// Configure a static IP address, gateway, netmask, DNS, etc.
[root@k8s-node-11 ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33 // set the file content as follows
TYPE=Ethernet
BOOTPROTO=none
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.23.100 // 192.168.23 is your own subnet; IPADDR is the address you assign to this host
NETMASK=255.255.255.0
GATEWAY=192.168.23.254 // the gateway, netmask and DNS can be found in the virtual network editor
DNS1=192.168.23.254
// restart the network service
[root@k8s-node-11 ~]# service NetworkManager restart
// check that each machine can ping the outside world
[root@k8s-node-11 ~]# ping www.baidu.com
// disable the firewall
[root@k8s-node-11 ~]# systemctl stop firewalld
[root@k8s-node-11 ~]# systemctl disable firewalld
// disable SELinux
[root@k8s-node-11 ~]# setenforce 0
[root@k8s-node-11 ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config
// disable the swap partition
[root@k8s-node-11 ~]# swapoff -a 
[root@k8s-node-11 ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab
// set each machine's hostname
[root@k8s-node-11 ~]# hostnamectl set-hostname k8s-master
// add the epel repo
[root@k8s-node-11 ~]# yum install -y epel-release
// install the necessary tools
[root@k8s-node-11 ~]# yum install wget net-tools telnet tree nmap sysstat lrzsz dos2unix bind-utils -y

Install and deploy the DNS service

  • Create the host domain, host.com
  • Create the business domain, od.com
  • Master/slave replication (192.168.23.11 master, 192.168.23.12 slave)
  • Point the clients at the self-built DNS

1. On node-11 (192.168.23.11)

Install and configure bind9 to provide a self-hosted DNS system
// install bind9, which provides DNS resolution
[root@k8s-node-11 ~]# yum install bind -y
// once bind is installed
[root@k8s-node-11 ~]# vi /etc/named.conf  // change these settings:
listen-on port 53 { 192.168.23.11; };
allow-query     { any; };
forwarders      { 192.168.23.254; }; // add this line below allow-query
dnssec-enable no;
dnssec-validation no;
// check the config; no output means it is valid
[root@k8s-node-11 ~]# named-checkconf
Zone configuration file

vi /etc/named.rfc1912.zones

# append the following at the bottom
zone "host.com" IN {
	type  master;
	file  "host.com.zone";
	allow-update { 192.168.23.11; };
};

zone "od.com" IN {
	type  master;
	file  "od.com.zone";
	allow-update { 192.168.23.11; };
};
Configure the data files for the host domain and the business domain

vi /var/named/host.com.zone

$ORIGIN host.com.
$TTL 600        ; 10 minutes 
@       IN SOA  dns.host.com. dnsadmin.host.com. (
                                2021101001      ; serial
                                10800           ; refresh (3 hours)
                                900             ; retry (15 minutes)
                                604800          ; expire (1 week)
                                86400           ; minimum (1 day)
                                )       
                        NS   dns.host.com.
$TTL 60 ; 1 minute
dns                             A       192.168.23.11
k8s-node-100                    A       192.168.23.100
k8s-node-11                     A       192.168.23.11
k8s-node-21                     A       192.168.23.21
k8s-node-12                     A       192.168.23.12
k8s-node-22                     A       192.168.23.22

vi /var/named/od.com.zone

$ORIGIN od.com.
$TTL 600        ; 10 minutes 
@       IN SOA  dns.od.com. dnsadmin.od.com. (
                                2021101001      ; serial
                                10800           ; refresh (3 hours)
                                900             ; retry (15 minutes)
                                604800          ; expire (1 week)
                                86400           ; minimum (1 day)
                                )       
                        NS   dns.od.com.
$TTL 60 ; 1 minute
dns                             A       192.168.23.11
// check the config; no output means it is valid
[root@k8s-node-11 ~]# named-checkconf
[root@k8s-node-11 ~]# systemctl restart named
[root@k8s-node-11 ~]# netstat -luntp|grep 53
// check that the DNS service works and resolves k8s-node-100's address
[root@k8s-node-11 ~]# dig -t A k8s-node-100.host.com @192.168.23.11 +short
192.168.23.100
// change DNS1 in the network config file to point at our own DNS server
[root@k8s-node-11 ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33 // set:
DNS1=192.168.23.11
// restart the network service
[root@k8s-node-11 ~]# service NetworkManager restart
// test that the hosts can ping each other
[root@k8s-node-21 ~]# ping k8s-node-100.host.com
PING k8s-node-100.host.com (192.168.23.100) 56(84) bytes of data.
64 bytes from 192.168.23.100 (192.168.23.100): icmp_seq=1 ttl=64 time=0.426 ms
64 bytes from 192.168.23.100 (192.168.23.100): icmp_seq=2 ttl=64 time=1.96 ms
Configure the DNS clients (short names for the host domain)

vi /etc/resolv.conf

# Generated by NetworkManager
search host.com  // add a search line so hosts in the host domain resolve by short name; the business domain normally doesn't use short names
nameserver 192.168.23.11
Test pinging a short name
[root@k8s-node-11 ~]# ping k8s-node-100
PING k8s-node-100.host.com (192.168.23.100) 56(84) bytes of data.
64 bytes from 192.168.23.100 (192.168.23.100): icmp_seq=1 ttl=64 time=0.426 ms
64 bytes from 192.168.23.100 (192.168.23.100): icmp_seq=2 ttl=64 time=1.96 ms

2. On every node other than node-11, update the DNS settings

Edit the network config file
// change DNS1 in the network config file to point at our own DNS server
[root@k8s-node-100 ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33 // 设置为如下代码
DNS1=192.168.23.11
// restart the network service
[root@k8s-node-100 ~]# service NetworkManager restart
// test that the hosts can ping each other
[root@k8s-node-100 ~]# ping k8s-node-11.host.com
PING k8s-node-11.host.com (192.168.23.11) 56(84) bytes of data.
64 bytes from 192.168.23.11 (192.168.23.11): icmp_seq=1 ttl=64 time=0.800 ms
64 bytes from 192.168.23.11 (192.168.23.11): icmp_seq=2 ttl=64 time=0.434 ms

Prepare the certificate-signing environment

On the ops host k8s-node-100.host.com:

Install CFSSL


[root@k8s-node-100 ~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/bin/cfssl
[root@k8s-node-100 ~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/bin/cfssl-json
[root@k8s-node-100 ~]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/bin/cfssl-certinfo
[root@k8s-node-100 ~]# chmod +x /usr/bin/cfssl*

Create the JSON config for the CA certificate signing request (CSR)

[root@k8s-node-100 ~]# mkdir -p /opt/certs
[root@k8s-node-100 ~]# vi /opt/certs/ca-csr.json


{
    "CN": "kubernetes-ca",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ],
    "ca": {
        "expiry": "175200h"
    }
}

CN: Common Name; browsers use this field to check whether a site is legitimate. Usually a domain name. Very important.
C: Country
ST: State or province
L: Locality (city)
O: Organization Name (company)
OU: Organization Unit Name (department)

Generate the CA certificate and private key


[root@k8s-node-100 ~]# cd /opt/certs
[root@k8s-node-100 certs]# cfssl gencert -initca ca-csr.json | cfssl-json -bare ca
2019/01/18 09:31:19 [INFO] generating a new CA key and certificate from CSR
2019/01/18 09:31:19 [INFO] generate received request
2019/01/18 09:31:19 [INFO] received CSR
2019/01/18 09:31:19 [INFO] generating key: rsa-2048
2019/01/18 09:31:19 [INFO] encoded CSR
2019/01/18 09:31:19 [INFO] signed certificate with serial number 345276964513449660162382535043012874724976422200

This produces ca.pem, ca.csr, and ca-key.pem (the CA private key; guard it carefully)

[root@k8s-node-100 certs]# ls -l
-rw-r--r-- 1 root root  332 Jan 16 11:10 ca-csr.json
-rw------- 1 root root 1675 Jan 16 11:17 ca-key.pem
-rw-r--r-- 1 root root 1001 Jan 16 11:17 ca.csr
-rw-r--r-- 1 root root 1354 Jan 16 11:17 ca.pem
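To double-check the fields of the generated CA certificate (CN, expiry, and so on), cfssl-certinfo can decode it:

[root@k8s-node-100 certs]# cfssl-certinfo -cert ca.pem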

Deploy the Docker environment

On k8s-node-100.host.com, k8s-node-21.host.com, and k8s-node-22.host.com:

Install

[root@k8s-node-21 ~]# sudo yum install -y yum-utils  // set up the repository
[root@k8s-node-21 ~]# sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
[root@k8s-node-21 ~]# sudo yum install docker-ce docker-ce-cli containerd.io -y // install the docker engine
[root@k8s-node-21 ~]# sudo systemctl start docker // start docker
[root@k8s-node-21 ~]# sudo docker run hello-world // verify the engine is installed correctly
[root@k8s-node-21 ~]# systemctl enable docker     // start docker on boot
[root@k8s-node-21 ~]# mkdir /data/docker /etc/docker -p

Edit the Docker config file

vi /etc/docker/daemon.json

{
  "graph": "/data/docker",
  "storage-driver": "overlay",
  "insecure-registries": ["registry.access.redhat.com","quay.io","harbor.od.com"],
  "bip": "172.7.21.1/24",  
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}
// note: the third octet in bip (21 here) must follow the host's IP
[root@k8s-node-21 ~]# systemctl daemon-reload  // reload systemd units
[root@k8s-node-21 ~]# systemctl restart docker

**Note:** bip must be adjusted on each host to match that host's IP.

Aside: to find which directory is eating disk space and clean up what's unneeded:
df -h shows disk utilization
sudo du -sh * shows which directory under the current one uses the most space

Deploy the private Docker image registry (Harbor)

On k8s-node-100.host.com:

Download the binary package and unpack it

Harbor download address


[root@k8s-node-100 ~]# mkdir /opt/src -p
[root@k8s-node-100 src]# cd /opt/src
[root@k8s-node-100 src]# // download the harbor offline installer here
[root@k8s-node-100 src]# tar xf harbor-offline-installer-v1.9.4.tgz -C /opt  // unpack into /opt
[root@k8s-node-100 src]# cd ..
[root@k8s-node-100 opt]# mv harbor/ harbor-v1.9.4
[root@k8s-node-100 opt]# ln -s /opt/harbor-v1.9.4/ /opt/harbor // symlink to make future upgrades easy

Edit Harbor's config file

vi /opt/harbor/harbor.yml

hostname: harbor.od.com
http:
  port: 180
harbor_admin_password: Harbor12345
data_volume: /data/harbor
log:
  level: info
  rotate_count: 50
  rotate_size: 200M
  location: /data/harbor/logs

Create the log directory

[root@k8s-node-100 harbor]# mkdir /data/harbor/logs -p

Install docker-compose


[root@k8s-node-100 harbor]# pip3 install --upgrade pip
[root@k8s-node-100 harbor]# pip3 install docker-compose

Install Harbor

[root@k8s-node-100 ~]# cd /opt/harbor
[root@k8s-node-100 harbor]# ./install.sh

Check that Harbor is up


[root@k8s-node-100 harbor]# docker-compose ps
       Name                     Command               State                                 Ports                               
--------------------------------------------------------------------------------------------------------------------------------
harbor-adminserver   /harbor/start.sh                 Up                                                                        
harbor-core          /harbor/start.sh                 Up                                                                        
harbor-db            /entrypoint.sh postgres          Up      5432/tcp                                                          
harbor-jobservice    /harbor/start.sh                 Up                                                                        
harbor-log           /bin/sh -c /usr/local/bin/ ...   Up      127.0.0.1:1514->10514/tcp                                         
harbor-portal        nginx -g daemon off;             Up      80/tcp                                                            
nginx                nginx -g daemon off;             Up      0.0.0.0:1443->443/tcp, 0.0.0.0:4443->4443/tcp, 0.0.0.0:180->80/tcp
redis                docker-entrypoint.sh redis ...   Up      6379/tcp                                                          
registry             /entrypoint.sh /etc/regist ...   Up      5000/tcp                                                          
registryctl          /harbor/start.sh                 Up

Install and configure nginx

Install


[root@k8s-node-100 harbor]# yum install nginx -y
[root@k8s-node-100 harbor]# rpm -qa nginx
nginx-1.12.2-2.el7.x86_64
Edit the config

vi /etc/nginx/conf.d/harbor.od.com.conf

server {
    listen       80;
    server_name  harbor.od.com;

    client_max_body_size 1000m;

    location / {
        proxy_pass http://127.0.0.1:180;
    }
}
Start nginx
[root@k8s-node-100 harbor]# systemctl start nginx
[root@k8s-node-100 harbor]# systemctl enable nginx

Configure internal DNS resolution for Harbor (do this step on node-11)

vi /var/named/od.com.zone

2021101001  ==> 2021101002     // bump the serial number
harbor	60 IN A 192.168.23.100 // and add an A record for harbor


[root@k8s-node-11 ~]# systemctl restart named
[root@k8s-node-11 ~]# dig -t A harbor.od.com +short
192.168.23.100

Start and verify


[root@k8s-node-100 harbor]# curl harbor.od.com
[root@k8s-node-100 harbor]# netstat -luntp|grep nginx
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      6590/nginx: master  
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      6590/nginx: master

From a browser inside a VM, open http://harbor.od.com

  • Username: admin
  • Password: Harbor12345

From a browser on the physical host, open 192.168.23.100:180

  • Username: admin
  • Password: Harbor12345
The Harbor UI looks like this:

Create the public project, public.

(screenshot: Harbor web UI)

Push an image to the private registry
[root@k8s-node-100 harbor]# docker image pull nginx:1.7.9 // pull an image
[root@k8s-node-100 harbor]# docker tag 84581e99d807 harbor.od.com/public/nginx:v1.7.9 // tag it
[root@k8s-node-100 harbor]# docker image push harbor.od.com/public/nginx:v1.7.9 // try pushing it to the private registry
[root@k8s-node-100 harbor]# docker login harbor.od.com // log in to the private registry
[root@k8s-node-100 harbor]# docker image push harbor.od.com/public/nginx:v1.7.9 // push again
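To verify the push really landed in Harbor, one option is to delete the local tag and pull it back from the registry (assuming the login above succeeded):

[root@k8s-node-100 harbor]# docker image rm harbor.od.com/public/nginx:v1.7.9
[root@k8s-node-100 harbor]# docker image pull harbor.od.com/public/nginx:v1.7.9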
In the Harbor registry you can now see:

(screenshot: the nginx image in the public project)

Deploy the master node services

Deploy the etcd cluster

Cluster plan

| Hostname    | Role          | IP            |
| ----------- | ------------- | ------------- |
| k8s-node-12 | etcd leader   | 192.168.23.12 |
| k8s-node-21 | etcd follower | 192.168.23.21 |
| k8s-node-22 | etcd follower | 192.168.23.22 |

Create the JSON config for generating certificate signing requests (CSRs)

On the ops host k8s-node-100.host.com:

vi /opt/certs/ca-config.json

{
	"signing": {
		"default": {
			"expiry": "175200h"
		},
		"profiles": {
			"server": {
				"expiry": "175200h",
				"usages": [
					"signing",
					"key encipherment",
					"server auth"
				]
			},
			"client": {
				"expiry": "175200h",
				"usages": [
					"signing",
					"key encipherment",
					"client auth"
				]
			},
			"peer": {
				"expiry": "175200h",
				"usages": [
					"signing",
					"key encipherment",
					"server auth",
					"client auth"
				]
			}
		}
	}
}
Create the etcd certificate request file

vi /opt/certs/etcd-peer-csr.json

{
    "CN": "k8s-etcd",
    "hosts": [
        "192.168.23.11",   // 这里的host地址就是etcd可能部署在哪些主机上,部署etcd了的主机IP地址都需要在这添上
        "192.168.23.12",
        "192.168.23.21",
        "192.168.23.22"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "sichuan",
            "L": "chengdu",
            "O": "od",
            "OU": "ops"
        }
    ]
}

Generate the etcd certificate and private key

[root@k8s-node-100 ~]# cd /opt/certs
[root@k8s-node-100 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json | cfssl-json -bare etcd-peer
2019/01/18 09:35:09 [INFO] generate received request
2019/01/18 09:35:09 [INFO] received CSR
2019/01/18 09:35:09 [INFO] generating key: rsa-2048
2019/01/18 09:35:09 [INFO] encoded CSR
2019/01/18 09:35:10 [INFO] signed certificate with serial number 324191491384928915605254764031096067872154649010
2019/01/18 09:35:10 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

Check the generated certificate and private key

[root@k8s-node-100 certs]# ls -l|grep etcd
-rw-r--r-- 1 root root  387 Jan 18 12:32 etcd-peer-csr.json
-rw------- 1 root root 1679 Jan 18 12:32 etcd-peer-key.pem
-rw-r--r-- 1 root root 1074 Jan 18 12:32 etcd-peer.csr
-rw-r--r-- 1 root root 1432 Jan 18 12:32 etcd-peer.pem

Create the etcd user

On k8s-node-12.host.com:

**Note:** this document uses k8s-node-12.host.com as the example; the other two hosts are deployed the same way.

[root@k8s-node-12 ~]# cd /opt/
[root@k8s-node-12 opt]# mkdir src
[root@k8s-node-12 opt]# cd src/
[root@k8s-node-12 src]# useradd -s /sbin/nologin -M etcd // create the etcd user
[root@k8s-node-12 src]# id etcd
uid=1001(etcd) gid=1001(etcd) groups=1001(etcd)

Download, unpack, and symlink

etcd download address
On k8s-node-12.host.com:

[root@k8s-node-12 ~]# cd /opt/src
[root@k8s-node-12 src]# // download etcd here
[root@k8s-node-12 src]# ls -l
total 9604
-rw-r--r-- 1 root root 9831476 Jan 18 10:45 etcd-v3.1.20-linux-amd64.tar.gz
[root@k8s-node-12 src]# tar xfv etcd-v3.1.20-linux-amd64.tar.gz -C /opt
[root@k8s-node-12 src]# cd /opt/
[root@k8s-node-12 opt]# mv etcd-v3.1.20-linux-amd64/ etcd-v3.1.20
[root@k8s-node-12 opt]# ln -s /opt/etcd-v3.1.20/ /opt/etcd
[root@k8s-node-12 opt]# ll
total 0
lrwxrwxrwx 1 root   root   24 Jan 18 14:21 etcd -> etcd-v3.1.20
drwxr-xr-x 4 478493 89939 166 Jun 16  2018 etcd-v3.1.20
drwxr-xr-x 2 root   root   45 Jan 18 14:21 src

Create directories; copy over the certificate and private key

On k8s-node-12.host.com:


[root@k8s-node-12 src]# mkdir -p /opt/etcd/certs /data/etcd /data/logs/etcd-server 

Copy the ca.pem, etcd-peer-key.pem, and etcd-peer.pem generated on the ops host into /opt/etcd/certs; note the private key files must stay mode 600.


[root@k8s-node-12 src]# cd /opt/etcd/certs
[root@k8s-node-12 certs]# scp k8s-node-100.host.com:/opt/certs/ca.pem .
[root@k8s-node-12 certs]# scp k8s-node-100.host.com:/opt/certs/etcd-peer.pem .
[root@k8s-node-12 certs]# scp k8s-node-100.host.com:/opt/certs/etcd-peer-key.pem .
[root@k8s-node-12 certs]# ll
total 12
-rw-r--r-- 1 etcd etcd 1354 Jan 18 14:45 ca.pem
-rw------- 1 etcd etcd 1679 Jan 18 17:00 etcd-peer-key.pem
-rw-r--r-- 1 etcd etcd 1444 Jan 18 17:02 etcd-peer.pem

Create the etcd startup script

On k8s-node-12.host.com:

vi /opt/etcd/etcd-server-startup.sh

**Note:** the startup script differs slightly on each etcd host; change the IP addresses when deploying the other nodes.

#!/bin/sh
./etcd --name etcd-server-23-12 \
       --data-dir /data/etcd/etcd-server \
       --listen-peer-urls https://192.168.23.12:2380 \
       --listen-client-urls https://192.168.23.12:2379,http://127.0.0.1:2379 \
       --quota-backend-bytes 8000000000 \
       --initial-advertise-peer-urls https://192.168.23.12:2380 \
       --advertise-client-urls https://192.168.23.12:2379,http://127.0.0.1:2379 \
       --initial-cluster  etcd-server-23-12=https://192.168.23.12:2380,etcd-server-23-21=https://192.168.23.21:2380,etcd-server-23-22=https://192.168.23.22:2380 \
       --ca-file ./certs/ca.pem \
       --cert-file ./certs/etcd-peer.pem \
       --key-file ./certs/etcd-peer-key.pem \
       --client-cert-auth  \
       --trusted-ca-file ./certs/ca.pem \
       --peer-ca-file ./certs/ca.pem \
       --peer-cert-file ./certs/etcd-peer.pem \
       --peer-key-file ./certs/etcd-peer-key.pem \
       --peer-client-cert-auth \
       --peer-trusted-ca-file ./certs/ca.pem \
       --log-output stdout

Adjust permissions and directories

On k8s-node-12.host.com:


[root@k8s-node-12 certs]# chmod +x /opt/etcd/etcd-server-startup.sh
[root@k8s-node-12 certs]# chown -R etcd.etcd /opt/etcd-v3.1.20/
[root@k8s-node-12 certs]# chown -R etcd.etcd /data/etcd/
[root@k8s-node-12 certs]# mkdir -p /data/logs/etcd-server/
[root@k8s-node-12 certs]# chown -R etcd.etcd /data/logs/etcd-server/

Install supervisor

On k8s-node-12.host.com:


[root@k8s-node-12 certs]# yum install supervisor -y
[root@k8s-node-12 certs]# systemctl start supervisord
[root@k8s-node-12 certs]# systemctl enable supervisord

Create the etcd-server startup config

On k8s-node-12.host.com:

vi /etc/supervisord.d/etcd-server.ini

**Note:** the supervisor config differs slightly per host; when configuring the other nodes, change 23-12 to the last two octets of that host's IP.

[program:etcd-server-23-12]
command=/opt/etcd/etcd-server-startup.sh                        ; the program (relative uses PATH, can take args)
numprocs=1                                                      ; number of processes copies to start (def 1)
directory=/opt/etcd                                             ; directory to cwd to before exec (def no cwd)
autostart=true                                                  ; start at supervisord start (default: true)
autorestart=true                                                ; restart at unexpected quit (default: true)
startsecs=0                                                     ; number of secs prog must stay running (def. 1)
startretries=3                                                  ; max # of serial start failures (default 3)
exitcodes=0,2                                                   ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                 ; signal used to kill process (default TERM)
stopwaitsecs=10                                                 ; max num secs to wait b4 SIGKILL (default 10)
user=etcd                                                       ; setuid to this UNIX account to run the program
redirect_stderr=true                                            ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/etcd-server/etcd.stdout.log           ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                    ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                        ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                     ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                     ; emit events on stdout writes (default false)
stderr_logfile=/data/logs/etcd-server/etcd.stderr.log           ; stderr log path, NONE for none; default AUTO
stderr_logfile_maxbytes=64MB                                    ; max # logfile bytes b4 rotation (default 50MB)
stderr_logfile_backups=4                                        ; # of stderr logfile backups (default 10)
stderr_capture_maxbytes=1MB                                     ; number of bytes in 'capturemode' (default 0)
stderr_events_enabled=false                                     ; emit events on stderr writes (default false)

Start the etcd service and check it

On k8s-node-12.host.com:


[root@k8s-node-12 certs]# supervisorctl update
[root@k8s-node-12 certs]# netstat -luntp|grep etcd
[root@k8s-node-12 certs]# supervisorctl status   
etcd-server-23-12                RUNNING   pid 6692, uptime 0:00:05
[root@k8s-node-12 certs]# tail -fn 200 /data/logs/etcd-server/etcd.stdout.log

Install, start, and check etcd on all hosts in the cluster plan

Likewise, repeat the steps above to install and start etcd on k8s-node-21 and k8s-node-22.

Check the cluster status

Once all three members are up, check the cluster status


[root@k8s-node-12 ~]# /opt/etcd/etcdctl cluster-health
member 988139385f78284 is healthy: got healthy result from http://127.0.0.1:2379
member 5a0ef2a004fc4349 is healthy: got healthy result from http://127.0.0.1:2379
member f4a0cb0a765574a8 is healthy: got healthy result from http://127.0.0.1:2379
cluster is healthy

[root@k8s-node-12 ~]# /opt/etcd/etcdctl member list
988139385f78284: name=etcd-server-23-22 peerURLs=https://192.168.23.22:2380 clientURLs=http://127.0.0.1:2379,https://192.168.23.22:2379 isLeader=false
5a0ef2a004fc4349: name=etcd-server-23-21 peerURLs=https://192.168.23.21:2380 clientURLs=http://127.0.0.1:2379,https://192.168.23.21:2379 isLeader=false
f4a0cb0a765574a8: name=etcd-server-23-12 peerURLs=https://192.168.23.12:2380 clientURLs=http://127.0.0.1:2379,https://192.168.23.12:2379 isLeader=true
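The checks above go through the plain-HTTP listener on 127.0.0.1. As a sketch, the same health check can also be run against the TLS client endpoint by passing the peer certificate (flag names as supported by the v2 etcdctl shipped with etcd 3.1):

[root@k8s-node-12 ~]# /opt/etcd/etcdctl \
    --ca-file /opt/etcd/certs/ca.pem \
    --cert-file /opt/etcd/certs/etcd-peer.pem \
    --key-file /opt/etcd/certs/etcd-peer-key.pem \
    --endpoints https://192.168.23.12:2379 cluster-health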

Deploy the kube-apiserver cluster

Cluster plan

| Hostname             | Role                  | IP            |
| -------------------- | --------------------- | ------------- |
| k8s-node-21.host.com | kube-apiserver        | 192.168.23.21 |
| k8s-node-22.host.com | kube-apiserver        | 192.168.23.22 |
| k8s-node-11.host.com | layer-4 load balancer | 192.168.23.11 |
| k8s-node-12.host.com | layer-4 load balancer | 192.168.23.12 |

**Note:** 192.168.23.11 and 192.168.23.12 run nginx as a layer-4 load balancer, with keepalived providing a VIP, 192.168.23.10, that proxies the two kube-apiservers for high availability.

This document uses k8s-node-21.host.com as the example; the other compute node is deployed the same way.

Download, unpack, and symlink

On k8s-node-21.host.com:
kubernetes download address


[root@k8s-node-21 src]# cd /opt/src
[root@k8s-node-21 src]# // download kubernetes here
[root@k8s-node-21 src]# ls -l|grep kubernetes
-rw-r--r-- 1 root root 417761204 Jan 17 16:46 kubernetes-server-linux-amd64.tar.gz
[root@k8s-node-21 src]# tar xfv kubernetes-server-linux-amd64.tar.gz -C /opt
[root@k8s-node-21 src]# mv /opt/kubernetes /opt/kubernetes-v1.15.2-linux-amd64
[root@k8s-node-21 src]# ln -s /opt/kubernetes-v1.15.2-linux-amd64 /opt/kubernetes
[root@k8s-node-21 src]# cd kubernetes
[root@k8s-node-21 src]# rm kubernetes-src.tar.gz  -rf
[root@k8s-node-21 src]# cd server/bin/
[root@k8s-node-21 src]# rm  *.tar *_tag -rf
[root@k8s-node-21 src]# ll
-rwxr-xr-x 1 root root  43534816 Aug  5  2019 apiextensions-apiserver
-rwxr-xr-x 1 root root 100548640 Aug  5  2019 cloud-controller-manager
-rwxr-xr-x 1 root root 200648416 Aug  5  2019 hyperkube
-rwxr-xr-x 1 root root  40182208 Aug  5  2019 kubeadm
-rwxr-xr-x 1 root root 164501920 Aug  5  2019 kube-apiserver
-rwxr-xr-x 1 root root 116397088 Aug  5  2019 kube-controller-manager
-rwxr-xr-x 1 root root  42985504 Aug  5  2019 kubectl
-rwxr-xr-x 1 root root 119616640 Aug  5  2019 kubelet
-rwxr-xr-x 1 root root  36987488 Aug  5  2019 kube-proxy
-rwxr-xr-x 1 root root  38786144 Aug  5  2019 kube-scheduler
-rwxr-xr-x 1 root root   1648224 Aug  5  2019 mounter
[root@k8s-node-21 src]# mkdir /opt/kubernetes/server/bin/{cert,conf}
[root@k8s-node-21 src]# ls -l /opt|grep kubernetes
lrwxrwxrwx 1 root   root         31 Jan 18 10:49 kubernetes -> kubernetes-v1.15.2-linux-amd64/
drwxr-xr-x 4 root   root         50 Jan 17 17:40 kubernetes-v1.15.2-linux-amd64

Sign the client certificate

On the ops host k8s-node-100.host.com:

Create the JSON config for the certificate signing request (CSR)

vi /opt/certs/client-csr.json

{
    "CN": "k8s-node",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "sichuan",
            "L": "chengdu",
            "O": "od",
            "OU": "ops"
        }
    ]
}
Generate the client certificate and private key

[root@k8s-node-100 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json | cfssl-json -bare client
2019/01/18 14:02:50 [INFO] generate received request
2019/01/18 14:02:50 [INFO] received CSR
2019/01/18 14:02:50 [INFO] generating key: rsa-2048
2019/01/18 14:02:51 [INFO] encoded CSR
2019/01/18 14:02:51 [INFO] signed certificate with serial number 423108651040279300242366884100637974155370861448
2019/01/18 14:02:51 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Check the generated certificate and private key
[root@k8s-node-100 certs]# ls -l|grep client
-rw------- 1 root root 1679 Jan 21 11:13 client-key.pem
-rw-r--r-- 1 root root  989 Jan 21 11:13 client.csr
-rw-r--r-- 1 root root 1367 Jan 21 11:13 client.pem

Sign the kube-apiserver certificate

On the ops host k8s-node-100.host.com:

Create the JSON config for the certificate signing request (CSR)

vi /opt/certs/apiserver-csr.json

{
    "CN": "apiserver",
    "hosts": [
        "127.0.0.1",
        "192.168.0.1",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local",
        "192.168.23.10",
        "192.168.23.21",
        "192.168.23.22",
        "192.168.23.23"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "sichuan",
            "L": "chengdu",
            "O": "od",
            "OU": "ops"
        }
    ]
}
Generate the kube-apiserver certificate and private key
[root@k8s-node-100 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json | cfssl-json -bare apiserver 
2019/01/18 14:05:44 [INFO] generate received request
2019/01/18 14:05:44 [INFO] received CSR
2019/01/18 14:05:44 [INFO] generating key: rsa-2048
2019/01/18 14:05:46 [INFO] encoded CSR
2019/01/18 14:05:46 [INFO] signed certificate with serial number 633406650960616624590510576685608580490218676227
2019/01/18 14:05:46 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Check the generated certificate and private key
[root@k8s-node-100 certs]# ls -l|grep apiserver
total 72
-rw-r--r-- 1 root root  406 Jan 21 14:10 apiserver-csr.json
-rw------- 1 root root 1675 Jan 21 14:11 apiserver-key.pem
-rw-r--r-- 1 root root 1082 Jan 21 14:11 apiserver.csr
-rw-r--r-- 1 root root 1599 Jan 21 14:11 apiserver.pem

Copy the certificates to each compute node (node-21, node-22) and create the config

On k8s-node-21.host.com:

Copy the certificates and private keys; note the private key files must be mode 600


[root@k8s-node-21 cert]# cd /opt/kubernetes/server/bin/cert
[root@k8s-node-21 cert]# scp k8s-node-100.host.com:/opt/certs/apiserver.pem .
[root@k8s-node-21 cert]# scp k8s-node-100.host.com:/opt/certs/apiserver-key.pem .
[root@k8s-node-21 cert]# scp k8s-node-100.host.com:/opt/certs/ca-key.pem .
[root@k8s-node-21 cert]# scp k8s-node-100.host.com:/opt/certs/ca.pem .
[root@k8s-node-21 cert]# scp k8s-node-100.host.com:/opt/certs/client-key.pem .
[root@k8s-node-21 cert]# scp k8s-node-100.host.com:/opt/certs/client.pem .
[root@k8s-node-21 cert]# ll
total 40
-rw------- 1 root root 1676 Jan 21 16:39 apiserver-key.pem
-rw-r--r-- 1 root root 1599 Jan 21 16:36 apiserver.pem
-rw------- 1 root root 1675 Jan 21 13:55 ca-key.pem
-rw-r--r-- 1 root root 1354 Jan 21 13:50 ca.pem
-rw------- 1 root root 1679 Jan 21 13:53 client-key.pem
-rw-r--r-- 1 root root 1368 Jan 21 13:53 client.pem
Create the config

vi /opt/kubernetes/server/bin/conf/audit.yaml

apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]

  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]

  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]

  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"

  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]

  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]

  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.

  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"

Create the startup script

On k8s-node-21.host.com:

vi /opt/kubernetes/server/bin/kube-apiserver.sh

#!/bin/bash
./kube-apiserver \
  --apiserver-count 2 \
  --audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log \
  --audit-policy-file ./conf/audit.yaml \
  --authorization-mode RBAC \
  --client-ca-file ./cert/ca.pem \
  --requestheader-client-ca-file ./cert/ca.pem \
  --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
  --etcd-cafile ./cert/ca.pem \
  --etcd-certfile ./cert/client.pem \
  --etcd-keyfile ./cert/client-key.pem \
  --etcd-servers https://192.168.23.12:2379,https://192.168.23.21:2379,https://192.168.23.22:2379 \
  --service-account-key-file ./cert/ca-key.pem \
  --service-cluster-ip-range 192.168.0.0/16 \
  --service-node-port-range 3000-29999 \
  --target-ram-mb=1024 \
  --kubelet-client-certificate ./cert/client.pem \
  --kubelet-client-key ./cert/client-key.pem \
  --log-dir  /data/logs/kubernetes/kube-apiserver \
  --tls-cert-file ./cert/apiserver.pem \
  --tls-private-key-file ./cert/apiserver-key.pem \
  --v 2

Adjust permissions and directories

On k8s-node-21.host.com:


[root@k8s-node-21 bin]# chmod +x /opt/kubernetes/server/bin/kube-apiserver.sh
[root@k8s-node-21 bin]# mkdir -p /data/logs/kubernetes/kube-apiserver

Create the supervisor config

On k8s-node-21.host.com:

vi /etc/supervisord.d/kube-apiserver.ini

[program:kube-apiserver-23-21]
command=/opt/kubernetes/server/bin/kube-apiserver.sh       ; the program (relative uses PATH, can take args)
numprocs=1                                                 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                       ; directory to cwd to before exec (def no cwd)
autostart=true                                             ; start at supervisord start (default: true)
autorestart=true                                           ; restart at unexpected quit (default: true)
startsecs=22                                               ; number of secs prog must stay running (def. 1)
startretries=3                                             ; max # of serial start failures (default 3)
exitcodes=0,2                                              ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                            ; signal used to kill process (default TERM)
stopwaitsecs=10                                            ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                  ; setuid to this UNIX account to run the program
redirect_stderr=false                                      ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stdout.log  ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                               ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                   ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                ; emit events on stdout writes (default false)
stderr_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stderr.log        ; stderr log path, NONE for none; default AUTO
stderr_logfile_maxbytes=64MB                               ; max # logfile bytes b4 rotation (default 50MB)
stderr_logfile_backups=4                                   ; # of stderr logfile backups (default 10)
stderr_capture_maxbytes=1MB                                ; number of bytes in 'capturemode' (default 0)
stderr_events_enabled=false                                ; emit events on stderr writes (default false)

Start the service and check it

On k8s-node-21.host.com:


[root@k8s-node-21 bin]# supervisorctl update
kube-apiserver-23-21: added process group
[root@k8s-node-21 bin]# supervisorctl status
etcd-server-23-21                   RUNNING   pid 6661, uptime 1 day, 8:41:13
kube-apiserver-23-21                RUNNING   pid 43765, uptime 2:09:41
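As a quick sanity check that the apiserver is serving TLS on 6443, a request can be made with the client certificate signed earlier (a sketch; run from /opt/kubernetes/server/bin so the relative cert paths resolve):

[root@k8s-node-21 bin]# curl --cacert ./cert/ca.pem --cert ./cert/client.pem --key ./cert/client-key.pem https://192.168.23.21:6443/healthz
ok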

Install, start, and check kube-apiserver on all hosts in the cluster plan

Likewise, repeat the steps above on k8s-node-22 to complete the apiserver cluster.

Configure the layer-4 reverse proxy

On k8s-node-11.host.com and k8s-node-12.host.com:

Install nginx
[root@k8s-node-11 bin]# yum install nginx -y
Configure the reverse proxy

vi /etc/nginx/nginx.conf

Note: add the following block outside the http block
stream {
    upstream kube-apiserver {
        server 192.168.23.21:6443     max_fails=3 fail_timeout=30s;
        server 192.168.23.22:6443     max_fails=3 fail_timeout=30s;
    }
    server {
        listen 7443;
        proxy_connect_timeout 2s;
        proxy_timeout 900s;
        proxy_pass kube-apiserver;
    }
}
Install keepalived
[root@k8s-node-11 bin]# yum install keepalived -y
Configure keepalived
check_port.sh

vi /etc/keepalived/check_port.sh

#!/bin/bash
# keepalived port-monitoring script
# Usage: reference it from keepalived.conf like so:
# vrrp_script check_port {                        # define a vrrp_script check
#     script "/etc/keepalived/check_port.sh 6379" # the port to monitor
#     interval 2                                  # how often to run the check, in seconds
# }
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
        PORT_PROCESS=`ss -lnt|grep $CHK_PORT|wc -l`
        if [ $PORT_PROCESS -eq 0 ];then
                echo "Port $CHK_PORT Is Not Used,End."
                exit 1
        fi
else
        echo "Check Port Cant Be Empty!"
fi
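A quick manual test of the script (once nginx is listening on 7443 it should print nothing and exit 0; before that it prints the warning and exits 1):

[root@k8s-node-11 ~]# bash /etc/keepalived/check_port.sh 7443; echo $?
0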


[root@k8s-node-11 bin]# chmod +x /etc/keepalived/check_port.sh
keepalived master

k8s-node-11.host.com


[root@k8s-node-11 ~]# rpm -qa keepalived
keepalived-1.3.5-6.el7.x86_64


vi /etc/keepalived/keepalived.conf

Note: first delete everything that was in the original file
! Configuration File for keepalived

global_defs {
   router_id 192.168.23.11

}

vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 251
    priority 100
    advert_int 1
    mcast_src_ip 192.168.23.11
    nopreempt

    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
         chk_nginx
    }
    virtual_ipaddress {
        192.168.23.10
    }
}
keepalived backup

k8s-node-12.host.com


[root@k8s-node-12 ~]# rpm -qa keepalived
keepalived-1.3.5-6.el7.x86_64


vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
	router_id 192.168.23.12
}
vrrp_script chk_nginx {
	script "/etc/keepalived/check_port.sh 7443"
	interval 2
	weight -20
}
vrrp_instance VI_1 {
	state BACKUP
	interface ens33
	virtual_router_id 251
	mcast_src_ip 192.168.23.12
	priority 90
	advert_int 1
	authentication {
		auth_type PASS
		auth_pass 11111111
	}
	track_script {
		chk_nginx
	}
	virtual_ipaddress {
		192.168.23.10
	}
}

Start the proxy and check it

On k8s-node-11.host.com and k8s-node-12.host.com:

  • Start


    [root@k8s-node-11 ~]# systemctl restart keepalived
    [root@k8s-node-11 ~]# systemctl enable keepalived
    [root@k8s-node-11 ~]# nginx -s reload
    
    [root@k8s-node-12 ~]# systemctl restart keepalived
    [root@k8s-node-12 ~]# systemctl enable keepalived
    [root@k8s-node-12 ~]# nginx -s reload
    
  • Check


    [root@k8s-node-11 ~]# netstat -luntp|grep 7443
    tcp        0      0 0.0.0.0:7443            0.0.0.0:*               LISTEN      17970/nginx: master
    [root@k8s-node-12 ~]# netstat -luntp|grep 7443
    tcp        0      0 0.0.0.0:7443            0.0.0.0:*               LISTEN      17970/nginx: master
    [root@k8s-node-11 ~]# ip add|grep 192.168.23.10
        inet 192.168.23.10/32 scope global ens33
    
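As one more check, a request through the VIP should reach a kube-apiserver (a sketch; depending on the cluster's RBAC defaults, an anonymous request returns either the version JSON or a 403 body, and either response proves the layer-4 proxy path works):

[root@k8s-node-11 ~]# curl -k https://192.168.23.10:7443/version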

Deploy controller-manager

Cluster plan

| Hostname             | Role               | IP            |
| -------------------- | ------------------ | ------------- |
| k8s-node-21.host.com | controller-manager | 192.168.23.21 |
| k8s-node-22.host.com | controller-manager | 192.168.23.22 |

**Note:** this document uses k8s-node-21.host.com as the example; the other compute node is deployed the same way.

Create the startup script

On k8s-node-21.host.com:

vi /opt/kubernetes/server/bin/kube-controller-manager.sh

#!/bin/sh
./kube-controller-manager \
  --cluster-cidr 172.7.0.0/16 \
  --leader-elect true \
  --log-dir /data/logs/kubernetes/kube-controller-manager \
  --master http://127.0.0.1:8080 \
  --service-account-private-key-file ./cert/ca-key.pem \
  --service-cluster-ip-range 192.168.0.0/16 \
  --root-ca-file ./cert/ca.pem \
  --v 2

Adjust file permissions and create directories

On k8s-node-21.host.com:


[root@k8s-node-21 bin]# chmod +x /opt/kubernetes/server/bin/kube-controller-manager.sh
[root@k8s-node-21 bin]# mkdir -p /data/logs/kubernetes/kube-controller-manager

Create the supervisor config

On k8s-node-21.host.com:

vi /etc/supervisord.d/kube-controller-manager.ini

[program:kube-controller-manager-23-21]
command=/opt/kubernetes/server/bin/kube-controller-manager.sh  ; the program (relative uses PATH, can take args)
numprocs=1                                                     ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                         ; directory to cwd to before exec (def no cwd)
autostart=true                                                 ; start at supervisord start (default: true)
autorestart=true                                               ; restart at unexpected quit (default: true)
startsecs=22                                                ; number of secs prog must stay running (def. 1)
startretries=3                                                 ; max # of serial start failures (default 3)
exitcodes=0,2                                              ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                ; signal used to kill process (default TERM)
stopwaitsecs=10                                             ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                   ; setuid to this UNIX account to run the program
redirect_stderr=false                                       ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-controller-manager/controll.stdout.log  ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                       ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                 ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                 ; emit events on stdout writes (default false)
stderr_logfile=/data/logs/kubernetes/kube-controller-manager/controll.stderr.log  ; stderr log path, NONE for none; default AUTO
stderr_logfile_maxbytes=64MB                                ; max # logfile bytes b4 rotation (default 50MB)
stderr_logfile_backups=4                                    ; # of stderr logfile backups (default 10)
stderr_capture_maxbytes=1MB                                 ; number of bytes in 'capturemode' (default 0)
stderr_events_enabled=false                                 ; emit events on stderr writes (default false)

Start the service and check it

On k8s-node-21.host.com:


[root@k8s-node-21 bin]# supervisorctl update
kube-controller-manager-23-21: added process group
[root@k8s-node-21 bin]# supervisorctl status
etcd-server-23-21                RUNNING   pid 6661, uptime 1 day, 8:41:13
kube-apiserver-23-21             RUNNING   pid 43765, uptime 2:09:41
kube-controller-manager-23-21    RUNNING   pid 44230, uptime 2:05:01

Install, start, and check kube-controller-manager on all hosts in the cluster plan

Deploy kube-scheduler

Cluster plan

| Hostname             | Role           | IP            |
| -------------------- | -------------- | ------------- |
| k8s-node-21.host.com | kube-scheduler | 192.168.23.21 |
| k8s-node-22.host.com | kube-scheduler | 192.168.23.22 |

**Note:** this document uses k8s-node-21.host.com as the example; the other compute node is deployed the same way.

Create the startup script

On k8s-node-21.host.com:

vi /opt/kubernetes/server/bin/kube-scheduler.sh

#!/bin/sh
./kube-scheduler \
  --leader-elect  \
  --log-dir /data/logs/kubernetes/kube-scheduler \
  --master http://127.0.0.1:8080 \
  --v 2

Adjust file permissions and create directories

On k8s-node-21.host.com:

[root@k8s-node-21 bin]# chmod +x /opt/kubernetes/server/bin/kube-scheduler.sh
[root@k8s-node-21 bin]# mkdir -p /data/logs/kubernetes/kube-scheduler

Create the supervisor config

On k8s-node-21.host.com:

vi /etc/supervisord.d/kube-scheduler.ini

[program:kube-scheduler-23-21]
command=/opt/kubernetes/server/bin/kube-scheduler.sh       ; the program (relative uses PATH, can take args)
numprocs=1                                                 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                       ; directory to cwd to before exec (def no cwd)
autostart=true                                             ; start at supervisord start (default: true)
autorestart=true                                           ; restart at unexpected quit (default: true)
startsecs=22                                               ; number of secs prog must stay running (def. 1)
startretries=3                                             ; max # of serial start failures (default 3)
exitcodes=0,2                                              ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                            ; signal used to kill process (default TERM)
stopwaitsecs=10                                            ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                  ; setuid to this UNIX account to run the program
redirect_stderr=false                                      ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                               ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                   ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                ; emit events on stdout writes (default false)
stderr_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stderr.log ; stderr log path, NONE for none; default AUTO
stderr_logfile_maxbytes=64MB                               ; max # logfile bytes b4 rotation (default 50MB)
stderr_logfile_backups=4                                   ; # of stderr logfile backups (default 10)
stderr_capture_maxbytes=1MB                                ; number of bytes in 'capturemode' (default 0)
stderr_events_enabled=false                                ; emit events on stderr writes (default false)

Start the service and check it

On k8s-node-21.host.com:


[root@k8s-node-21 bin]# supervisorctl update
kube-scheduler-23-21: added process group
[root@k8s-node-21 bin]# supervisorctl status
etcd-server-23-21                 RUNNING   pid 6661, uptime 1 day, 8:41:13
kube-apiserver-23-21              RUNNING   pid 43765, uptime 2:09:41
kube-controller-manager-23-21     RUNNING   pid 44230, uptime 2:05:01
kube-scheduler-23-21              RUNNING   pid 44779, uptime 2:02:27

Install, start, and check kube-scheduler on all hosts in the cluster plan

Create a symlink for kubectl

At this point, all of the master node components are installed.

[root@k8s-node-21 bin]# ln -s /opt/kubernetes/server/bin/kubectl /usr/bin/kubectl
[root@k8s-node-21 bin]# which kubectl

[root@k8s-node-22 bin]# ln -s /opt/kubernetes/server/bin/kubectl /usr/bin/kubectl
[root@k8s-node-22 bin]# which kubectl
Check the cluster health
[root@k8s-node-21 cert]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
etcd-2               Healthy   {"health": "true"}   
etcd-1               Healthy   {"health": "true"} 

[root@k8s-node-21 ~]# supervisorctl status
etcd-server-23-21                RUNNING   pid 1477, uptime 0:30:00
kube-apiserver-23-21             RUNNING   pid 1487, uptime 0:30:00
kube-controller-manager-23-21    RUNNING   pid 6749, uptime 0:27:48
kube-scheduler-23-21             RUNNING   pid 1490, uptime 0:30:00

Deploy the node services

Deploy kubelet

Cluster plan

| Hostname             | Role    | IP            |
| -------------------- | ------- | ------------- |
| k8s-node-21.host.com | kubelet | 192.168.23.21 |
| k8s-node-22.host.com | kubelet | 192.168.23.22 |

**Note:** this document uses k8s-node-21.host.com as the example; the other compute node is deployed the same way.

Sign the kubelet certificate

On the ops host k8s-node-100.host.com:

Create the JSON config for the certificate signing request (CSR)

vi /opt/certs/kubelet-csr.json

{
    "CN": "kubelet-node",
    "hosts": [
    "127.0.0.1",
    "192.168.23.10",
    "192.168.23.21",
    "192.168.23.22",
    "192.168.23.23",
    "192.168.23.24",
    "192.168.23.25",
    "192.168.23.26",
    "192.168.23.27",
    "192.168.23.28"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "sichuan",
            "L": "chengdu",
            "O": "od",
            "OU": "ops"
        }
    ]
}
Generate the kubelet certificate and private key
[root@k8s-node-100 ~]# cd /opt/certs
[root@k8s-node-100 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet
2019/01/18 17:51:16 [INFO] generate received request
2019/01/18 17:51:16 [INFO] received CSR
2019/01/18 17:51:16 [INFO] generating key: rsa-2048
2019/01/18 17:51:17 [INFO] encoded CSR
2019/01/18 17:51:17 [INFO] signed certificate with serial number 48870268157415133698067712395152321546974943470
2019/01/18 17:51:17 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Check the generated certificate and private key

[root@k8s-node-100 certs]# ls -l|grep kubelet
total 88
-rw-r--r-- 1 root root  415 Jan 22 16:58 kubelet-csr.json
-rw------- 1 root root 1679 Jan 22 17:00 kubelet-key.pem
-rw-r--r-- 1 root root 1086 Jan 22 17:00 kubelet.csr
-rw-r--r-- 1 root root 1456 Jan 22 17:00 kubelet.pem

Copy the certificates to each compute node and create the config

On k8s-node-21.host.com:

Copy the certificate and private key; note the private key files must be mode 600


[root@k8s-node-21 ~]# cd /opt/kubernetes/server/bin/cert
[root@k8s-node-21 cert]# scp k8s-node-100.host.com:/opt/certs/kubelet-key.pem .
[root@k8s-node-21 cert]# scp k8s-node-100.host.com:/opt/certs/kubelet.pem .
[root@k8s-node-21 cert]# ls -l /opt/kubernetes/server/bin/cert
total 40
-rw------- 1 root root 1676 Jan 21 16:39 apiserver-key.pem
-rw-r--r-- 1 root root 1599 Jan 21 16:36 apiserver.pem
-rw------- 1 root root 1675 Jan 21 13:55 ca-key.pem
-rw-r--r-- 1 root root 1354 Jan 21 13:50 ca.pem
-rw------- 1 root root 1679 Jan 21 13:53 client-key.pem
-rw-r--r-- 1 root root 1368 Jan 21 13:53 client.pem
-rw------- 1 root root 1679 Jan 22 17:00 kubelet-key.pem
-rw-r--r-- 1 root root 1456 Jan 22 17:00 kubelet.pem
Create the config

On k8s-node-21.host.com:

set-cluster

**Note:** run this in the /opt/kubernetes/server/bin/conf directory

[root@k8s-node-21 conf]# kubectl config set-cluster myk8s \
  --certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
  --embed-certs=true \
  --server=https://192.168.23.10:7443 \
  --kubeconfig=kubelet.kubeconfig

Cluster "myk8s" set.
set-credentials

**Note:** run this in the /opt/kubernetes/server/bin/conf directory

[root@k8s-node-21 conf]# kubectl config set-credentials k8s-node --client-certificate=/opt/kubernetes/server/bin/cert/client.pem --client-key=/opt/kubernetes/server/bin/cert/client-key.pem --embed-certs=true --kubeconfig=kubelet.kubeconfig 

User "k8s-node" set.
set-context

**Note:** run this in the /opt/kubernetes/server/bin/conf directory

[root@k8s-node-21 conf]# kubectl config set-context myk8s-context \
  --cluster=myk8s \
  --user=k8s-node \
  --kubeconfig=kubelet.kubeconfig

Context "myk8s-context" created.
use-context

**Note:** run this in the /opt/kubernetes/server/bin/conf directory

[root@k8s-node-21 conf]# kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig

Switched to context "myk8s-context".
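To sanity-check the result of the four steps above, the generated kubeconfig can be printed (certificate data shows as REDACTED):

[root@k8s-node-21 conf]# kubectl config view --kubeconfig=kubelet.kubeconfig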
k8s-node.yaml
  • Create the resource config file

vi /opt/kubernetes/server/bin/conf/k8s-node.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node
  • Apply the resource config file

[root@k8s-node-21 conf]# cd /opt/kubernetes/server/bin/conf
[root@k8s-node-21 conf]# kubectl create -f k8s-node.yaml

clusterrolebinding.rbac.authorization.k8s.io/k8s-node created
  • Check the status

[root@k8s-node-21 ~]# cd /opt/kubernetes/server/bin/conf
[root@k8s-node-21 conf]# kubectl get clusterrolebinding k8s-node
NAME           AGE
k8s-node       3m

Prepare the pause base image

On the ops host k8s-node-100.host.com:

Download


[root@k8s-node-100 harbor]# docker login harbor.od.com
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. 
# Also: if you can't log in to the private registry, run docker-compose ps in /opt/harbor to check that all components are up; if any are not, reinstall harbor with ./install.sh
[root@k8s-node-100 harbor]# docker image pull kubernetes/pause
Using default tag: latest
latest: Pulling from kubernetes/pause
4f4fb700ef54: Pull complete 
b9c8ec465f6b: Pull complete 
Digest: sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105
Status: Downloaded newer image for kubernetes/pause:latest
docker.io/kubernetes/pause:latest
[root@k8s-node-100 harbor]# docker tag f9d5de079539 harbor.od.com/public/pause:latest
[root@k8s-node-100 harbor]# docker push harbor.od.com/public/pause:latest // push to the private registry (harbor)

Create the kubelet startup script

On k8s-node-21.host.com:

vi /opt/kubernetes/server/bin/kubelet-2321.sh

#!/bin/sh
./kubelet \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 192.168.0.2 \
  --cluster-domain cluster.local \
  --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice \
  --fail-swap-on="false" \
  --client-ca-file ./cert/ca.pem \
  --tls-cert-file ./cert/kubelet.pem \
  --tls-private-key-file ./cert/kubelet-key.pem \
  --hostname-override 192.168.23.21 \
  --image-gc-high-threshold 20 \
  --image-gc-low-threshold 10 \
  --kubeconfig ./conf/kubelet.kubeconfig \
  --log-dir /data/logs/kubernetes/kube-kubelet \
  --pod-infra-container-image harbor.od.com/public/pause:latest \
  --root-dir /data/kubelet

**Note:** the kubelet startup script differs slightly on each host; change the IP addresses when deploying the other nodes.

Check the config, adjust permissions, and create the log directories

On k8s-node-21.host.com:

[root@k8s-node-21 ~]# cd /opt/kubernetes/server/bin/conf
[root@k8s-node-21 conf]# ls -l|grep kubelet.kubeconfig 
-rw------- 1 root root 6471 Jan 22 17:33 kubelet.kubeconfig

[root@k8s-node-21 conf]# chmod +x /opt/kubernetes/server/bin/kubelet-2321.sh
[root@k8s-node-21 conf]# mkdir -p /data/logs/kubernetes/kube-kubelet /data/kubelet

Create the supervisor config

On k8s-node-21.host.com:

vi /etc/supervisord.d/kube-kubelet.ini

[program:kube-kubelet-2321]
command=/opt/kubernetes/server/bin/kubelet-2321.sh        ; the program (relative uses PATH, can take args)
numprocs=1                                                ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                      ; directory to cwd to before exec (def no cwd)
autostart=true                                            ; start at supervisord start (default: true)
autorestart=true                                          ; restart at unexpected quit (default: true)
startsecs=22                  							  ; number of secs prog must stay running (def. 1)
startretries=3                							  ; max # of serial start failures (default 3)
exitcodes=0,2                 							  ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT               							  ; signal used to kill process (default TERM)
stopwaitsecs=10               							  ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                 ; setuid to this UNIX account to run the program
redirect_stderr=false                                     ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log   ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                              ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                  ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                               ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                               ; emit events on stdout writes (default false)
stderr_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stderr.log   ; stderr log path, NONE for none; default AUTO
stderr_logfile_maxbytes=64MB                              ; max # logfile bytes b4 rotation (default 50MB)
stderr_logfile_backups=4                                  ; # of stderr logfile backups (default 10)
stderr_capture_maxbytes=1MB                               ; number of bytes in 'capturemode' (default 0)
stderr_events_enabled=false                               ; emit events on stderr writes (default false)

Start the service and check

On k8s-node-21.host.com:


[root@k8s-node-21 bin]# supervisorctl update
kube-kubelet-2321: added process group
[root@k8s-node-21 bin]# supervisorctl status
etcd-server-23-21                 RUNNING   pid 9507, uptime 22:44:48
kube-apiserver                   RUNNING   pid 9770, uptime 21:10:49
kube-controller-manager          RUNNING   pid 10048, uptime 18:22:10
kube-kubelet-2321                STARTING  
kube-scheduler                   RUNNING   pid 10041, uptime 18:22:13

Check the compute nodes

On k8s-node-21.host.com:


[root@k8s-node-21 conf]# kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
192.168.23.21   Ready    <none>   94s   v1.15.2
192.168.23.22   Ready    <none>   82s   v1.15.2

[root@k8s-node-21 conf]# kubectl label node 192.168.23.22 node-role.kubernetes.io/node=   // add cluster role labels
node/192.168.23.22 labeled
[root@k8s-node-21 conf]# kubectl label node 192.168.23.22 node-role.kubernetes.io/master=
node/192.168.23.22 labeled
[root@k8s-node-21 conf]# kubectl get nodes
NAME            STATUS   ROLES         AGE     VERSION
192.168.23.21   Ready    master,node   4m8s    v1.15.2
192.168.23.22   Ready    master,node   3m56s   v1.15.2
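The output above shows 192.168.23.21 carrying both roles as well; it was presumably labeled the same way:

[root@k8s-node-21 conf]# kubectl label node 192.168.23.21 node-role.kubernetes.io/node=
[root@k8s-node-21 conf]# kubectl label node 192.168.23.21 node-role.kubernetes.io/master=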

Very important!

Install, deploy, start, and verify the kubelet service on every host in the cluster plan

Deploy kube-proxy

Cluster plan

| Hostname             | Role       | IP            |
| -------------------- | ---------- | ------------- |
| k8s-node-21.host.com | kube-proxy | 192.168.23.21 |
| k8s-node-22.host.com | kube-proxy | 192.168.23.22 |

**Note:** this deployment doc uses k8s-node-21.host.com as the example; the other compute node is installed and deployed the same way.

Issue the kube-proxy certificate

On the ops host k8s-node-100.host.com:

Create the JSON config file for generating the certificate signing request (CSR)

vi /opt/certs/kube-proxy-csr.json

{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "sichuan",
            "L": "chengdu",
            "O": "od",
            "OU": "ops"
        }
    ]
}
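The CN system:kube-proxy is deliberate: Kubernetes' bootstrap RBAC ships a ClusterRoleBinding named system:node-proxier that grants this exact user the permissions kube-proxy needs, so no manual authorization step is required. If RBAC is enabled on the apiserver, this can be confirmed from a node (a quick check, not in the original):

[root@k8s-node-21 conf]# kubectl get clusterrolebinding system:node-proxier -o wide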
Generate the kube-proxy certificate and private key


[root@k8s-node-100 certs]# cd /opt/certs
[root@k8s-node-100 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json | cfssl-json -bare kube-proxy-client
2019/01/18 18:14:23 [INFO] generate received request
2019/01/18 18:14:23 [INFO] received CSR
2019/01/18 18:14:23 [INFO] generating key: rsa-2048
2019/01/18 18:14:23 [INFO] encoded CSR
2019/01/18 18:14:23 [INFO] signed certificate with serial number 375797145588654714099258750873820528127028390681
2019/01/18 18:14:23 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Check the generated certificate and private key


[root@k8s-node-100 certs]# cd /opt/certs
[root@k8s-node-100 certs]# ls -l|grep kube-proxy
-rw------- 1 root root 1679 Jan 22 17:31 kube-proxy-client-key.pem
-rw-r--r-- 1 root root 1005 Jan 22 17:31 kube-proxy-client.csr
-rw-r--r-- 1 root root 1383 Jan 22 17:31 kube-proxy-client.pem
-rw-r--r-- 1 root root  268 Jan 22 17:23 kube-proxy-csr.json
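As an extra check (assuming openssl is available; not part of the original steps), confirm the certificate subject really carries the expected CN:

[root@k8s-node-100 certs]# openssl x509 -in kube-proxy-client.pem -noout -subject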

Copy the certificate to each compute node and create the configs

On k8s-node-21.host.com:

Copy the certificate and private key over; note the private key files must stay mode 600.


[root@k8s-node-21 ~]# cd /opt/kubernetes/server/bin/cert
[root@k8s-node-21 cert]# scp k8s-node-100.host.com:/opt/certs/kube-proxy-client-key.pem .
[root@k8s-node-21 cert]# scp k8s-node-100.host.com:/opt/certs/kube-proxy-client.pem .
[root@k8s-node-21 cert]# ls -l /opt/kubernetes/server/bin/cert
total 40
-rw------- 1 root root 1676 Jan 21 16:39 apiserver-key.pem
-rw-r--r-- 1 root root 1599 Jan 21 16:36 apiserver.pem
-rw------- 1 root root 1675 Jan 21 13:55 ca-key.pem
-rw-r--r-- 1 root root 1354 Jan 21 13:50 ca.pem
-rw------- 1 root root 1679 Jan 21 13:53 client-key.pem
-rw-r--r-- 1 root root 1368 Jan 21 13:53 client.pem
-rw------- 1 root root 1679 Jan 22 17:00 kubelet-key.pem
-rw-r--r-- 1 root root 1456 Jan 22 17:00 kubelet.pem
-rw------- 1 root root 1679 Jan 22 17:31 kube-proxy-client-key.pem
-rw-r--r-- 1 root root 1383 Jan 22 17:31 kube-proxy-client.pem
Create the configs
set-cluster

**Note:** run in the /opt/kubernetes/server/bin/conf directory


[root@k8s-node-21 cert]# cd /opt/kubernetes/server/bin/conf
[root@k8s-node-21 conf]# kubectl config set-cluster myk8s \
  --certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
  --embed-certs=true \
  --server=https://192.168.23.10:7443 \
  --kubeconfig=kube-proxy.kubeconfig

Cluster "myk8s" set.
set-credentials

**Note:** run in the /opt/kubernetes/server/bin/conf directory


[root@k8s-node-21 cert]# cd /opt/kubernetes/server/bin/conf
[root@k8s-node-21 conf]# kubectl config set-credentials kube-proxy \
  --client-certificate=/opt/kubernetes/server/bin/cert/kube-proxy-client.pem \
  --client-key=/opt/kubernetes/server/bin/cert/kube-proxy-client-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

User "kube-proxy" set.
set-context

**Note:** run in the /opt/kubernetes/server/bin/conf directory


[root@k8s-node-21 cert]# cd /opt/kubernetes/server/bin/conf
[root@k8s-node-21 conf]# kubectl config set-context myk8s-context \
  --cluster=myk8s \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

Context "myk8s-context" created.
use-context

**Note:** run in the /opt/kubernetes/server/bin/conf directory


[root@k8s-node-21 cert]# cd /opt/kubernetes/server/bin/conf
[root@k8s-node-21 conf]# kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig

Switched to context "myk8s-context".
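To confirm the resulting kubeconfig is wired up correctly (a quick check, not in the original), render it; the embedded certificates show up as DATA+OMITTED/REDACTED:

[root@k8s-node-21 conf]# kubectl config view --kubeconfig=kube-proxy.kubeconfig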

Load the ipvs kernel modules

On k8s-node-21.host.com:

vi /root/ipvs.sh

#!/bin/bash
# Directory containing the ipvs kernel modules for the running kernel
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
# Load every ipvs module that modinfo can resolve
for i in $(ls $ipvs_mods_dir|grep -o "^[^.]*")
do
  /sbin/modinfo -F filename $i &>/dev/null
  if [ $? -eq 0 ];then
    /sbin/modprobe $i
  fi
done

[root@k8s-node-21 conf]# chmod +x /root/ipvs.sh
[root@k8s-node-21 conf]# /root/ipvs.sh
[root@k8s-node-21 conf]# lsmod |grep ip_vs
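Modules loaded this way do not survive a reboot; one common workaround (an assumption, not covered by the original) is to hook the script into rc.local:

// Re-load the ipvs modules automatically at boot (CentOS 7)
[root@k8s-node-21 conf]# echo '/root/ipvs.sh' >> /etc/rc.d/rc.local
[root@k8s-node-21 conf]# chmod +x /etc/rc.d/rc.local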

Create the kube-proxy startup script

On k8s-node-21.host.com:

vi /opt/kubernetes/server/bin/kube-proxy-2321.sh

#!/bin/sh
./kube-proxy \
  --cluster-cidr 172.7.0.0/16 \
  --hostname-override 192.168.23.21 \
  --proxy-mode=ipvs \
  --ipvs-scheduler=nq \
  --kubeconfig ./conf/kube-proxy.kubeconfig

**Note:** the kube-proxy startup scripts differ slightly across the cluster hosts; remember to change the IP address when deploying the other nodes, as in the sketch below.
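The same sed trick as for kubelet works here (a sketch, assuming kube-proxy-2321.sh has been copied to k8s-node-22):

[root@k8s-node-22 ~]# sed 's/192.168.23.21/192.168.23.22/' /opt/kubernetes/server/bin/kube-proxy-2321.sh \
  > /opt/kubernetes/server/bin/kube-proxy-2322.sh
[root@k8s-node-22 ~]# chmod +x /opt/kubernetes/server/bin/kube-proxy-2322.sh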

Check the config and permissions, and create the log directory

On k8s-node-21.host.com:


[root@k8s-node-21 cert]# cd /opt/kubernetes/server/bin/conf
[root@k8s-node-21 conf]# ls -l|grep kube-proxy.kubeconfig 
-rw------- 1 root root 6471 Jan 22 17:33 kube-proxy.kubeconfig

[root@k8s-node-21 conf]# chmod +x /opt/kubernetes/server/bin/kube-proxy-2321.sh
[root@k8s-node-21 conf]# mkdir -p /data/logs/kubernetes/kube-proxy

Create the supervisor config

On k8s-node-21.host.com:

vi /etc/supervisord.d/kube-proxy.ini

[program:kube-proxy-23-21]
command=/opt/kubernetes/server/bin/kube-proxy-2321.sh      ; the program (relative uses PATH, can take args)
numprocs=1                                                 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                       ; directory to cwd to before exec (def no cwd)
autostart=true                                             ; start at supervisord start (default: true)
autorestart=true                                           ; restart at unexpected quit (default: true)
startsecs=22                                               ; number of secs prog must stay running (def. 1)
startretries=3                                             ; max # of serial start failures (default 3)
exitcodes=0,2                                              ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                            ; signal used to kill process (default TERM)
stopwaitsecs=10                                            ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                  ; setuid to this UNIX account to run the program
redirect_stderr=false                                      ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log     ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                               ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                   ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                ; emit events on stdout writes (default false)
stderr_logfile=/data/logs/kubernetes/kube-proxy/proxy.stderr.log     ; stderr log path, NONE for none; default AUTO
stderr_logfile_maxbytes=64MB                               ; max # logfile bytes b4 rotation (default 50MB)
stderr_logfile_backups=4                                   ; # of stderr logfile backups (default 10)
stderr_capture_maxbytes=1MB                                ; number of bytes in 'capturemode' (default 0)
stderr_events_enabled=false                                ; emit events on stderr writes (default false)

Start the service and check

On k8s-node-21.host.com:


[root@k8s-node-21 bin]# supervisorctl update
kube-proxy-23-21: added process group
[root@k8s-node-21 bin]# supervisorctl status
etcd-server-23-21                RUNNING   pid 9507, uptime 22:44:48
kube-apiserver                   RUNNING   pid 9770, uptime 21:10:49
kube-controller-manager          RUNNING   pid 10048, uptime 18:22:10
kube-kubelet-2321                RUNNING   pid 14597, uptime 0:32:59
kube-proxy-23-21                 STARTING  
kube-scheduler                   RUNNING   pid 10041, uptime 18:22:13
[root@k8s-node-21 bin]# yum install ipvsadm -y
[root@k8s-node-21 bin]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.1:443 nq
  -> 192.168.23.21:6443           Masq    1      0          0         
  -> 192.168.23.22:6443           Masq    1      0          0
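The virtual server 192.168.0.1:443 is the cluster IP of the default kubernetes service, balanced across the two apiservers by the nq scheduler; it can be cross-checked with:

[root@k8s-node-21 bin]# kubectl get svc kubernetes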

Install, deploy, start, and verify the kube-proxy service on every host in the cluster plan

Verify the kubernetes cluster

On any compute node, create a resource configuration manifest

Here we pick the k8s-node-21.host.com host

vi /root/nginx-ds.yaml

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        command: ["/bin/bash", "-ce", "tail -f /dev/null"]
        ports:
        - containerPort: 80
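Note that extensions/v1beta1 still works on the v1.15 used here but has been removed in newer Kubernetes releases; on a current cluster the same DaemonSet would need apps/v1 and an explicit selector, roughly (a sketch):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ds
spec:
  selector:
    matchLabels:
      app: nginx-ds
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        command: ["/bin/bash", "-ce", "tail -f /dev/null"]
        ports:
        - containerPort: 80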

Apply the resource config and check


[root@k8s-node-21 ~]# cd /root
[root@k8s-node-21 ~]# kubectl create -f nginx-ds.yaml
-----------------------------------------------------------
[root@k8s-node-21 ~]# kubectl get pod
NAME             READY   STATUS    RESTARTS   AGE
nginx-ds-6hnc7   1/1     Running   0          99m
nginx-ds-m5q6j   1/1     Running   0          18h
[root@k8s-node-21 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
etcd-0               Healthy   {"health": "true"}   
etcd-1               Healthy   {"health": "true"}   
etcd-2               Healthy   {"health": "true"}   
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
[root@k8s-node-21 ~]# kubectl get node
NAME            STATUS   ROLES         AGE    VERSION
192.168.23.21   Ready    master,node   112m   v1.15.2
192.168.23.22   Ready    master,node   112m   v1.15.2
[root@k8s-node-21 ~]# supervisorctl status
etcd-server-23-21                RUNNING   pid 1477, uptime 3:57:41
kube-apiserver-23-21             RUNNING   pid 1487, uptime 3:57:41
kube-controller-manager-23-21    RUNNING   pid 6749, uptime 3:55:29
kube-kubelet-2321                RUNNING   pid 8384, uptime 1:59:36
kube-proxy-23-21                 RUNNING   pid 34051, uptime 1:07:52
kube-scheduler-23-21             RUNNING   pid 1490, uptime 3:57:41
-------------------------------------------------------------
[root@k8s-node-21 ~]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE    IP           NODE            NOMINATED NODE   READINESS GATES
nginx-ds-ft5nb   1/1     Running   0          2m8s   172.7.22.2   192.168.23.22   <none>           <none>
nginx-ds-gmjrv   1/1     Running   0          2m9s   172.7.21.2   192.168.23.21   <none>           <none>
[root@k8s-node-21 ~]# kubectl describe pod nginx-ds-6hnc7 // view detailed pod information

Verification

At this point, the k8s cluster has been successfully deployed!
