1、K8s Deployment

Common installation and deployment methods:
  • Minikube: single-node miniature k8s (for learning and preview only)

  • Binary installation (preferred for production)

  • Deployment with kubeadm, the k8s deployment tool that itself runs inside k8s (relatively simple; recommended for experienced users)

    You can experiment with the Minikube environment provided on the official site:
    https://kubernetes.io/docs/tutorials/hello-minikube/
    Click "Launch Terminal"
    and query the key components with:
    kubectl get pods -n kube-system
    Early versions used HTTP for internal communication; newer versions use SSL with self-signed certificates.

2、Binary k8s Deployment Environment

  • Prepare 5 VMs (2C/2G/50G each) on the 10.4.7.0/24 network
  • Preinstall the CentOS 7.6 operating system and apply basic tuning
  • Install bind9 to run a self-hosted DNS system
  • Prepare the self-signed certificate environment
  • Install the Docker environment and deploy a Harbor private registry

2.1.1、 Deployment Environment Preparation

Prepare the template machine:

1、The default NIC name on CentOS 7 is ens33; rename it to eth0 and configure an address in the 10.4.7.0/24 subnet.
2、Disable SELinux, disable the firewall, and configure the hostname.

2.1.2、 NIC Configuration

  • Configure the VMware virtual network:
    Open the Virtual Network Editor as administrator, select NAT mode, and set the subnet to 10.4.7.0/24.
    Open NAT Settings and set the gateway address.
    In Windows, open "Change adapter options", edit the VMnet8 properties, and in the advanced IPv4 settings set the interface metric to 10 so this adapter's DNS is preferred.
  • Rename the NIC and configure a static IP:
    [root@hdss7-200 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
    TYPE=Ethernet
    BOOTPROTO=static
    NAME=eth0
    DEVICE=eth0
    ONBOOT=yes
    IPADDR=10.4.7.200
    NETMASK=255.255.255.0
    GATEWAY=10.4.7.2
    DNS1=10.4.7.2
  • Rename the NIC configuration file:
    mv ifcfg-ens33 ifcfg-eth0
  • Edit the grub configuration and add net.ifnames=0 biosdevname=0:
    vi /etc/default/grub
    [root@hdss7-200 ~]# cat /etc/default/grub
    GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet net.ifnames=0 biosdevname=0"
  • Regenerate the grub configuration:
    grub2-mkconfig -o /boot/grub2/grub.cfg
  • Stop and disable the NetworkManager service, otherwise the interface may end up with two IP addresses:
    systemctl stop NetworkManager
    systemctl disable NetworkManager
  • Reboot the system: reboot
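Before rebooting, the ifcfg file can be sanity-checked with a small script. A sketch; the required keys follow the example above, and the helper name `check_ifcfg` is ours, not a system tool:

```shell
# check_ifcfg FILE -- report any keys from the static-IP template that are
# missing from an ifcfg file; prints "OK: FILE" when everything is present.
check_ifcfg() {
    f="$1"
    rc=0
    for key in BOOTPROTO DEVICE=eth0 ONBOOT=yes IPADDR GATEWAY DNS1; do
        if ! grep -q "^${key}" "$f"; then
            echo "MISSING: $key"
            rc=1
        fi
    done
    if [ "$rc" -eq 0 ]; then
        echo "OK: $f"
    fi
    return $rc
}
```

Run it as `check_ifcfg /etc/sysconfig/network-scripts/ifcfg-eth0` before rebooting.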

2.1.3、 System Configuration

  • Disable the firewall:
    systemctl stop firewalld
    systemctl disable firewalld
  • Disable SELinux:
    1、Permanently: vim /etc/selinux/config
    SELINUX=disabled
    After a reboot, check with getenforce
    that the result is Disabled.
    2、Temporarily: setenforce 0
  • Install the EPEL yum repository:
    yum install -y epel-release
  • Set the hostname:
    hostnamectl set-hostname <hostname>
  • Install the necessary tools:
    yum install -y net-tools wget telnet tree nmap sysstat lrzsz dos2unix bind-utils

3、Installing bind9

Install bind9 on hdss7-11 (10.4.7.11):

yum install -y bind

Main DNS configuration file:
vim /etc/named.conf
listen-on port 53 { 10.4.7.11; };
allow-query { any; };
forwarders { 10.4.7.2; };    # upstream resolver
recursion yes;               # recursive resolution
dnssec-enable no;
dnssec-validation no;

DNS zone declarations — append the following at the end of the file (the host domain and the business domain, respectively):
vim /etc/named.rfc1912.zones

zone "host.com" IN {
        type master;
        file "host.com.zone";
        allow-update { 10.4.7.11; };
};

zone "od.com" IN {
        type master;
        file "od.com.zone";
        allow-update { 10.4.7.11; };
};

3.1 Configuring the Zone Data Files

3.1.1、 Host Domain Configuration

[root@hdss7-11 named]# vim host.com.zone
$ORIGIN host.com.
$TTL 600 ; 10 minutes
@ IN SOA dns.host.com. dnsadmin.host.com. (
2022071401 ; serial
10800 ; refresh (3 hours)
900 ; retry (15 minutes)
604800 ; expire (1 week)
86400 ; minimum (1 day)
)
NS dns.host.com.
$TTL 60 ; 1 minute
dns A 10.4.7.11
HDSS7-11 A 10.4.7.11
HDSS7-12 A 10.4.7.12
HDSS7-21 A 10.4.7.21
HDSS7-22 A 10.4.7.22
HDSS7-200 A 10.4.7.200
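The A records above are repetitive, so they can be generated from a host list instead of typed by hand. A sketch; the name/IP pairs are the ones planned for this lab, and `gen_a_records` is our helper name:

```shell
# gen_a_records -- print zone-file A records for the lab hosts,
# using the host/IP plan from this document.
gen_a_records() {
    for pair in dns:10.4.7.11 HDSS7-11:10.4.7.11 HDSS7-12:10.4.7.12 \
                HDSS7-21:10.4.7.21 HDSS7-22:10.4.7.22 HDSS7-200:10.4.7.200; do
        name=${pair%%:*}   # text before the colon: the record name
        ip=${pair##*:}     # text after the colon: the address
        printf '%-12s A %s\n' "$name" "$ip"
    done
}
```

Its output can be appended to the zone file with `gen_a_records >> host.com.zone`.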

3.1.2、 Business Domain Configuration

[root@hdss7-11 named]# cat od.com.zone
$ORIGIN od.com.
$TTL 600 ; 10 minutes
@ IN SOA dns.od.com. dnsadmin.od.com. (
2022071401 ; serial
10800 ; refresh (3 hours)
900 ; retry (15 minutes)
604800 ; expire (1 week)
86400 ; minimum (1 day)
)
NS dns.od.com.
$TTL 60 ; 1 minute
dns A 10.4.7.11

3.1.3、 Starting the Service

systemctl start named
systemctl status named
If you see the following error:
named[25893]: network unreachable resolving './NS/IN': 2001:500:2f::f#53
it is caused by IPv6. Edit vim /etc/sysconfig/named
and add the line OPTIONS="-4"
then restart the service:
systemctl restart named
systemctl status named
If you see the following error:
zone od.com/IN: loading from master file od.com.zone failed: permission denied
give named access to the zone files (777 works, though 644 with named ownership is sufficient and safer):
chmod 777 host.com.zone
chmod 777 od.com.zone

3.1.4、 Testing

Verify resolution:
dig -t A hdss7-11.host.com @10.4.7.11 +short
Resolution now succeeds.
In the Windows network settings for VMnet8, set the preferred DNS server to 10.4.7.11.
From a Windows cmd prompt, run ping hdss7-200.host.com; a successful ping means DNS is working.

4、Installing the Self-Signed Certificate Tooling

Download the components on the ops host hdss7-200.
Certificate signing tool: CFSSL R1.2

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/bin/cfssl-json
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/bin/cfssl-certinfo
chmod +x /usr/bin/cfssl*

If the downloads fail, configure name resolution for github.com:
look up the domain's IP address at https://www.ipaddress.com/ (search for github),
then add 140.82.112.3 github.com to the Windows hosts file at
C:\Windows\System32\drivers\etc
and download the tools on the VM.

4.1、Create the JSON Config File for the CA Certificate Signing Request (CSR)

cd /opt
mkdir certs
cd certs
Certificate signing requires a root CA certificate.

[root@hdss7-200 certs]# cat ca-csr.json
{
    "CN": "OldboyEdu",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ],
    "ca": {
        "expiry": "175200h"
    }
}

CN: Common Name. Browsers use this field to verify whether a website is legitimate; it is usually the domain name. Very important.
C: Country
ST: State or province
L: Locality (city)
O: Organization Name (company name)
OU: Organization Unit Name (department)
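Curly quotes (often introduced when copy-pasting JSON from web pages) will make cfssl fail with a parse error, so it is worth validating the file before signing. A minimal sketch, assuming `python3` is on the PATH; `validate_json` is our helper name:

```shell
# validate_json FILE -- report whether FILE parses as JSON, using the
# Python standard library's json.tool module.
validate_json() {
    if python3 -m json.tool "$1" > /dev/null 2>&1; then
        echo "valid JSON: $1"
    else
        echo "INVALID JSON: $1"
        return 1
    fi
}
```

Run `validate_json ca-csr.json` before calling cfssl gencert.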

4.2、 Generating the Certificate

cfssl gencert -initca ca-csr.json | cfssl-json -bare ca

[root@hdss7-200 certs]# cfssl gencert -initca ca-csr.json | cfssl-json -bare ca
2022/07/14 13:05:33 [INFO] generating a new CA key and certificate from CSR
2022/07/14 13:05:33 [INFO] generate received request
2022/07/14 13:05:33 [INFO] received CSR
2022/07/14 13:05:33 [INFO] generating key: rsa-2048
2022/07/14 13:05:33 [INFO] encoded CSR
2022/07/14 13:05:33 [INFO] signed certificate with serial number 270568213429462971062083159529019547350395764311
[root@hdss7-200 certs]# ls -al
total 16
drwxr-xr-x 2 root root 71 Jul 14 13:05 .
drwxr-xr-x. 3 root root 19 Jul 14 12:18 ..
-rw-r--r-- 1 root root 993 Jul 14 13:05 ca.csr
-rw-r--r-- 1 root root 339 Jul 14 12:59 ca-csr.json
-rw------- 1 root root 1679 Jul 14 13:05 ca-key.pem
-rw-r--r-- 1 root root 1346 Jul 14 13:05 ca.pem
[root@hdss7-200 certs]#

5、Deploying the Docker Environment

5.1、Target Hosts

hdss7-200, hdss7-21, hdss7-22

Install with the script from get.docker.com:
curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun

5.2、Configure daemon.json After Installation

[root@hdss7-200 certs]# mkdir -p /etc/docker /data/docker
[root@hdss7-200 certs]# vim /etc/docker/daemon.json

{
    "graph": "/data/docker",
    "storage-driver": "overlay2",
    "insecure-registries": ["registry.access.redhat.com", "quay.io", "harbor.od.com"],
    "registry-mirrors": ["https://q2qgrodke.mirror.aliyuncs.com"],
    "bip": "172.7.200.1/24",
    "exec-opts": ["native.cgroupdriver=systemd"],
    "live-restore": true
}
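Note that `bip` must differ per node: this setup's convention is `172.7.<x>.1/24`, where `<x>` is the last octet of the node's 10.4.7.x address (so hdss7-21 uses 172.7.21.1/24 and hdss7-22 uses 172.7.22.1/24). A sketch that derives it; the convention is this document's, not a Docker requirement, and `bip_for_host` is our helper name:

```shell
# bip_for_host IP -- derive this lab's docker bridge IP (bip) for a node:
# 172.7.<last octet of the host IP>.1/24.
bip_for_host() {
    octet=${1##*.}           # last dotted octet of the host address
    echo "172.7.${octet}.1/24"
}
```

For example, `bip_for_host 10.4.7.21` yields the value to put in that node's daemon.json.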

5.3、Start and Verify

systemctl start docker
docker ps -a
docker info
docker version

6、Deploying the Harbor Private Image Registry

On HDSS7-200.host.com.

6.1 Use Version 1.7.5 or Later; Earlier Versions Have a Privilege-Escalation Vulnerability That Can Grant Admin Access

Harbor's official GitHub repository: https://github.com/goharbor/harbor
Download Harbor:

Create the /opt/src directory to hold the original packages:
mkdir /opt/src
cd /opt/src

wget https://storage.googleapis.com/harbor-releases/release-1.8.0/harbor-offline-installer-v1.8.3.tgz

tar xf harbor-offline-installer-v1.8.3.tgz -C /opt/

Record the version in the directory name:
mv harbor/ harbor-v1.8.3
Create a symlink to make future version upgrades easier:
ln -s /opt/harbor-v1.8.3 /opt/harbor
lrwxrwxrwx 1 root root 18 Jul 14 15:14 harbor -> /opt/harbor-v1.8.3

To remove the symlink later, use:
unlink harbor

cd harbor

6.2、Configure the yml File

vim harbor.yml
hostname: harbor.od.com
http:
  port: 180
harbor_admin_password: Harbor12345
data_volume: /data/harbor
log:
  location: /data/harbor/logs

Install nginx as a reverse proxy:
yum install -y nginx
vim /etc/nginx/conf.d/harbor.od.com.conf
server {
    listen 80;
    server_name harbor.od.com;
    client_max_body_size 1000m;
    location / {
        proxy_pass http://127.0.0.1:180;
    }
}

systemctl start nginx
curl harbor.od.com
At this point the name does not resolve; add a record to the od.com.zone DNS configuration
and roll the serial number forward:
harbor A 10.4.7.200
systemctl restart named
Every time you add a record, roll the serial number forward.
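Rolling the serial forward can be scripted. A sketch that increments a `YYYYMMDDNN`-style serial by one revision; `next_serial` is our helper name, and it assumes the two-digit revision never reaches 99:

```shell
# next_serial SERIAL -- increment a YYYYMMDDNN zone serial by one revision.
next_serial() {
    s="$1"
    date_part=${s%??}            # first 8 digits: YYYYMMDD
    rev=${s#"$date_part"}        # last 2 digits: revision
    rev=$(( ${rev#0} + 1 ))      # strip a leading zero, then increment
    printf '%s%02d\n' "$date_part" "$rev"
}
```

For example, `next_serial 2022071401` prints the next value to put in the zone file.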

6.3、Accessing harbor.od.com from the Terminal and Browser

Once the page loads, log in with the admin account and create a "public" project.
docker pull docker.io/library/nginx:1.7.9
<===> docker pull nginx:1.7.9
After the download completes, tag the image and push it to the public project:
docker images
docker tag 84581e99d807 harbor.od.com/public/nginx:v1.7.9
docker push harbor.od.com/public/nginx:v1.7.9
This fails because you have not logged in to the private registry; log in first:
docker login harbor.od.com
Then push the image:
docker push harbor.od.com/public/nginx:v1.7.9

Check the project in the web UI: the image has been uploaded and the registry is ready.
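The tag-then-push pattern repeats for every image, so the target name can be derived with a helper. A sketch: `harbor_target` is our name, `harbor.od.com/public` is the project created above, and the input is assumed to include an explicit tag:

```shell
# harbor_target IMAGE:TAG -- map a public image reference to its name in the
# harbor.od.com/public project, e.g. nginx:1.7.9 -> harbor.od.com/public/nginx:v1.7.9
harbor_target() {
    image=${1%%:*}          # part before the colon (may include a registry path)
    tag=${1#*:}             # part after the colon: the tag
    image=${image##*/}      # keep only the last path component (drops docker.io/library/)
    echo "harbor.od.com/public/${image}:v${tag}"
}
```

Usage: `docker tag nginx:1.7.9 "$(harbor_target nginx:1.7.9)" && docker push "$(harbor_target nginx:1.7.9)"`.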

7、Deploying the Master Node Services

7.1、Deploying the etcd Cluster

  • Cluster plan

Hostname             Role            IP
HDSS7-12.host.com    etcd leader     10.4.7.12
HDSS7-21.host.com    etcd follower   10.4.7.21
HDSS7-22.host.com    etcd follower   10.4.7.22


7.2、Create the CA Config File Based on the Root Certificate

vim /opt/certs/ca-config.json


{
    "signing": {
        "default": {
            "expiry": "175200h"
        },
        "profiles": {
            "server": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth"
                ]
            },
            "client": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"
                ]
            },
            "peer": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}

vim /opt/certs/etcd-peer-csr.json


{
    "CN": "etcd-peer",
    "hosts": [
        "10.4.7.11",
        "10.4.7.12",
        "10.4.7.21",
        "10.4.7.22"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}

Sign the etcd peer certificate with the CA:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json | cfssl-json -bare etcd-peer

7.3、Installing etcd

Create the etcd user:

useradd -s /sbin/nologin -M etcd

etcd download address

Deploy on HDSS7-12 first; installation on the other two nodes is similar.

Place the downloaded etcd package in /opt/src, then extract it to /opt:

mkdir -p /data/etcd  /data/logs/etcd-server  /opt/src  /opt/etcd/certs
tar xf etcd-v3.1.8-linux-amd64.tar.gz -C /opt
mv /opt/etcd-v3.1.8-linux-amd64  /opt/etcd-v3.1.8
ln -s  /opt/etcd-v3.1.8  /opt/etcd
cd /opt/etcd/certs
scp hdss7-200:/opt/certs/ca.pem .
scp hdss7-200:/opt/certs/etcd-peer.pem .
scp hdss7-200:/opt/certs/etcd-peer-key.pem .

chown -R etcd.etcd /opt/etcd/certs/
chmod 600 etcd-peer-key.pem
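Private keys should stay at mode 600, as set above; a scripted check (a sketch assuming GNU coreutils `stat`; `check_key_mode` is our helper name):

```shell
# check_key_mode FILE -- verify a private key file is mode 600
# (readable and writable by its owner only).
check_key_mode() {
    mode=$(stat -c '%a' "$1")
    if [ "$mode" = "600" ]; then
        echo "OK: $1 is 600"
    else
        echo "BAD: $1 is $mode (expected 600)"
        return 1
    fi
}
```

Usage: `check_key_mode /opt/etcd/certs/etcd-peer-key.pem`.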

vim /opt/etcd/etcd-server-startup.sh

#!/bin/sh
./etcd --name etcd-server-7-12 \
       --data-dir /data/etcd/etcd-server \
       --listen-peer-urls https://10.4.7.12:2380 \
       --listen-client-urls https://10.4.7.12:2379,http://127.0.0.1:2379 \
       --quota-backend-bytes 8000000000 \
       --initial-advertise-peer-urls https://10.4.7.12:2380 \
       --advertise-client-urls https://10.4.7.12:2379,http://127.0.0.1:2379 \
       --initial-cluster  etcd-server-7-12=https://10.4.7.12:2380,etcd-server-7-21=https://10.4.7.21:2380,etcd-server-7-22=https://10.4.7.22:2380 \
       --ca-file ./certs/ca.pem \
       --cert-file ./certs/etcd-peer.pem \
       --key-file ./certs/etcd-peer-key.pem \
       --client-cert-auth  \
       --trusted-ca-file ./certs/ca.pem \
       --peer-ca-file ./certs/ca.pem \
       --peer-cert-file ./certs/etcd-peer.pem \
       --peer-key-file ./certs/etcd-peer-key.pem \
       --peer-client-cert-auth \
       --peer-trusted-ca-file ./certs/ca.pem \
       --log-output stdout

chmod +x /opt/etcd/etcd-server-startup.sh

Install supervisor to manage the etcd process:

yum install -y supervisor
systemctl start supervisord
systemctl enable supervisord

vim /etc/supervisord.d/etcd-server.ini
[program:etcd-server-7-12]
command=/opt/etcd/etcd-server-startup.sh                        ; the program (relative uses PATH, can take args)
numprocs=1                                                      ; number of processes copies to start (def 1)
directory=/opt/etcd                                             ; directory to cwd to before exec (def no cwd)
autostart=true                                                  ; start at supervisord start (default: true)
autorestart=true                                                ; retstart at unexpected quit (default: true)
startsecs=22                                                    ; number of secs prog must stay running (def. 1)
startretries=3                                                  ; max # of serial start failures (default 3)
exitcodes=0,2                                                   ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                 ; signal used to kill process (default TERM)
stopwaitsecs=10                                                 ; max num secs to wait b4 SIGKILL (default 10)
user=etcd                                                       ; setuid to this UNIX account to run the program
redirect_stderr=false                                           ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/etcd-server/etcd.stdout.log           ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                    ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                        ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                     ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                     ; emit events on stdout writes (default false)
stderr_logfile=/data/logs/etcd-server/etcd.stderr.log           ; stderr log path, NONE for none; default AUTO
stderr_logfile_maxbytes=64MB                                    ; max # logfile bytes b4 rotation (default 50MB)
stderr_logfile_backups=4                                        ; # of stderr logfile backups (default 10)
stderr_capture_maxbytes=1MB                                     ; number of bytes in 'capturemode' (default 0)
stderr_events_enabled=false                                     ; emit events on stderr writes (default false)

chown -R etcd.etcd /data/etcd /opt/etcd-v3.1.8/ /data/logs/etcd-server/


[root@hdss7-12 certs]# supervisorctl start all
etcd-server-7-12: started
[root@hdss7-12 certs]# supervisorctl status   
etcd-server-7-12                 RUNNING   pid 6692, uptime 0:00:05

Cluster health check:
[root@hdss7-12 ~]# /opt/etcd/etcdctl cluster-health
member 988139385f78284 is healthy: got healthy result from http://127.0.0.1:2379
member 5a0ef2a004fc4349 is healthy: got healthy result from http://127.0.0.1:2379
member f4a0cb0a765574a8 is healthy: got healthy result from http://127.0.0.1:2379
cluster is healthy

[root@hdss7-12 ~]# /opt/etcd/etcdctl member list
988139385f78284: name=etcd-server-7-22 peerURLs=https://10.4.7.22:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.22:2379 isLeader=false
5a0ef2a004fc4349: name=etcd-server-7-21 peerURLs=https://10.4.7.21:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.21:2379 isLeader=false
f4a0cb0a765574a8: name=etcd-server-7-12 peerURLs=https://10.4.7.12:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.12:2379 isLeader=true
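For scripting or monitoring, the current leader can be extracted from the `etcdctl member list` output shown above (a sketch; `etcd_leader` is our helper name):

```shell
# etcd_leader -- read `etcdctl member list` output on stdin and print the
# name of the member reporting isLeader=true.
etcd_leader() {
    awk '/isLeader=true/ {
        for (i = 1; i <= NF; i++)
            if ($i ~ /^name=/) { sub(/^name=/, "", $i); print $i }
    }'
}
```

Usage: `/opt/etcd/etcdctl member list | etcd_leader`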

7.4、Note: if the etcd cluster is unhealthy, check whether time is synchronized across the nodes

systemctl restart chronyd

chronyc sourcestats

8、Deploying the kube-apiserver Cluster

8.1、Cluster Plan

Hostname             Role                IP
HDSS7-21.host.com    kube-apiserver      10.4.7.21
HDSS7-22.host.com    kube-apiserver      10.4.7.22
HDSS7-11.host.com    L4 load balancer    10.4.7.11
HDSS7-12.host.com    L4 load balancer    10.4.7.12

The load-balancing layer is an nginx layer-4 (stream) proxy, with keepalived providing a VIP of 10.4.7.10 in front of the two kube-apiservers for high availability.

8.2、Environment Preparation

HDSS7-21 is used as the example; the other node is installed the same way.

8.2.1、Download the Software, Extract It, and Create a Symlink

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.15.md#downloads-for-v1154

Choose the 1.15.4 package under "Server Binaries".

Upload it to HDSS7-21:

cd /opt/src
du -sh kubernetes-server-linux-amd64.tar.gz
tar xf kubernetes-server-linux-amd64.tar.gz -C /opt
[root@hdss7-21 opt]# ls
containerd  etcd  etcd-v3.1.8  kubernetes  src
[root@hdss7-21 opt]# cd kubernetes/
[root@hdss7-21 kubernetes]# ls
addons  kubernetes-src.tar.gz  LICENSES  server
[root@hdss7-21 kubernetes]# rm -rf kubernetes-src.tar.gz 
[root@hdss7-21 kubernetes]# cd ..
[root@hdss7-21 opt]# ls
containerd  etcd  etcd-v3.1.8  kubernetes  src
[root@hdss7-21 opt]# mv kubernetes kubernetes-v1.15.4
[root@hdss7-21 opt]# ln -s kubernetes-v1.15.4/ kubernetes
[root@hdss7-21 opt]# ls -al
total 0
drwxr-xr-x.  6 root root 110 Jul 16 01:06 .
dr-xr-xr-x. 18 root root 236 Jul 15 01:31 ..
drwx--x--x   4 root root  28 Jul 15 01:40 containerd
lrwxrwxrwx   1 root root  11 Jul 15 13:08 etcd -> etcd-v3.1.8
drwxrwxr-x   4 etcd etcd 166 Jul 15 14:12 etcd-v3.1.8
lrwxrwxrwx   1 root root  19 Jul 16 01:06 kubernetes -> kubernetes-v1.15.4/
drwxr-xr-x   4 root root  50 Jul 16 01:05 kubernetes-v1.15.4
drwxr-xr-x   2 root root  88 Jul 16 00:57 src

The deleted kubernetes-src.tar.gz is the Go source package, which is not needed:

rm -rf kubernetes-src.tar.gz

Also delete the unneeded Docker image tarballs; they are only used by the kubeadm deployment method, which we are not using:

[root@hdss7-21 bin]# ls -al
total 1549316
drwxr-xr-x 2 root root      4096 Sep 18  2019 .
drwxr-xr-x 3 root root        17 Sep 18  2019 ..
-rwxr-xr-x 1 root root  43538912 Sep 18  2019 apiextensions-apiserver
-rwxr-xr-x 1 root root 100605984 Sep 18  2019 cloud-controller-manager
-rw-r--r-- 1 root root         8 Sep 18  2019 cloud-controller-manager.docker_tag
-rw-r--r-- 1 root root 144495104 Sep 18  2019 cloud-controller-manager.tar
-rwxr-xr-x 1 root root 200722064 Sep 18  2019 hyperkube
-rwxr-xr-x 1 root root  40186304 Sep 18  2019 kubeadm
-rwxr-xr-x 1 root root 164563360 Sep 18  2019 kube-apiserver
-rw-r--r-- 1 root root         8 Sep 18  2019 kube-apiserver.docker_tag
-rw-r--r-- 1 root root 208452096 Sep 18  2019 kube-apiserver.tar
-rwxr-xr-x 1 root root 116462624 Sep 18  2019 kube-controller-manager
-rw-r--r-- 1 root root         8 Sep 18  2019 kube-controller-manager.docker_tag
-rw-r--r-- 1 root root 160351744 Sep 18  2019 kube-controller-manager.tar
-rwxr-xr-x 1 root root  42985504 Sep 18  2019 kubectl
-rwxr-xr-x 1 root root 119690288 Sep 18  2019 kubelet
-rwxr-xr-x 1 root root  36987488 Sep 18  2019 kube-proxy
-rw-r--r-- 1 root root         8 Sep 18  2019 kube-proxy.docker_tag
-rw-r--r-- 1 root root  84282368 Sep 18  2019 kube-proxy.tar
-rwxr-xr-x 1 root root  38786144 Sep 18  2019 kube-scheduler
-rw-r--r-- 1 root root         8 Sep 18  2019 kube-scheduler.docker_tag
-rw-r--r-- 1 root root  82675200 Sep 18  2019 kube-scheduler.tar
-rwxr-xr-x 1 root root   1648224 Sep 18  2019 mounter
[root@hdss7-21 bin]# pwd
/opt/kubernetes/server/bin
[root@hdss7-21 bin]# rm -f *.tar
[root@hdss7-21 bin]# rm -f *_tag
[root@hdss7-21 bin]# ls -al
total 884968
drwxr-xr-x 2 root root       239 Jul 16 01:14 .
drwxr-xr-x 3 root root        17 Sep 18  2019 ..
-rwxr-xr-x 1 root root  43538912 Sep 18  2019 apiextensions-apiserver
-rwxr-xr-x 1 root root 100605984 Sep 18  2019 cloud-controller-manager
-rwxr-xr-x 1 root root 200722064 Sep 18  2019 hyperkube
-rwxr-xr-x 1 root root  40186304 Sep 18  2019 kubeadm
-rwxr-xr-x 1 root root 164563360 Sep 18  2019 kube-apiserver
-rwxr-xr-x 1 root root 116462624 Sep 18  2019 kube-controller-manager
-rwxr-xr-x 1 root root  42985504 Sep 18  2019 kubectl
-rwxr-xr-x 1 root root 119690288 Sep 18  2019 kubelet
-rwxr-xr-x 1 root root  36987488 Sep 18  2019 kube-proxy
-rwxr-xr-x 1 root root  38786144 Sep 18  2019 kube-scheduler
-rwxr-xr-x 1 root root   1648224 Sep 18  2019 mounter

8.2.3、Signing the Client Certificate

On the ops host HDSS7-200:

JSON config for the certificate signing request (CSR) of the client certificate that apiserver uses when talking to etcd:

vim /opt/certs/client-csr.json


{
    "CN": "k8s-node",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}

Generate the certificate:

[root@hdss7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json |cfssl-json -bare client
2022/07/16 01:25:17 [INFO] generate received request
2022/07/16 01:25:17 [INFO] received CSR
2022/07/16 01:25:17 [INFO] generating key: rsa-2048
2022/07/16 01:25:17 [INFO] encoded CSR
2022/07/16 01:25:17 [INFO] signed certificate with serial number 302479965363744006748243169863146862201696131758
2022/07/16 01:25:17 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

8.2.4、Signing the Server Certificate

JSON config for the certificate signing request (CSR) of the apiserver certificate:

vim /opt/certs/apiserver-csr.json


{
    "CN": "apiserver",
    "hosts": [
        "127.0.0.1",
        "192.168.0.1",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local",
        "10.4.7.10",
        "10.4.7.21",
        "10.4.7.22",
        "10.4.7.23"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}

Generate the certificate:

[root@hdss7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json |cfssl-json -bare apiserver
2022/07/16 01:38:06 [INFO] generate received request
2022/07/16 01:38:06 [INFO] received CSR
2022/07/16 01:38:06 [INFO] generating key: rsa-2048
2022/07/16 01:38:07 [INFO] encoded CSR
2022/07/16 01:38:07 [INFO] signed certificate with serial number 170669182476926994376800557812860647625281936559
2022/07/16 01:38:07 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

8.2.5、Copy the Certificates to Each Compute Node and Create the Configuration

On HDSS7-21:

[root@hdss7-21 bin]# mkdir cert
[root@hdss7-21 cert]# pwd
/opt/kubernetes/server/bin/cert
[root@hdss7-21 cert]#

[root@hdss7-21 cert]# scp hdss7-200:/opt/certs/ca-key.pem .
root@hdss7-200's password: 
ca-key.pem                                                                 100% 1679     1.6MB/s   00:00  
[root@hdss7-21 cert]# scp hdss7-200:/opt/certs/client.pem .
root@hdss7-200's password: 
client.pem                                                                 100% 1363     1.3MB/s   00:00  
[root@hdss7-21 cert]# scp hdss7-200:/opt/certs/client-key.pem .
root@hdss7-200's password: 
client-key.pem   
[root@hdss7-21 cert]# ls -al
total 24
drwxr-xr-x 2 root root  118 Jul 16 01:50 .
drwxr-xr-x 3 root root  252 Jul 16 01:45 ..
-rw------- 1 root root 1679 Jul 16 01:47 ca-key.pem
-rw-r--r-- 1 root root 1346 Jul 16 01:47 ca.pem
-rw------- 1 root root 1675 Jul 16 01:48 client-key.pem
-rw-r--r-- 1 root root 1363 Jul 16 01:48 client.pem
-rw------- 1 root root 1679 Jul 16 01:50 server-key.pem
-rw-r--r-- 1 root root 1594 Jul 16 01:50 server.pem
[root@hdss7-21 cert]#

8.2.6、Create the apiserver Configuration Files

[root@hdss7-21 bin]# cd conf/
[root@hdss7-21 conf]# pwd
/opt/kubernetes/server/bin/conf
[root@hdss7-21 conf]# vim audit.yaml
[root@hdss7-21 conf]# 



apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]

  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]

  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]

  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"

  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]

  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]

  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.

  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"

./kube-apiserver --help

./kube-apiserver --help|grep -A 5 target-ram-mb

apiserver startup script:

cat kube-apiserver.sh


#!/bin/bash
./kube-apiserver \
  --apiserver-count 2 \
  --audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log \
  --audit-policy-file ./conf/audit.yaml \
  --authorization-mode RBAC \
  --client-ca-file ./cert/ca.pem \
  --requestheader-client-ca-file ./cert/ca.pem \
  --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
  --etcd-cafile ./cert/ca.pem \
  --etcd-certfile ./cert/client.pem \
  --etcd-keyfile ./cert/client-key.pem \
  --etcd-servers https://10.4.7.12:2379,https://10.4.7.21:2379,https://10.4.7.22:2379 \
  --service-account-key-file ./cert/ca-key.pem \
  --service-cluster-ip-range 192.168.0.0/16 \
  --service-node-port-range 3000-29999 \
  --target-ram-mb=1024 \
  --kubelet-client-certificate ./cert/client.pem \
  --kubelet-client-key ./cert/client-key.pem \
  --log-dir  /data/logs/kubernetes/kube-apiserver \
  --tls-cert-file ./cert/apiserver.pem \
  --tls-private-key-file ./cert/apiserver-key.pem \
  --v 2

Make the script executable and create the log directory:


[root@hdss7-21 bin]# chmod +x /opt/kubernetes/server/bin/kube-apiserver.sh
[root@hdss7-21 bin]# mkdir -p /data/logs/kubernetes/kube-apiserver

8.2.7、Create the supervisor Config to Automatically Start and Restart apiserver

vim /etc/supervisord.d/kube-apiserver.ini


[program:kube-apiserver]
command=/opt/kubernetes/server/bin/kube-apiserver.sh            ; the program (relative uses PATH, can take args)
numprocs=1                                                      ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                            ; directory to cwd to before exec (def no cwd)
autostart=true                                                  ; start at supervisord start (default: true)
autorestart=true                                                ; retstart at unexpected quit (default: true)
startsecs=22                                                    ; number of secs prog must stay running (def. 1)
startretries=3                                                  ; max # of serial start failures (default 3)
exitcodes=0,2                                                   ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                 ; signal used to kill process (default TERM)
stopwaitsecs=10                                                 ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                       ; setuid to this UNIX account to run the program
redirect_stderr=false                                           ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stdout.log        ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                    ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                        ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                     ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                     ; emit events on stdout writes (default false)
stderr_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stderr.log        ; stderr log path, NONE for none; default AUTO
stderr_logfile_maxbytes=64MB                                    ; max # logfile bytes b4 rotation (default 50MB)
stderr_logfile_backups=4                                        ; # of stderr logfile backups (default 10)
stderr_capture_maxbytes=1MB                                     ; number of bytes in 'capturemode' (default 0)
stderr_events_enabled=false                                     ; emit events on stderr writes (default false)

Load the new config and verify that the process started:

supervisorctl update

supervisorctl status

8.2.8、Configure the Layer-4 Reverse Proxy

On the load-balancer hosts (HDSS7-11 and HDSS7-12), install nginx:

yum install nginx -y

vim /etc/nginx/nginx.conf


stream {
    upstream kube-apiserver {
        server 10.4.7.21:6443     max_fails=3 fail_timeout=30s;
        server 10.4.7.22:6443     max_fails=3 fail_timeout=30s;
    }
    server {
        listen 7443;
        proxy_connect_timeout 2s;
        proxy_timeout 900s;
        proxy_pass kube-apiserver;
    }
}

After configuring, the syntax check may report the following error:

[root@hdss7-12 nginx]# nginx -t
nginx: [emerg] unknown directive "stream" in /etc/nginx/nginx.conf:86

# Install the repo file
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
# Install epel-release first
yum -y install epel-release

# The "stream" directive lives in a separate modules package
yum -y install nginx-all-modules.noarch

[root@hdss7-12 nginx]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

keepalived configuration:

yum install keepalived -y

vim /etc/keepalived/check_port.sh


#!/bin/bash
# keepalived port-monitoring script.
# Usage: reference it from keepalived.conf with a vrrp_script block:
# vrrp_script check_port {
#     script "/etc/keepalived/check_port.sh 6379"  # port to monitor
#     interval 2                                   # check interval in seconds
# }
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
        PORT_PROCESS=`ss -lt|grep $CHK_PORT|wc -l`
        if [ $PORT_PROCESS -eq 0 ];then
                echo "Port $CHK_PORT Is Not Used,End."
                exit 1
        fi
else
        echo "Check Port Cant Be Empty!"
        exit 1
fi

Master node keepalived configuration (vim /etc/keepalived/keepalived.conf):


! Configuration File for keepalived

global_defs {
   router_id 10.4.7.11

}

vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 251
    priority 100
    advert_int 1
    mcast_src_ip 10.4.7.11
    nopreempt

    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
         chk_nginx
    }
    virtual_ipaddress {
        10.4.7.10
    }
}

Backup node keepalived configuration:

vim /etc/keepalived/keepalived.conf


! Configuration File for keepalived
global_defs {
	router_id 10.4.7.12
}
vrrp_script chk_nginx {
	script "/etc/keepalived/check_port.sh 7443"
	interval 2
	weight -20
}
vrrp_instance VI_1 {
	state BACKUP
	interface eth0
	virtual_router_id 251
	mcast_src_ip 10.4.7.12
	priority 90
	advert_int 1
	authentication {
		auth_type PASS
		auth_pass 11111111
	}
	track_script {
		chk_nginx
	}
	virtual_ipaddress {
		10.4.7.10
	}
}

systemctl start keepalived

systemctl enable keepalived

Note the permissions on the keepalived configuration file keepalived.conf.

The error message is:

Configuration file '/etc/keepalived/keepalived.conf' is not a regular non-executable file
[root@hdss7-11 keepalived]# ls -al 
total 24
drwxr-xr-x   2 root root   77 Jul 16 15:17 .
drwxr-xr-x. 77 root root 8192 Jul 16 06:50 ..
-rwxr-xr-x   1 root root  280 Jul 16 15:17 check_port.sh
-rw-r--r--   1 root root  513 Jul 16 07:05 keepalived.conf
-rwxr-xr-x   1 root root 3598 Jul 16 04:00 keepalived.conf.bak

[root@hdss7-11 keepalived]# chmod 644 keepalived.conf
[root@hdss7-11 keepalived]# ls -al 
total 24
drwxr-xr-x   2 root root   77 Jul 16 15:17 .
drwxr-xr-x. 77 root root 8192 Jul 16 06:50 ..
-rwxr-xr-x   1 root root  280 Jul 16 15:17 check_port.sh
-rw-r--r--   1 root root  513 Jul 16 07:05 keepalived.conf
-rwxr-xr-x   1 root root 3598 Jul 16 04:00 keepalived.conf.bak


9. Install the master-node controller and scheduler services

9.1 Deploy controller-manager

Cluster plan

Hostname             Role                  IP
HDSS7-21.host.com    controller-manager    10.4.7.21
HDSS7-22.host.com    controller-manager    10.4.7.22

Create the startup script

vim /opt/kubernetes/server/bin/kube-controller-manager.sh


#!/bin/sh
./kube-controller-manager \
  --cluster-cidr 172.7.0.0/16 \
  --leader-elect true \
  --log-dir /data/logs/kubernetes/kube-controller-manager \
  --master http://127.0.0.1:8080 \
  --service-account-private-key-file ./cert/ca-key.pem \
  --service-cluster-ip-range 192.168.0.0/16 \
  --root-ca-file ./cert/ca.pem \
  --v 2
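Three non-overlapping networks are in play in these flags: the host network 10.4.7.0/24, the pod network --cluster-cidr 172.7.0.0/16, and the service network --service-cluster-ip-range 192.168.0.0/16. A small sketch of the address space a /16 leaves:

```shell
#!/bin/sh
# Address math for the /16 ranges used above: 32 - 16 = 16 host bits.
PREFIX=16
ADDRESSES=$(( 1 << (32 - PREFIX) ))      # 65536 total
USABLE=$(( ADDRESSES - 2 ))              # minus network + broadcast
echo "a /$PREFIX provides $USABLE usable addresses"
```

Keeping these three ranges disjoint is what lets routing, NAT, and ipvs rules tell pod, service, and host traffic apart.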

Create the log directory and make the script executable

[root@hdss7-21 bin]# chmod +x /opt/kubernetes/server/bin/kube-controller-manager.sh
[root@hdss7-21 bin]# mkdir -p /data/logs/kubernetes/kube-controller-manager

Create the supervisor config file to bring up kube-controller-manager

vim /etc/supervisord.d/kube-controller-manager.ini


[program:kube-controller-manager]
command=/opt/kubernetes/server/bin/kube-controller-manager.sh                     ; the program (relative uses PATH, can take args)
numprocs=1                                                                        ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                                              ; directory to cwd to before exec (def no cwd)
autostart=true                                                                    ; start at supervisord start (default: true)
autorestart=true                                                                  ; restart at unexpected quit (default: true)
startsecs=22                                                                      ; number of secs prog must stay running (def. 1)
startretries=3                                                                    ; max # of serial start failures (default 3)
exitcodes=0,2                                                                     ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                                   ; signal used to kill process (default TERM)
stopwaitsecs=10                                                                   ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                                         ; setuid to this UNIX account to run the program
redirect_stderr=false                                                             ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-controller-manager/controll.stdout.log  ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                                      ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                                          ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                                       ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                                       ; emit events on stdout writes (default false)
stderr_logfile=/data/logs/kubernetes/kube-controller-manager/controll.stderr.log  ; stderr log path, NONE for none; default AUTO
stderr_logfile_maxbytes=64MB                                                      ; max # logfile bytes b4 rotation (default 50MB)
stderr_logfile_backups=4                                                          ; # of stderr logfile backups (default 10)
stderr_capture_maxbytes=1MB                                                       ; number of bytes in 'capturemode' (default 0)
stderr_events_enabled=false                                                       ; emit events on stderr writes (default false)
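One consequence of the logging settings worth knowing for capacity planning: each of the two streams (stdout and stderr) keeps the active 64MB file plus 4 rotated backups. A quick worst-case estimate:

```shell
#!/bin/sh
# Worst-case log disk usage implied by the supervisor settings above.
MAXBYTES_MB=64   # stdout_logfile_maxbytes / stderr_logfile_maxbytes
BACKUPS=4        # *_logfile_backups
STREAMS=2        # stdout + stderr
echo "up to $(( MAXBYTES_MB * (BACKUPS + 1) * STREAMS ))MB of logs per program"
```

The same settings are reused for every component below, so multiply by the number of supervised programs when sizing /data/logs.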

Start the service and check

[root@hdss7-22 ~]# supervisorctl update
[root@hdss7-22 ~]# supervisorctl status
etcd-server-7-22                 RUNNING   pid 6593, uptime 0:34:34
kube-apiserver                   RUNNING   pid 6592, uptime 0:34:34
kube-controller-manager          RUNNING   pid 6320, uptime 0:34:39
[root@hdss7-22 ~]# 

9.2 Deploy the kube-scheduler component

Create the startup script

vim /opt/kubernetes/server/bin/kube-scheduler.sh

#!/bin/sh
./kube-scheduler \
  --leader-elect  \
  --log-dir /data/logs/kubernetes/kube-scheduler \
  --master http://127.0.0.1:8080 \
  --v 2

Create the log directory and make the script executable

[root@hdss7-21 bin]# chmod +x /opt/kubernetes/server/bin/kube-scheduler.sh
[root@hdss7-21 bin]# mkdir -p /data/logs/kubernetes/kube-scheduler

Create the supervisor config file

vim /etc/supervisord.d/kube-scheduler.ini


[program:kube-scheduler]
command=/opt/kubernetes/server/bin/kube-scheduler.sh                     ; the program (relative uses PATH, can take args)
numprocs=1                                                               ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                                     ; directory to cwd to before exec (def no cwd)
autostart=true                                                           ; start at supervisord start (default: true)
autorestart=true                                                         ; restart at unexpected quit (default: true)
startsecs=22                                                             ; number of secs prog must stay running (def. 1)
startretries=3                                                           ; max # of serial start failures (default 3)
exitcodes=0,2                                                            ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                          ; signal used to kill process (default TERM)
stopwaitsecs=10                                                          ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                                ; setuid to this UNIX account to run the program
redirect_stderr=false                                                    ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                             ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                                 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                              ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                              ; emit events on stdout writes (default false)
stderr_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stderr.log ; stderr log path, NONE for none; default AUTO
stderr_logfile_maxbytes=64MB                                             ; max # logfile bytes b4 rotation (default 50MB)
stderr_logfile_backups=4                                                 ; # of stderr logfile backups (default 10)
stderr_capture_maxbytes=1MB                                              ; number of bytes in 'capturemode' (default 0)
stderr_events_enabled=false                                              ; emit events on stderr writes (default false)

Start the service and check


[root@hdss7-21 bin]# supervisorctl update
[root@hdss7-21 ~]# supervisorctl status
etcd-server-7-21                 RUNNING   pid 6703, uptime 0:00:44
kube-apiserver                   RUNNING   pid 6761, uptime 0:00:43
kube-controller-manager          RUNNING   pid 6295, uptime 0:00:48
kube-scheduler                   RUNNING   pid 6293, uptime 0:00:48

Check component status

[root@hdss7-21 ~]#  kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok           
controller-manager   Healthy   ok           
etcd-2               Healthy   {"health": "true"}   
etcd-0               Healthy   {"health": "true"}   
etcd-1               Healthy   {"health": "true"}   

10. Deploy node services

10.1 Deploy kubelet

Cluster plan

Hostname             Role      IP
HDSS7-21.host.com    kubelet   10.4.7.21
HDSS7-22.host.com    kubelet   10.4.7.22

10.1.1 Issue the kubelet certificate

Operate on the ops host HDSS7-200

Create the JSON config file that generates the certificate signing request (CSR)

kubelet-csr.json


{
    "CN": "kubelet-node",
    "hosts": [
    "127.0.0.1",
    "10.4.7.10",
    "10.4.7.21",
    "10.4.7.22",
    "10.4.7.23",
    "10.4.7.24",
    "10.4.7.25",
    "10.4.7.26",
    "10.4.7.27",
    "10.4.7.28"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
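The hosts list above deliberately pre-provisions IPs 10.4.7.23-28 for nodes that do not exist yet, so adding a node later does not require re-issuing the certificate. A tiny check that the currently planned nodes are covered (the list is inlined from the json above):

```shell
#!/bin/sh
# hosts field copied from kubelet-csr.json above.
hosts="127.0.0.1 10.4.7.10 10.4.7.21 10.4.7.22 10.4.7.23 10.4.7.24 10.4.7.25 10.4.7.26 10.4.7.27 10.4.7.28"

for ip in 10.4.7.21 10.4.7.22; do
    case " $hosts " in
        *" $ip "*) echo "$ip covered" ;;
        *)         echo "$ip MISSING from cert hosts" ;;
    esac
done
```

If a node IP is missing from the SAN list, TLS connections to that kubelet fail with a certificate hostname mismatch.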

Generate the kubelet certificate and private key

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet
[root@hdss7-200 certs]# ls |grep kubelet
kubelet.csr
kubelet-csr.json
kubelet-key.pem
kubelet.pem
[root@hdss7-200 certs]# 

10.1.2 Copy to each compute node and create the config

Copy the certificate and private key; note the private key file should have mode 600

/opt/kubernetes/server/bin/cert

                                                                                      
[root@hdss7-22 ~]# scp hdss7-200:/opt/certs/kubelet.pem .
root@hdss7-200's password: 
kubelet.pem                                                                                                                                              100% 1468     1.1MB/s   00:00  
[root@hdss7-22 ~]# scp hdss7-200:/opt/certs/kubelet-key.pem .
root@hdss7-200's password: 
kubelet-key.pem                                                                                                                                          100% 1679     1.3MB/s   00:00  
[root@hdss7-21 ~]# mv kube* /opt/kubernetes/server/bin/cert
[root@hdss7-21 cert]# ls -l |grep kube
-rw------- 1 root root 1679 Jul 17 03:54 kubelet-key.pem
-rw-r--r-- 1 root root 1468 Jul 17 03:54 kubelet.pem
[root@hdss7-21 cert]# 

Create the config

set-cluster

cd /opt/kubernetes/server/bin/conf


[root@hdss7-21 conf]# kubectl config set-cluster myk8s \
  --certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
  --embed-certs=true \
  --server=https://10.4.7.10:7443 \
  --kubeconfig=kubelet.kubeconfig

Cluster "myk8s" set.
[root@hdss7-21 conf]# cat kubelet.kubeconfig 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0RENDQXB5Z0F3SUJBZ0lVTDJTeXcvNzc4Q1dtUnpOcGwrTkRBSHFudmxjd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lERUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjJKbGFXcHBibWN4RURBT0JnTlZCQWNUQjJKbAphV3BwYm1jeEN6QUpCZ05WQkFvVEFtOWtNUXd3Q2dZRFZRUUxFd052Y0hNeEVqQVFCZ05WQkFNVENVOXNaR0p2CmVVVmtkVEFlRncweU1qQTNNVFF4TnpBeE1EQmFGdzAwTWpBM01Ea3hOekF4TURCYU1HQXhDekFKQmdOVkJBWVQKQWtOT01SQXdEZ1lEVlFRSUV3ZGlaV2xxYVc1bk1SQXdEZ1lEVlFRSEV3ZGlaV2xxYVc1bk1Rc3dDUVlEVlFRSwpFd0p2WkRFTU1Bb0dBMVVFQ3hNRGIzQnpNUkl3RUFZRFZRUURFd2xQYkdSaWIzbEZaSFV3Z2dFaU1BMEdDU3FHClNJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUUM2OWExV2JqL3VzdDhvVVNEbGR6QXpnU1RQUlZSb3lWSncKaG0rSDUzdEY5QlFjWks4YXU1ZytRYkxmQ3htamh2QlBsS29xY0kwdkZtcUMySWdnc2krY2Ftb292TkhNTTBYMApkdU5wMjZiYVNjZ3Q2OXptcnQzL2ZETTV2WWF4OW95N1RwelB1bDlldnhNZ1dlWHBkQk9hbW1NWjVpSzR0NnJTCkkrMzFabjAzZGlYbkdqR3lEbDgwRjd3VzRWSHhKZ3UvRjFJdzJpREx6N1BPbmxOZXF3a00rYUFhanFOc1RMUWwKNktFakVScTFIKzgwZklvQVpFZTZPVGlKaERScXdKbDBoVmZ5TzRMbVVsYktxZXFDcGFLVDgzb2UrQUk0a2hqawp1NTdDcnNjRmdabG1yMHI3bTl2NDAwbEtRcS8yblJlSVpBU0QvaC9GSDJSdXAzalY5VlBKQWdNQkFBR2paakJrCk1BNEdBMVVkRHdFQi93UUVBd0lCQmpBU0JnTlZIUk1CQWY4RUNEQUdBUUgvQWdFQ01CMEdBMVVkRGdRV0JCUzUKU01weTJMYWFHdjgyUyswSFQ5UC8zNnBEcWpBZkJnTlZIU01FR0RBV2dCUzVTTXB5MkxhYUd2ODJTKzBIVDlQLwozNnBEcWpBTkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQWJXaVJhYkVjNDJhUnEwNys2azdQL2lqWDZ3cGp3eExzCm9MYU51WHB6WG9zdHpjVi81ZkpMdkFHTFpsSzNvOE94cGlvVys0YWNSUVlDS3V5Y1d2eVVRZjRLVjRVbE1zQ3oKY0NDVzRoOG9OclJrZGlLRUFrVWYrbHE4NCtsZ21PNytVUUZCcEJPRFhZaCt1UHVhVk1LcUFieFp5UXZEcmJ5TwpvdmczYmNCNzZYdFArTVJXMjZSNkwzcGxKR2Y1QnVWOTBDbE5VVTFma1BZZ0hCd0JXSlQ2WVN3ZDg5TDRHQWVMCmh2UExFV2lkTUdQd3BURm9SM01JZ0FGUW5pK2F2Ymw2L0VRalJ4RWZoR3hCOXNzMHBXR3RZT3JrRHhQN2RlT0YKaEU5TDBhQUhtRkVRcmFJdUU1emdLSFVSWlJlL3lNNWllUmdIMFNnQ29PTndDWVRnMS9PVk13PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://10.4.7.10:7443
  name: myk8s
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []


[root@hdss7-21 conf]# echo "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0RENDQXB5Z0F3SUJBZ0lVTDJTeXcvNzc4Q1dtUnpOcGwrTkRBSHFudmxjd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lERUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjJKbGFXcHBibWN4RURBT0JnTlZCQWNUQjJKbAphV3BwYm1jeEN6QUpCZ05WQkFvVEFtOWtNUXd3Q2dZRFZRUUxFd052Y0hNeEVqQVFCZ05WQkFNVENVOXNaR0p2CmVVVmtkVEFlRncweU1qQTNNVFF4TnpBeE1EQmFGdzAwTWpBM01Ea3hOekF4TURCYU1HQXhDekFKQmdOVkJBWVQKQWtOT01SQXdEZ1lEVlFRSUV3ZGlaV2xxYVc1bk1SQXdEZ1lEVlFRSEV3ZGlaV2xxYVc1bk1Rc3dDUVlEVlFRSwpFd0p2WkRFTU1Bb0dBMVVFQ3hNRGIzQnpNUkl3RUFZRFZRUURFd2xQYkdSaWIzbEZaSFV3Z2dFaU1BMEdDU3FHClNJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUUM2OWExV2JqL3VzdDhvVVNEbGR6QXpnU1RQUlZSb3lWSncKaG0rSDUzdEY5QlFjWks4YXU1ZytRYkxmQ3htamh2QlBsS29xY0kwdkZtcUMySWdnc2krY2Ftb292TkhNTTBYMApkdU5wMjZiYVNjZ3Q2OXptcnQzL2ZETTV2WWF4OW95N1RwelB1bDlldnhNZ1dlWHBkQk9hbW1NWjVpSzR0NnJTCkkrMzFabjAzZGlYbkdqR3lEbDgwRjd3VzRWSHhKZ3UvRjFJdzJpREx6N1BPbmxOZXF3a00rYUFhanFOc1RMUWwKNktFakVScTFIKzgwZklvQVpFZTZPVGlKaERScXdKbDBoVmZ5TzRMbVVsYktxZXFDcGFLVDgzb2UrQUk0a2hqawp1NTdDcnNjRmdabG1yMHI3bTl2NDAwbEtRcS8yblJlSVpBU0QvaC9GSDJSdXAzalY5VlBKQWdNQkFBR2paakJrCk1BNEdBMVVkRHdFQi93UUVBd0lCQmpBU0JnTlZIUk1CQWY4RUNEQUdBUUgvQWdFQ01CMEdBMVVkRGdRV0JCUzUKU01weTJMYWFHdjgyUyswSFQ5UC8zNnBEcWpBZkJnTlZIU01FR0RBV2dCUzVTTXB5MkxhYUd2ODJTKzBIVDlQLwozNnBEcWpBTkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQWJXaVJhYkVjNDJhUnEwNys2azdQL2lqWDZ3cGp3eExzCm9MYU51WHB6WG9zdHpjVi81ZkpMdkFHTFpsSzNvOE94cGlvVys0YWNSUVlDS3V5Y1d2eVVRZjRLVjRVbE1zQ3oKY0NDVzRoOG9OclJrZGlLRUFrVWYrbHE4NCtsZ21PNytVUUZCcEJPRFhZaCt1UHVhVk1LcUFieFp5UXZEcmJ5TwpvdmczYmNCNzZYdFArTVJXMjZSNkwzcGxKR2Y1QnVWOTBDbE5VVTFma1BZZ0hCd0JXSlQ2WVN3ZDg5TDRHQWVMCmh2UExFV2lkTUdQd3BURm9SM01JZ0FGUW5pK2F2Ymw2L0VRalJ4RWZoR3hCOXNzMHBXR3RZT3JrRHhQN2RlT0YKaEU5TDBhQUhtRkVRcmFJdUU1emdLSFVSWlJlL3lNNWllUmdIMFNnQ29PTndDWVRnMS9PVk13PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo="| base64 -d
-----BEGIN CERTIFICATE-----
MIIDtDCCApygAwIBAgIUL2Syw/778CWmRzNpl+NDAHqnvlcwDQYJKoZIhvcNAQEL
BQAwYDELMAkGA1UEBhMCQ04xEDAOBgNVBAgTB2JlaWppbmcxEDAOBgNVBAcTB2Jl
aWppbmcxCzAJBgNVBAoTAm9kMQwwCgYDVQQLEwNvcHMxEjAQBgNVBAMTCU9sZGJv
eUVkdTAeFw0yMjA3MTQxNzAxMDBaFw00MjA3MDkxNzAxMDBaMGAxCzAJBgNVBAYT
AkNOMRAwDgYDVQQIEwdiZWlqaW5nMRAwDgYDVQQHEwdiZWlqaW5nMQswCQYDVQQK
EwJvZDEMMAoGA1UECxMDb3BzMRIwEAYDVQQDEwlPbGRib3lFZHUwggEiMA0GCSqG
SIb3DQEBAQUAA4IBDwAwggEKAoIBAQC69a1Wbj/ust8oUSDldzAzgSTPRVRoyVJw
hm+H53tF9BQcZK8au5g+QbLfCxmjhvBPlKoqcI0vFmqC2Iggsi+camoovNHMM0X0
duNp26baScgt69zmrt3/fDM5vYax9oy7TpzPul9evxMgWeXpdBOammMZ5iK4t6rS
I+31Zn03diXnGjGyDl80F7wW4VHxJgu/F1Iw2iDLz7POnlNeqwkM+aAajqNsTLQl
6KEjERq1H+80fIoAZEe6OTiJhDRqwJl0hVfyO4LmUlbKqeqCpaKT83oe+AI4khjk
u57CrscFgZlmr0r7m9v400lKQq/2nReIZASD/h/FH2Rup3jV9VPJAgMBAAGjZjBk
MA4GA1UdDwEB/wQEAwIBBjASBgNVHRMBAf8ECDAGAQH/AgECMB0GA1UdDgQWBBS5
SMpy2LaaGv82S+0HT9P/36pDqjAfBgNVHSMEGDAWgBS5SMpy2LaaGv82S+0HT9P/
36pDqjANBgkqhkiG9w0BAQsFAAOCAQEAbWiRabEc42aRq07+6k7P/ijX6wpjwxLs
oLaNuXpzXostzcV/5fJLvAGLZlK3o8OxpioW+4acRQYCKuycWvyUQf4KV4UlMsCz
cCCW4h8oNrRkdiKEAkUf+lq84+lgmO7+UQFBpBODXYh+uPuaVMKqAbxZyQvDrbyO
ovg3bcB76XtP+MRW26R6L3plJGf5BuV90ClNUU1fkPYgHBwBWJT6YSwd89L4GAeL
hvPLEWidMGPwpTFoR3MIgAFQni+avbl6/EQjRxEfhGxB9ss0pWGtYOrkDxP7deOF
hE9L0aAHmFEQraIuE5zgKHURZRe/yM5ieRgH0SgCoONwCYTg1/OVMw==
-----END CERTIFICATE-----

This is simply the CA certificate; you can compare it with cat ca.pem.
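The decode above works because --embed-certs=true simply base64-encodes the PEM into certificate-authority-data; nothing else is transformed. A self-contained round-trip demo of that encoding, using a placeholder string rather than a real certificate:

```shell
#!/bin/sh
# Round-trip demo: base64 encode then decode, as kubectl does with the CA PEM.
pem="-----BEGIN CERTIFICATE-----PLACEHOLDER-----END CERTIFICATE-----"
encoded=$(printf '%s' "$pem" | base64 | tr -d '\n')
decoded=$(printf '%s' "$encoded" | base64 -d)
[ "$pem" = "$decoded" ] && echo "round-trip OK"
```

This is why the kubeconfig can be copied to another node as a single self-contained file: the CA travels inside it.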

set-credentials

[root@hdss7-21 conf]# kubectl config set-credentials k8s-node --client-certificate=/opt/kubernetes/server/bin/cert/client.pem --client-key=/opt/kubernetes/server/bin/cert/client-key.pem --embed-certs=true --kubeconfig=kubelet.kubeconfig 

User "k8s-node" set.
set-context

[root@hdss7-21 conf]# kubectl config set-context myk8s-context \
  --cluster=myk8s \
  --user=k8s-node \
  --kubeconfig=kubelet.kubeconfig

Context "myk8s-context" created.
use-context

[root@hdss7-21 conf]# kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig

Switched to context "myk8s-context".
Create the role-binding resource file k8s-node.yaml, which binds the user k8s-node to the cluster role system:node:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node

Apply the resource configuration file

/opt/kubernetes/server/bin/conf


[root@hdss7-21 conf]# kubectl create -f k8s-node.yaml

clusterrolebinding.rbac.authorization.k8s.io/k8s-node created

Check

/opt/kubernetes/server/bin/conf


[root@hdss7-21 conf]# kubectl get clusterrolebinding k8s-node
NAME           AGE
k8s-node       3m
[root@hdss7-21 conf]# kubectl get clusterrolebinding k8s-node -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: "2022-07-16T20:40:07Z"
  name: k8s-node
  resourceVersion: "23196"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/k8s-node
  uid: 4b7a6dcd-6941-4122-9ac8-367d66403f46
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node

[root@hdss7-22 conf]# scp hdss7-21:/opt/kubernetes/server/bin/conf/kubelet.kubeconfig .
root@hdss7-21's password: 
kubelet.kubeconfig                               100% 6195     4.7MB/s   00:00  
[root@hdss7-22 conf]# ls
audit.yaml  kubelet.kubeconfig
[root@hdss7-22 conf]# 

10.1.3 Prepare the pause base image

[root@hdss7-200 ~]# docker pull kubernetes/pause:latest
docker tag f9d5de079539 harbor.od.com/public/pause:latest
[root@hdss7-200 ~]# docker push harbor.od.com/public/pause:latest
Edit the kubelet startup script
[root@hdss7-21 ~]# cat /opt/kubernetes/server/bin/kubelet.sh
#!/bin/sh
./kubelet \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 192.168.0.2 \
  --cluster-domain cluster.local \
  --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice \
  --fail-swap-on="false" \
  --client-ca-file ./cert/ca.pem \
  --tls-cert-file ./cert/kubelet.pem \
  --tls-private-key-file ./cert/kubelet-key.pem \
  --hostname-override hdss7-21.host.com \
  --image-gc-high-threshold 20 \
  --image-gc-low-threshold 10 \
  --kubeconfig ./conf/kubelet.kubeconfig \
  --log-dir /data/logs/kubernetes/kube-kubelet \
  --pod-infra-container-image harbor.od.com/public/pause:latest \
  --root-dir /data/kubelet

Make the script executable and create the directories
[root@hdss7-22 ~]# chmod +x /opt/kubernetes/server/bin/kubelet.sh
[root@hdss7-22 ~]# mkdir -p /data/logs/kubernetes/kube-kubelet  /data/kubelet

Create the supervisor config file
vim /etc/supervisord.d/kube-kubelet.ini
[root@hdss7-21 ~]# cat  /etc/supervisord.d/kube-kubelet.ini
[program:kube-kubelet-7-21]
command=/opt/kubernetes/server/bin/kubelet.sh                     ; the program (relative uses PATH, can take args)
numprocs=1                                                        ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                              ; directory to cwd to before exec (def no cwd)
autostart=true                                                    ; start at supervisord start (default: true)
autorestart=true                                                  ; restart at unexpected quit (default: true)
startsecs=22                  									  ; number of secs prog must stay running (def. 1)
startretries=3                									  ; max # of serial start failures (default 3)
exitcodes=0,2                 									  ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT               									  ; signal used to kill process (default TERM)
stopwaitsecs=10               									  ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                         ; setuid to this UNIX account to run the program
redirect_stderr=false                                             ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log   ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                      ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                          ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                       ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                       ; emit events on stdout writes (default false)
stderr_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stderr.log   ; stderr log path, NONE for none; default AUTO
stderr_logfile_maxbytes=64MB                                      ; max # logfile bytes b4 rotation (default 50MB)
stderr_logfile_backups=4                                          ; # of stderr logfile backups (default 10)
stderr_capture_maxbytes=1MB   									  ; number of bytes in 'capturemode' (default 0)
stderr_events_enabled=false   									  ; emit events on stderr writes (default false)
[root@hdss7-21 ~]# 

Check that all components are running
supervisorctl update
supervisorctl status
[root@hdss7-22 ~]# supervisorctl status
etcd-server-7-22                 RUNNING   pid 6672, uptime 1:10:27
kube-apiserver                   RUNNING   pid 6671, uptime 1:10:27
kube-controller-manager          RUNNING   pid 6308, uptime 1:10:32
kube-kubelet-7-22                RUNNING   pid 6305, uptime 1:10:32
kube-scheduler                   RUNNING   pid 6304, uptime 1:10:32
[root@hdss7-22 ~]# 

Label the node ROLES as follows (the label value can be whatever you like)
[root@hdss7-22 ~]# kubectl label node hdss7-22.host.com node-role.kubernetes.io/master=
node/hdss7-22.host.com labeled

[root@hdss7-21 ~]# kubectl label node hdss7-21.host.com node-role.kubernetes.io/master=

Query node information
kubectl get node
[root@hdss7-22 ~]# kubectl get nodes
NAME                STATUS   ROLES    AGE   VERSION
hdss7-21.host.com   Ready    master   44m   v1.15.4
hdss7-22.host.com   Ready    master   67m   v1.15.4

11.1 Deploy kube-proxy

Cluster plan

Hostname             Role         IP
HDSS7-21.host.com    kube-proxy   10.4.7.21
HDSS7-22.host.com    kube-proxy   10.4.7.22

11.1.1 Issue the kube-proxy certificate

On the ops host HDSS7-200.host.com

Create the JSON config file that generates the certificate signing request (CSR)

/opt/certs/kube-proxy-csr.json


{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
Generate the kube-proxy certificate and private key
[root@hdss7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json | cfssl-json -bare kube-proxy-client
2022/07/18 23:04:18 [INFO] generate received request
2022/07/18 23:04:18 [INFO] received CSR
2022/07/18 23:04:18 [INFO] generating key: rsa-2048
2022/07/18 23:04:18 [INFO] encoded CSR
2022/07/18 23:04:18 [INFO] signed certificate with serial number 595636371949390868652510600721511823618578526619
2022/07/18 23:04:18 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

[root@hdss7-200 certs]# ls -al |grep kube-proxy
-rw-r--r-- 1 root root 1005 Jul 18 23:04 kube-proxy-client.csr
-rw------- 1 root root 1675 Jul 18 23:04 kube-proxy-client-key.pem
-rw-r--r-- 1 root root 1375 Jul 18 23:04 kube-proxy-client.pem
-rw-r--r-- 1 root root  267 Jul 18 23:02 kube-proxy-csr.json
[root@hdss7-200 certs]#

11.1.2 Create the config

set-cluster

cd /opt/kubernetes/server/bin/conf/


[root@hdss7-21 conf]# kubectl config set-cluster myk8s \
>   --certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
>   --embed-certs=true \
>   --server=https://10.4.7.10:7443 \
>   --kubeconfig=kube-proxy.kubeconfig

set-credentials


[root@hdss7-21 conf]# kubectl config set-credentials kube-proxy \
  --client-certificate=/opt/kubernetes/server/bin/cert/kube-proxy-client.pem \
  --client-key=/opt/kubernetes/server/bin/cert/kube-proxy-client-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

User "kube-proxy" set.

set-context


[root@hdss7-21 conf]# kubectl config set-context myk8s-context \
  --cluster=myk8s \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

Context "myk8s-context" created.

use-context


[root@hdss7-21 conf]# kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig

Switched to context "myk8s-context".

Copy the generated config to the other host

[root@hdss7-22 ~]# scp hdss7-21:/opt/kubernetes/server/bin/conf/kube-proxy.kubeconfig .

11.1.3 Create the ipvs script and load the kernel modules

[root@hdss7-22 ~]# cat ipvs.sh 
#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir|grep -o "^[^.]*")
do
   /sbin/modinfo -F filename $i &>/dev/null
   if [ $? -eq 0 ];then
      /sbin/modprobe $i
   fi
done

[root@hdss7-22 ~]# chmod +x ipvs.sh 
[root@hdss7-22 ~]# bash ipvs.sh 
[root@hdss7-22 ~]# lsmod |grep ip_vs
ip_vs_wrr              12697  0 
ip_vs_wlc              12519  0 
ip_vs_sh               12688  0 
ip_vs_sed              12519  0 
ip_vs_rr               12600  0 
ip_vs_pe_sip           12740  0 
nf_conntrack_sip       33860  1 ip_vs_pe_sip
ip_vs_nq               12516  0 
ip_vs_lc               12516  0 
ip_vs_lblcr            12922  0 
ip_vs_lblc             12819  0 
ip_vs_ftp              13079  0 
ip_vs_dh               12688  0 
ip_vs                 145497  24 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_pe_sip,ip_vs_lblcr,ip_vs_lblc
nf_nat                 26787  3 ip_vs_ftp,nf_nat_ipv4,nf_nat_masquerade_ipv4
nf_conntrack          133095  8 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_sip,nf_conntrack_ipv4
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
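The grep -o "^[^.]*" in ipvs.sh is what turns a module file name into a modprobe-able module name: it keeps everything before the first dot. A standalone demo on sample file names (hypothetical names of the kind found under the ipvs kernel directory):

```shell
#!/bin/sh
# Extract module names from file names the way ipvs.sh does:
# grep -o "^[^.]*" keeps the longest leading run of non-dot characters.
printf '%s\n' ip_vs_rr.ko.xz ip_vs_nq.ko.xz ip_vs.ko.xz | grep -o "^[^.]*"
```

This prints ip_vs_rr, ip_vs_nq, and ip_vs, one per line, which is exactly what modprobe expects.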

11.1.4 Create the kube-proxy startup script

/opt/kubernetes/server/bin/kube-proxy.sh

[root@hdss7-22 bin]# cat kube-proxy.sh 
#!/bin/sh
./kube-proxy \
  --cluster-cidr 172.7.0.0/16 \
  --hostname-override hdss7-22.host.com \
  --proxy-mode=ipvs \
  --ipvs-scheduler=nq \
  --kubeconfig ./conf/kube-proxy.kubeconfig

Note: the following config can be used instead, but the iptables proxy mode only supports rr (round-robin) scheduling and is not recommended


#!/bin/sh
./kube-proxy \
  --cluster-cidr 172.7.0.0/16 \
  --hostname-override 10.4.7.21 \
  --proxy-mode=iptables \
  --ipvs-scheduler=rr \
  --kubeconfig ./conf/kube-proxy.kubeconfig

Create the log directory

[root@hdss7-21 bin]#  mkdir -p /data/logs/kubernetes/kube-proxy

11.1.5 Create the supervisor config

vim /etc/supervisord.d/kube-proxy.ini


[root@hdss7-21 bin]# cat /etc/supervisord.d/kube-proxy.ini 
[program:kube-proxy-7-21]
command=/opt/kubernetes/server/bin/kube-proxy.sh                     ; the program (relative uses PATH, can take args)
numprocs=1                                                           ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                                 ; directory to cwd to before exec (def no cwd)
autostart=true                                                       ; start at supervisord start (default: true)
autorestart=true                                                     ; restart at unexpected quit (default: true)
startsecs=22                                                         ; number of secs prog must stay running (def. 1)
startretries=3                                                       ; max # of serial start failures (default 3)
exitcodes=0,2                                                        ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                      ; signal used to kill process (default TERM)
stopwaitsecs=10                                                      ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                            ; setuid to this UNIX account to run the program
redirect_stderr=false                                                ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log     ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                         ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                             ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                          ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                          ; emit events on stdout writes (default false)
stderr_logfile=/data/logs/kubernetes/kube-proxy/proxy.stderr.log     ; stderr log path, NONE for none; default AUTO
stderr_logfile_maxbytes=64MB                                         ; max # logfile bytes b4 rotation (default 50MB)
stderr_logfile_backups=4                                             ; # of stderr logfile backups (default 10)
stderr_capture_maxbytes=1MB                                          ; number of bytes in 'capturemode' (default 0)
stderr_events_enabled=false                                          ; emit events on stderr writes (default false)

supervisorctl update

supervisorctl status

[root@hdss7-22 bin]# supervisorctl  status
etcd-server-7-22                 RUNNING   pid 6653, uptime 8:18:32
kube-apiserver                   RUNNING   pid 6506, uptime 8:18:33
kube-controller-manager          RUNNING   pid 6288, uptime 8:18:38
kube-kubelet-7-22                RUNNING   pid 6274, uptime 8:18:38
kube-proxy-7-22                  RUNNING   pid 123157, uptime 0:15:59
kube-scheduler                   RUNNING   pid 6273, uptime 8:18:38

Install ipvsadm to view the scheduling state

yum install -y ipvsadm

[root@hdss7-21 bin]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.1:443 nq
  -> 10.4.7.21:6443               Masq    1      0          0   
  -> 10.4.7.22:6443               Masq    1      0          0   
[root@hdss7-21 bin]# 

11.1.6 Verify the cluster deployment

Create the resource manifest on hdss7-21


[root@hdss7-21 ~]# cat nginx-ds.yaml 
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: harbor.od.com/public/nginx:1.7.9
        ports:
        - containerPort: 80
[root@hdss7-21 ~]# 

Apply the resource configuration and check

[root@hdss7-21 ~]# kubectl create -f nginx-ds.yaml
service/nginx-ds created
daemonset.extensions/nginx-ds created
[root@hdss7-21 ~]#  kubectl get pods
NAME             READY   STATUS              RESTARTS   AGE
nginx-ds-qbrdf   0/1     ContainerCreating   0          15s
nginx-ds-s28tt   0/1     ContainerCreating   0          15s
[root@hdss7-21 ~]# 

[root@hdss7-21 ~]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE     IP           NODE                NOMINATED NODE   READINESS GATES
nginx-ds-qbrdf   1/1     Running   0          9m22s   172.7.21.2   hdss7-21.host.com   <none>           <none>
nginx-ds-s28tt   1/1     Running   0          9m22s   172.7.22.2   hdss7-22.host.com   <none>           <none>
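The pod IPs above follow the addressing convention used throughout this tutorial: each node's docker bridge sits at 172.7.&lt;host-suffix&gt;.1/24 inside the cluster CIDR 172.7.0.0/16, so a pod's IP immediately reveals which node runs it. A sketch of that mapping:

```shell
#!/bin/sh
# Map each node's host suffix to its pod subnet, per this tutorial's convention.
for suffix in 21 22; do
    echo "hdss7-$suffix.host.com -> pod subnet 172.7.$suffix.0/24"
done
```

That is why 172.7.21.2 above must be on hdss7-21.host.com and 172.7.22.2 on hdss7-22.host.com, matching the NODE column.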

[root@hdss7-21 ~]# curl 172.7.21.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

[root@hdss7-21 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok             
controller-manager   Healthy   ok             
etcd-0               Healthy   {"health": "true"}   
etcd-2               Healthy   {"health": "true"}   
etcd-1               Healthy   {"health": "true"}   
[root@hdss7-21 ~]# kubectl get node
NAME                STATUS   ROLES    AGE   VERSION
hdss7-21.host.com   Ready    master   14d   v1.15.4
hdss7-22.host.com   Ready    master   14d   v1.15.4
[root@hdss7-21 ~]# kubectl get pod
NAME             READY   STATUS    RESTARTS   AGE
nginx-ds-qbrdf   1/1     Running   0          12m
nginx-ds-s28tt   1/1     Running   0          12m
[root@hdss7-21 ~]# 


11.1.7 Using the certificate tool: dump a certificate as JSON and check its validity period

[root@hdss7-200 certs]# cfssl-certinfo -cert apiserver.pem 
{
  "subject": {
    "common_name": "apiserver",
    "country": "CN",
    "organization": "od",
    "organizational_unit": "ops",
    "locality": "beijing",
    "province": "beijing",
    "names": [
      "CN",
      "beijing",
      "beijing",
      "od",
      "ops",
      "apiserver"
    ]
  },
  "issuer": {
    "common_name": "OldboyEdu",
    "country": "CN",
    "organization": "od",
    "organizational_unit": "ops",
    "locality": "beijing",
    "province": "beijing",
    "names": [
      "CN",
      "beijing",
      "beijing",
      "od",
      "ops",
      "OldboyEdu"
    ]
  },
  "serial_number": "267701268164562403695006954689823669528694523279",
  "sans": [
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local",
    "127.0.0.1",
    "192.168.0.1",
    "10.4.7.10",
    "10.4.7.21",
    "10.4.7.22",
    "10.4.7.23"
  ],
  "not_before": "2022-07-15T18:34:00Z",
  "not_after": "2042-07-10T18:34:00Z",
  "sigalg": "SHA256WithRSA",
  "authority_key_id": "B9:48:CA:72:D8:B6:9A:1A:FF:36:4B:ED:7:4F:D3:FF:DF:AA:43:AA",
  "subject_key_id": "2A:A7:56:EA:CE:FB:F:4F:35:DF:F5:C1:99:8E:A:D1:B4:D7:28:FA",
  "pem": "-----BEGIN CERTIFICATE-----\nMIIEazCCA1OgAwIBAgIULuQj2Rao0XS76lJI4oeO2giKhY8wDQYJKoZIhvcNAQEL\nBQAwYDELMAkGA1UEBhMCQ04xEDAOBgNVBAgTB2JlaWppbmcxEDAOBgNVBAcTB2Jl\naWppbmcxCzAJBgNVBAoTAm9kMQwwCgYDVQQLEwNvcHMxEjAQBgNVBAMTCU9sZGJv\neUVkdTAeFw0yMjA3MTUxODM0MDBaFw00MjA3MTAxODM0MDBaMGAxCzAJBgNVBAYT\nAkNOMRAwDgYDVQQIEwdiZWlqaW5nMRAwDgYDVQQHEwdiZWlqaW5nMQswCQYDVQQK\nEwJvZDEMMAoGA1UECxMDb3BzMRIwEAYDVQQDEwlhcGlzZXJ2ZXIwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCsZgOO3aqGjp2VXH6EqNtbnNS/xbYbAcmb\nqjjLwlJUkghZXP2aSxSibgb6xk1dxQ8a7pvgz13Gy0KR0bJVZuN/LjHVw0JtT9rK\n/Ebas5BYYy1urQyWI4Dd6+ptgS/JR+d1fQu642emYrO3Gm5zaUtXMlAbrT0kQ73y\n2CE4miubT7H0qdIBZAybxY552p1V8DxiY+2yF/cKMrUOKSRowuJ+eACyudvgg+IK\nrcKSyAyy1MgE5wUbs9ssQYZHne82yUfQKODWXX7Z4NYtA/uze08/pNxN1ReHnstG\n3WkQGbI2Z6rxxPEfm7hFNvE1DYaaLdi8/3ETIJ8+1+pTe4jrL8WpAgMBAAGjggEb\nMIIBFzAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAwwCgYIKwYBBQUHAwEwDAYDVR0T\nAQH/BAIwADAdBgNVHQ4EFgQUKqdW6s77D0813/XBmY4K0bTXKPowHwYDVR0jBBgw\nFoAUuUjKcti2mhr/NkvtB0/T/9+qQ6owgaEGA1UdEQSBmTCBloISa3ViZXJuZXRl\ncy5kZWZhdWx0ghZrdWJlcm5ldGVzLmRlZmF1bHQuc3Zjgh5rdWJlcm5ldGVzLmRl\nZmF1bHQuc3ZjLmNsdXN0ZXKCJGt1YmVybmV0ZXMuZGVmYXVsdC5zdmMuY2x1c3Rl\nci5sb2NhbIcEfwAAAYcEwKgAAYcECgQHCocECgQHFYcECgQHFocECgQHFzANBgkq\nhkiG9w0BAQsFAAOCAQEAhLv4UQpQf0jlMTGQV+eH4k8KMmOXapL461DU9DI4tLn4\n7SxHTagUvHUSpoXw1fXQMbylCQiOML/YKpQX+AjKM8sPaUuJD7YqijZMLmUYUOo5\nfZPywG9ll5OyvCdfdcHcu2tCE9G8MoMCgE3thONIdR9baf09+9bFf+IYBXN0O+QH\nGsRbgpjNWcxOXjakNUyLWt0TbvwHtl6mpygpZhPbRZZQ+XPvJSW+yA8mjzxxPAiL\n1Ba8s0cQi51QDGQeo16tCUAbHILTwlLE98Ce1HXAMA8lIjzm2oEaFzXA7km+bzBR\n52GJOVMM9/UlcxGPcKrKmiClddU0jO7ar9jRwyvlhQ==\n-----END CERTIFICATE-----\n"
}

Note: issued certificates are valid for one year by default, so be sure to check the CA certificate's expiry as well
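The `not_after` field shown in the JSON can also be checked from the command line: openssl prints the expiry date directly. A sketch against a throwaway self-signed certificate (assumes openssl is installed; in practice point the `x509` command at your real ca.pem or apiserver.pem):

```shell
# Generate a throwaway self-signed cert purely for demonstration --
# in practice, run the x509 command below against ca.pem / apiserver.pem.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=demo" -keyout /tmp/demo-key.pem -out /tmp/demo.pem 2>/dev/null

# Print only the expiry date (a line of the form "notAfter=...")
openssl x509 -in /tmp/demo.pem -noout -enddate
```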

[root@hdss7-200 certs]# cfssl-certinfo -domain www.baidu.com
{
  "subject": {
    "common_name": "baidu.com",
    "country": "CN",
    "organization": "Beijing Baidu Netcom Science Technology Co., Ltd",
    "organizational_unit": "service operation department",
    "locality": "beijing",
    "province": "beijing",
    "names": [
      "CN",
      "beijing",
      "beijing",
      "service operation department",
      "Beijing Baidu Netcom Science Technology Co., Ltd",
      "baidu.com"
    ]
  },
  "issuer": {
    "common_name": "GlobalSign RSA OV SSL CA 2018",
    "country": "BE",
    "organization": "GlobalSign nv-sa",
    "names": [
      "BE",
      "GlobalSign nv-sa",
      "GlobalSign RSA OV SSL CA 2018"
    ]
  },
  "serial_number": "21073761258320394826617220968",
  "sans": [
    "baidu.com",
    "baifubao.com",
    "www.baidu.cn",
    "www.baidu.com.cn",
    "mct.y.nuomi.com",
    "apollo.auto",
    "dwz.cn",
    "*.baidu.com",
    "*.baifubao.com",
    "*.baidustatic.com",
    "*.bdstatic.com",
    "*.bdimg.com",
    "*.hao123.com",
    "*.nuomi.com",
    "*.chuanke.com",
    "*.trustgo.com",
    "*.bce.baidu.com",
    "*.eyun.baidu.com",
    "*.map.baidu.com",
    "*.mbd.baidu.com",
    "*.fanyi.baidu.com",
    "*.baidubce.com",
    "*.mipcdn.com",
    "*.news.baidu.com",
    "*.baidupcs.com",
    "*.aipage.com",
    "*.aipage.cn",
    "*.bcehost.com",
    "*.safe.baidu.com",
    "*.im.baidu.com",
    "*.baiducontent.com",
    "*.dlnel.com",
    "*.dlnel.org",
    "*.dueros.baidu.com",
    "*.su.baidu.com",
    "*.91.com",
    "*.hao123.baidu.com",
    "*.apollo.auto",
    "*.xueshu.baidu.com",
    "*.bj.baidubce.com",
    "*.gz.baidubce.com",
    "*.smartapps.cn",
    "*.bdtjrcv.com",
    "*.hao222.com",
    "*.haokan.com",
    "*.pae.baidu.com",
    "*.vd.bdstatic.com",
    "*.cloud.baidu.com",
    "click.hm.baidu.com",
    "log.hm.baidu.com",
    "cm.pos.baidu.com",
    "wn.pos.baidu.com",
    "update.pan.baidu.com"
  ],
  "not_before": "2022-07-05T05:16:02Z",
  "not_after": "2023-08-06T05:16:01Z",
  "sigalg": "SHA256WithRSA",
  "authority_key_id": "F8:EF:7F:F2:CD:78:67:A8:DE:6F:8F:24:8D:88:F1:87:3:2:B3:EB",
  "subject_key_id": "3B:70:2D:3D:E8:19:5:0:47:12:2:EF:81:18:D3:41:8:E5:16:52",
  "pem": "-----BEGIN CERTIFICATE-----\nMIIKEjCCCPqgAwIBAgIMRBfOhu+C7GkhzG9oMA0GCSqGSIb3DQEBCwUAMFAxCzAJ\nBgNVBAYTAkJFMRkwFwYDVQQKExBHbG9iYWxTaWduIG52LXNhMSYwJAYDVQQDEx1H\nbG9iYWxTaWduIFJTQSBPViBTU0wgQ0EgMjAxODAeFw0yMjA3MDUwNTE2MDJaFw0y\nMzA4MDYwNTE2MDFaMIGnMQswCQYDVQQGEwJDTjEQMA4GA1UECBMHYmVpamluZzEQ\nMA4GA1UEBxMHYmVpamluZzElMCMGA1UECxMcc2VydmljZSBvcGVyYXRpb24gZGVw\nYXJ0bWVudDE5MDcGA1UEChMwQmVpamluZyBCYWlkdSBOZXRjb20gU2NpZW5jZSBU\nZWNobm9sb2d5IENvLiwgTHRkMRIwEAYDVQQDEwliYWlkdS5jb20wggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCqL8xBjSWug+n0J8QAszlvDpgqVX0H5YBJ\ngvrT04WYtd97b7sC3e145AwHK54ehkv2aoZY11dvIVkR2G+WbtLeNij2tOPOlTIp\nAMFljmmwAP5SN/SIP4ttD7vw7MXAMe+ttQwGZq2+3EMTxGawXc9WU+LRloIcBrub\nX+1gjdLt89JQ7rvNsjaXyM570ku3XLSIyjdui875lv209Ue1IHe7/KidgbJs+McJ\nat0iboM/p1Pf8dovKWsiw+kdZejFoLoTThY/A5PwpVmKGoDoJ31JI9/R+UuXtwHE\nGfXxxf+RM9ChdMbu1M/2OAztvV6qRPuI93uZcHY0VX5V0g+ev5STAgMBAAGjggaS\nMIIGjjAOBgNVHQ8BAf8EBAMCBaAwgY4GCCsGAQUFBwEBBIGBMH8wRAYIKwYBBQUH\nMAKGOGh0dHA6Ly9zZWN1cmUuZ2xvYmFsc2lnbi5jb20vY2FjZXJ0L2dzcnNhb3Zz\nc2xjYTIwMTguY3J0MDcGCCsGAQUFBzABhitodHRwOi8vb2NzcC5nbG9iYWxzaWdu\nLmNvbS9nc3JzYW92c3NsY2EyMDE4MFYGA1UdIARPME0wQQYJKwYBBAGgMgEUMDQw\nMgYIKwYBBQUHAgEWJmh0dHBzOi8vd3d3Lmdsb2JhbHNpZ24uY29tL3JlcG9zaXRv\ncnkvMAgGBmeBDAECAjAJBgNVHRMEAjAAMD8GA1UdHwQ4MDYwNKAyoDCGLmh0dHA6\nLy9jcmwuZ2xvYmFsc2lnbi5jb20vZ3Nyc2FvdnNzbGNhMjAxOC5jcmwwggNhBgNV\nHREEggNYMIIDVIIJYmFpZHUuY29tggxiYWlmdWJhby5jb22CDHd3dy5iYWlkdS5j\nboIQd3d3LmJhaWR1LmNvbS5jboIPbWN0LnkubnVvbWkuY29tggthcG9sbG8uYXV0\nb4IGZHd6LmNuggsqLmJhaWR1LmNvbYIOKi5iYWlmdWJhby5jb22CESouYmFpZHVz\ndGF0aWMuY29tgg4qLmJkc3RhdGljLmNvbYILKi5iZGltZy5jb22CDCouaGFvMTIz\nLmNvbYILKi5udW9taS5jb22CDSouY2h1YW5rZS5jb22CDSoudHJ1c3Rnby5jb22C\nDyouYmNlLmJhaWR1LmNvbYIQKi5leXVuLmJhaWR1LmNvbYIPKi5tYXAuYmFpZHUu\nY29tgg8qLm1iZC5iYWlkdS5jb22CESouZmFueWkuYmFpZHUuY29tgg4qLmJhaWR1\nYmNlLmNvbYIMKi5taXBjZG4uY29tghAqLm5ld3MuYmFpZHUuY29tgg4qLmJhaWR1\ncGNzLmNvbYIMKi5haXBhZ2UuY29tggsqLmFpcGFnZS5jboINKi5iY2Vob3N0LmNv\nbYIQKi5zYWZlLmJhaWR1LmNvbYIOKi5pbS5iYWlkdS5jb22CEiouYmFpZHVjb250\nZW50LmNvbYILKi5kbG5lbC5jb22CCyouZGxuZWwub3JnghIqLmR1ZXJvcy5iYWlk\ndS5jb22CDiouc3UuYmFpZHUuY29tgggqLjkxLmNvbYISKi5oYW8xMjMuYmFpZHUu\nY29tgg0qLmFwb2xsby5hdXRvghIqLnh1ZXNodS5iYWlkdS5jb22CESouYmouYmFp\nZHViY2UuY29tghEqLmd6LmJhaWR1YmNlLmNvbYIOKi5zbWFydGFwcHMuY26CDSou\nYmR0anJjdi5jb22CDCouaGFvMjIyLmNvbYIMKi5oYW9rYW4uY29tgg8qLnBhZS5i\nYWlkdS5jb22CESoudmQuYmRzdGF0aWMuY29tghEqLmNsb3VkLmJhaWR1LmNvbYIS\nY2xpY2suaG0uYmFpZHUuY29tghBsb2cuaG0uYmFpZHUuY29tghBjbS5wb3MuYmFp\nZHUuY29tghB3bi5wb3MuYmFpZHUuY29tghR1cGRhdGUucGFuLmJhaWR1LmNvbTAd\nBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwHwYDVR0jBBgwFoAU+O9/8s14\nZ6jeb48kjYjxhwMCs+swHQYDVR0OBBYEFDtwLT3oGQUARxIC74EY00EI5RZSMIIB\ngQYKKwYBBAHWeQIEAgSCAXEEggFtAWsAdwDoPtDaPvUGNTLnVyi8iWvJA9PL0RFr\n7Otp4Xd9bQa9bgAAAYHMyXLxAAAEAwBIMEYCIQCAJkJ9PZHF14jjp5pQmOqVHgWc\nKJQe/CzCYYtU1jRusgIhAIKcdmnRNSZFGV0wNG9kEneZq3NtcvkKFiiNc4OV9HXe\nAHcAb1N2rDHwMRnYmQCkURX/dxUcEdkCwQApBo2yCJo32RMAAAGBzMlyzgAABAMA\nSDBGAiEA5hWVNwpM5TgNnimQHL0PCjW/sMVCuDDcCjcIn8Q+yvoCIQDuUewoBK8z\niBT+9+oWB4PZNcn845lM6CmywA7DkbPzgQB3AFWB1MIWkDYBSuoLm1c8U/DA5Dh4\ncCUIFy+jqh0HE9MMAAABgczJcvwAAAQDAEgwRgIhAJ8GGwADLa35t5AptPcYTl8G\nw8Z/ApE7WLF/aa008TuvAiEAnn86opE4Rx90KXEWoY1UuuK3A3c0DMhNkLT7SWM+\nuRIwDQYJKoZIhvcNAQELBQADggEBAGMhByNHBuuzfHds37xVErnxXmoEYBa+0AsY\nnJQMqIIIJQ0m+93L/Iwn2Qz6SrYxtmfwJiwNlpY5ZT/Zoe7enBBNVOHI1qkOd9sA\n4jfjP7ScMU+sdNMiElM20O8YBy2O0OaRsmxKXjlTFFhO0VAEyYN+DXsVlocR111K\nF6yqn4TjqCSd1hd3JoyfensY2jkvd/crxyO4l2/D0XJMfvzGDcxzOBmB++fBeui5\nHToF3DYEm/Hw4aZHoDBPVZBs2s+esnYSEaFctmGNFaRoZZpXL3puox/1tJJaPN9x\nCs1X1NAVNn661QMlJ0W0YM0uAsEPCudBb1hpIJ6tR1IatebljR0=\n-----END CERTIFICATE-----\n"
}
[root@hdss7-200 certs]# 
