If GitHub is unreachable, you can change its DNS resolution to 203.208.39.99 or 52.74.223.119

I. Understanding K8s Concepts

1. Basic Concepts

  • Pod
    • A pod is the smallest logical unit that can be run in K8s
    • A pod can run multiple containers, which share the UTS + NET + IPC namespaces
    • Running multiple containers in one pod is also called the sidecar pattern (SideCar)
  • Pod controllers
    • A pod controller is a template for launching pods, used to guarantee that pods started in k8s always run as people expect (replica count, lifecycle, health checks)
    • k8s provides many controllers:
      • deployment
      • daemonset
      • replicaset
      • statefulset
      • job
      • cronjob
  • Name:
    • Internally, k8s uses "resources" to define every logical concept (feature), so every resource should have a name of its own
    • A resource carries configuration such as apiVersion, kind, metadata, spec, and status
    • The "name" is usually defined in the resource's "metadata"
  • Namespace:
    • A way to isolate the various resources inside k8s
    • A namespace can be understood as a virtual cluster group inside k8s
    • Resource names may repeat across different namespaces, but must be unique within the same namespace
  • Label:
    • Labels are k8s's signature management mechanism, making it easy to group and manage resource objects
    • One label can map to many resources, and one resource can carry many labels
    • Giving a resource multiple labels enables management along different dimensions
    • A label is a key=value pair (the value must be no longer than 63 characters)
    • A similar mechanism is the "annotation"
  • Label selectors:
    • Once resources are labeled, a label selector can filter resources by their labels
    • There are currently two kinds of selectors: equality-based (equal, not equal) and set-based (in, not in, exists)
    • Many resources support embedded label-selector fields:
      • matchLabels
      • matchExpressions
  • Service:
    • In k8s every pod is assigned its own IP, but that address disappears when the pod is destroyed
    • The Service is the core concept that solves this problem
    • A Service can be seen as the external access point for a group of pods providing the same service
    • Which pods a Service covers is defined through label selectors (see the sketch after this list)
  • Ingress:
    • Ingress is the layer-7 (OSI reference model) component of a k8s cluster, exposing services to the outside
    • A Service can only schedule traffic at layer 4, expressed as ip + port
    • Ingress can schedule business traffic for different domains and different URL paths
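To make labels, selectors, and Services concrete, here is a minimal sketch (hypothetical names; assumes a working cluster and kubectl): a Deployment whose pods carry the label app=web, and a Service that selects exactly those pods:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # hypothetical example name
spec:
  replicas: 2
  selector:
    matchLabels:            # embedded equality-based label selector
      app: web
  template:
    metadata:
      labels:
        app: web            # the label the Service will match on
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                # the Service fronts every pod carrying this label
  ports:
  - port: 80
    targetPort: 80
EOF

# equality-based and set-based selector queries against the same label
kubectl get pods -l app=web
kubectl get pods -l 'app in (web)'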

2. Core Components

  • Configuration store: the etcd cluster
  • Master nodes:
    • kube-apiserver
    • kube-controller-manager
    • kube-scheduler
  • Worker (node) nodes:
    • kubelet
    • kube-proxy
  • CLI client:
    • kubectl
  • Core add-ons:
    • CNI network plugin: flannel / calico
    • Service discovery: coredns
    • Service exposure: traefik
    • GUI management: Dashboard
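Once a cluster is up, the health of these components can be checked from the CLI (a hedged example; `kubectl get componentstatuses` is deprecated in newer releases but still works on the v1.19 line used below):

# master components (scheduler, controller-manager, etcd) report their health
kubectl get componentstatuses
# worker components (kubelet, kube-proxy) show up as registered, Ready nodes
kubectl get nodes -o wide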

II. Installing and Deploying K8s

1. Common Installation and Deployment Methods

  • Minikube: single-node miniature k8s (reference: https://kubernetes.io/docs/tutorials/hello-minikube/)
  • Binary installation (recommended)
  • Installation with kubeadm (simple to deploy)

2. Deployment Planning and Preparation

  • 5 Linux machines running CentOS, on the 10.4.7.0/24 network
  • Adjust the OS: disable the firewall, the swap partition, and SELinux
  • Install bind and run a self-hosted DNS
  • Prepare a self-signed certificate environment
  • Install docker and deploy a harbor registry

When experimenting in VMware, raise the interface-metric priority of the VMnet8 adapter so that DNS lookups later prefer that adapter.

2.1 Prepare the Machines

Set the hostnames

[root@localhost ~]# hostnamectl set-hostname master7-11
[root@localhost ~]# bash
[root@master7-11 ~]# hostname -i
fe80::ec46:118b:8414:9e71%ens33 10.4.7.11 192.168.122.1
# --------------------------------------
[root@localhost ~]# hostnamectl set-hostname master7-12
[root@localhost ~]# bash
[root@master7-12 ~]# hostname -i
fe80::7749:799e:fca2:8347%ens33 10.4.7.12 192.168.122.1
# --------------------------------------
[root@localhost ~]# hostnamectl set-hostname node7-21
[root@localhost ~]# bash
[root@node7-21 ~]# hostname -i
fe80::364b:470f:4693:f5ba%ens33 10.4.7.21 192.168.122.1
# --------------------------------------
[root@localhost ~]# hostnamectl set-hostname node7-22
[root@localhost ~]# bash
[root@node7-22 ~]# hostname -i
fe80::7246:bbdf:de67:3614%ens33 10.4.7.22 192.168.122.1
# --------------------------------------
[root@harbor7-200 ~]# hostnamectl set-hostname src7-200
[root@harbor7-200 ~]# bash
[root@harbor7-200 ~]# hostname -i
fe80::f695:abb2:67d7:98a%ens33 10.4.7.200 172.17.0.1

2.2 Adjust the Operating System

Install the epel-release repository

[root@localhost ~]# yum -y install epel-release 

Disable SELinux and firewalld

[root@localhost ~]# systemctl stop firewalld

[root@localhost ~]# systemctl disable firewalld

[root@localhost ~]# setenforce 0

Install the required tools

[root@localhost ~]# yum -y install wget net-tools telnet tree nmap sysstat lrzsz dos2unix bind-utils

2.3 DNS Initialization

Install the bind service, on the 10.4.7.11 machine

[root@hdss7-11 ~]# yum -y install bind

[root@hdss7-11 ~]# named -v
BIND 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.5 (Extended Support Version) <id:7107deb>

Modify the DNS configuration

[root@master7-11 ~]# vim /etc/named.conf
options {
        // listen on the local address
        listen-on port 53 { 10.4.7.11; };
//      listen-on-v6 port 53 { ::1; };   IPv6 resolution is not needed, so comment this out
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        recursing-file  "/var/named/data/named.recursing";
        secroots-file   "/var/named/data/named.secroots";
        allow-query     { any; };
        // add "forwarders { 10.4.7.2; };" to set the upstream DNS, used when this server cannot resolve a name.
        // Here it is the gateway address, i.e. the host machine's own DNS.
        forwarders      { 10.4.7.2; };
        // recursion must be set to yes, meaning queries are resolved recursively
        recursion yes;
        // the next two can be set to no to save resources (yes also works; no change required)
        dnssec-enable no;
        dnssec-validation no;

Check the configuration for errors

# no output means the configuration is correct
[root@master7-11 ~]# named-checkconf 

Modify the zone configuration file

# configure the host domain and the business domain (fake domains, usable only on the LAN)
[root@master7-11 ~]# vim /etc/named.rfc1912.zones
// the host domain (both domains are self-defined)
zone "host.com" IN {
        type master;
        file "host.com.zone";
        allow-update { 10.4.7.11; };
};
// the business domain
zone "prod.com" IN {
        type master;
        file "prod.com.zone";
        allow-update { 10.4.7.11; };
};

Create the zone data files

[root@master7-11 ~]# cp -p /var/named/named.localhost /var/named/host.com.zone
[root@master7-11 ~]# cp -p /var/named/named.localhost /var/named/prod.com.zone
# host-domain data file
[root@master7-11 ~]# vim /var/named/host.com.zone
$TTL 600 ;10 minutes -- the semicolon starts a comment
@       IN SOA  dns.host.com. dnsadmin.host.com. ( ;the SOA record lives between the parentheses
                                     20210605   ; serial (time)
                                     10800      ; refresh (3 hours)
                                     900        ; retry (15 minutes)
                                     604800     ; expire (1 week)
                                     86400 )    ; minimum (1 day)
        NS      dns.host.com.        ; the NS record
$TTL 60 ;1 minute
dns             A       10.4.7.11
master7-11      A       10.4.7.11
master7-12      A       10.4.7.12
node7-21        A       10.4.7.21
node7-22        A       10.4.7.22
src7-200     A       10.4.7.200

# business-domain data file
[root@master7-11 ~]# vim /var/named/prod.com.zone
$TTL 600 ;10 minutes -- the semicolon starts a comment
@       IN SOA  dns.prod.com. dnsadmin.prod.com. ( ;the SOA record lives between the parentheses
                                     20210605   ; serial (time)
                                     10800      ; refresh (3 hours)
                                     900        ; retry (15 minutes)
                                     604800     ; expire (1 week)
                                     86400 )    ; minimum (1 day)
        NS      dns.prod.com.        ; the NS record
$TTL 60 ;1 minute
dns             A       10.4.7.11

Check the zone data files

[root@master7-11 ~]# named-checkzone prod.com /var/named/prod.com.zone
zone prod.com/IN: loaded serial 20210605
OK
[root@master7-11 ~]# named-checkzone host.com /var/named/host.com.zone 
zone host.com/IN: loaded serial 20210605
OK

Start the DNS service

[root@master7-11 ~]# systemctl start named
[root@master7-11 ~]# ss -lnpt |grep 53
LISTEN     0      10     10.4.7.11:53                       *:*                   users:(("named",pid=44103,fd=21))
LISTEN     0      5      192.168.122.1:53                       *:*                   users:(("dnsmasq",pid=1381,fd=6))
LISTEN     0      128    127.0.0.1:953                      *:*                   users:(("named",pid=44103,fd=22))
LISTEN     0      128        ::1:953                     :::*                   users:(("named",pid=44103,fd=23))

Check that names resolve correctly

# both resolve
[root@master7-11 ~]# dig -t A master7-11.host.com @10.4.7.11 +short
10.4.7.11
[root@master7-11 ~]# dig -t A master7-12.host.com @10.4.7.11 +short
10.4.7.12

Change the DNS server address to 10.4.7.11 (on every machine)

[root@master7-11 ~]# vim /etc/resolv.conf 
nameserver 10.4.7.11

Test host-domain access (both internal and external names reachable)

[root@master7-12 ~]# ping www.baidu.com
PING www.a.shifen.com (36.152.44.95) 56(84) bytes of data.
64 bytes from 36.152.44.95 (36.152.44.95): icmp_seq=1 ttl=128 time=17.0 ms

[root@master7-12 ~]# ping master7-11.host.com
PING master7-11.host.com (10.4.7.11) 56(84) bytes of data.
64 bytes from 10.4.7.11 (10.4.7.11): icmp_seq=1 ttl=64 time=0.163 ms
64 bytes from 10.4.7.11 (10.4.7.11): icmp_seq=2 ttl=64 time=1.51 ms

Add a search domain and test pinging by bare hostname

[root@master7-12 ~]# vim /etc/resolv.conf 
nameserver 10.4.7.11
search host.com       # with this, the .host.com suffix is appended automatically
# generally only used for the host domain

# pings straight through, equivalent to pinging master7-11.host.com
[root@master7-12 ~]# ping master7-11
PING master7-11.host.com (10.4.7.11) 56(84) bytes of data.
64 bytes from 10.4.7.11 (10.4.7.11): icmp_seq=1 ttl=64 time=0.179 ms
64 bytes from 10.4.7.11 (10.4.7.11): icmp_seq=2 ttl=64 time=0.426 ms

Test pings from the host machine
Install telnet on the Windows host:
"Control Panel" > "Programs" > "Programs and Features" > "Turn Windows features on or off" > check "Telnet Client" > "OK"
Change the host machine's DNS address

  • Note: in the adapter's advanced settings, untick "Automatic metric" and set the metric to 10 or higher, so that lookups prefer the VMnet8 adapter

2.4 Prepare the Certificate-Signing Environment

Deployed on the ops host, 10.4.7.200

1. Install the CFSSL Certificate-Signing Tools
  • Certificate-signing tools:
    • cfssl
    • cfssl-json
    • cfssl-certinfo
[root@src7-200 ~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/bin/cfssl
[root@src7-200 ~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/bin/cfssl-json
[root@src7-200 ~]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/bin/cfssl-certinfo

[root@src7-200 ~]# chmod +x /usr/bin/cfssl*

2. Create the JSON Config for the CA Certificate Signing Request (csr)

To sign your own certificates, you first need a CA certificate, also called the root (authority) certificate

  • Issue the root certificate: create the JSON config that generates the CA certificate signing request (csr)
# create a directory for the certificates
[root@src7-200 ~]# mkdir /opt/certs

# the JSON config file
[root@src7-200 ~]# vi /opt/certs/ca-csr.json
{
    "CN": "Prod",		# organization name; browsers use this field to check whether a site is legitimate -- usually the domain name, very important
    "hosts": [	
    ],
    "key": {			
        "algo": "rsa",		# algorithm
        "size": 2048		# key length
    },
    "names": [
        {
            "C": "CN",		# C, country
            "ST": "beijing",	# ST, state/province
            "L": "beijing",	# L, locality/city
            "O": "od",		# O, organization/company name
            "OU": "ops"		# OU, organizational unit/department
        }
    ],
    "ca": {
        "expiry": "175200h"	# expiry time; every certificate has an expiry
    }
}

Generate the self-signed CA certificate

# cfssl gencert -initca ca-csr.json generates the certificate material as one blob
# cfssl-json -bare ca turns that output into the ca certificate files
[root@src7-200 certs]# cfssl gencert -initca ca-csr.json | cfssl-json -bare ca
2021/06/06 02:20:17 [INFO] generating a new CA key and certificate from CSR
2021/06/06 02:20:17 [INFO] generate received request
2021/06/06 02:20:17 [INFO] received CSR
2021/06/06 02:20:17 [INFO] generating key: rsa-2048
2021/06/06 02:20:17 [INFO] encoded CSR
2021/06/06 02:20:17 [INFO] signed certificate with serial number 498553788627069248598419850882925409677742511292

# three files were generated: ca.csr, ca-key.pem, ca.pem
[root@src7-200 certs]# ls 
ca.csr  ca-csr.json  ca-key.pem  ca.pem
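The generated certificate can be inspected with the third tool, cfssl-certinfo, for example to confirm the subject and expiry:

[root@src7-200 certs]# cfssl-certinfo -cert ca.pem
# prints the subject (CN=Prod, O=od, OU=ops), the validity window, and the other certificate details as JSON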

2.5 Deploy the Docker Environment

Official docs: https://docs.docker.com/engine/install/centos/

# install docker on every node
[root@src7-200 certs]# curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun

[root@src7-200 certs]# mkdir -p /data/docker

[root@master7-11 ~]# vi /etc/docker/daemon.json
{
  "graph": "/data/docker",
  "storage-driver": "overlay2",
  "insecure-registries": ["registry.access.redhat.com","quay.io","harbor.prod.com"],
  "registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],
  "bip": "172.7.11.1/24",			# 定义k8s主机上k8s pod的ip地址网段
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}
# ----------------------------------
[root@master7-12 ~]# vi /etc/docker/daemon.json
{
  "graph": "/data/docker",
  "storage-driver": "overlay2",
  "insecure-registries": ["registry.access.redhat.com","quay.io","harbor.prod.com"],
  "registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],
  "bip": "172.7.12.1/24",			# 定义k8s主机上k8s pod的ip地址网段
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}
# ----------------------------------
[root@node7-21 ]# vi /etc/docker/daemon.json
{
  "graph": "/data/docker",
  "storage-driver": "overlay2",
  "insecure-registries": ["registry.access.redhat.com","quay.io","harbor.prod.com"],
  "registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],
  "bip": "172.7.21.1/24",			# 定义k8s主机上k8s pod的ip地址网段
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}
# ----------------------------------
[root@node7-22 docker]# vi /etc/docker/daemon.json
{
  "graph": "/data/docker",
  "storage-driver": "overlay2",
  "insecure-registries": ["registry.access.redhat.com","quay.io","harbor.prod.com"],
  "registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],
  "bip": "172.7.22.1/24",			# 定义k8s主机上k8s pod的ip地址网段
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}
# ----------------------------------
[root@src7-200 ]# vi /etc/docker/daemon.json
{
  "graph": "/data/docker",
  "storage-driver": "overlay2",
  "insecure-registries": ["registry.access.redhat.com","quay.io","harbor.prod.com"],
  "registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],
  "bip": "172.7.200.1/24",			# 定义k8s主机上k8s pod的ip地址网段
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}
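After writing daemon.json on each host, restart docker and confirm the settings took effect (a quick sanity check, not part of the original transcript; shown for master7-11):

[root@master7-11 ~]# systemctl restart docker
[root@master7-11 ~]# docker info | grep -E 'Storage Driver|Docker Root Dir|Cgroup Driver'
[root@master7-11 ~]# ip addr show docker0      # the bridge should now sit on 172.7.11.1/24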

2.6 Deploy the Harbor Private Image Registry

2.6.1 Download and Extract the Binary Package

Official repo: https://github.com/goharbor/harbor
(version 1.7.5 or later is recommended)

[root@src7-200 opt]# mkdir /opt/src
[root@src7-200 opt]# cd /opt/src/
# download the package
[root@src7-200 opt]# wget 'https://github.com/goharbor/harbor/releases/download/v2.2.2/harbor-offline-installer-v2.2.2.tgz'

[root@src7-200 src]# tar zxvf harbor-offline-installer-v2.2.2.tgz 

[root@src7-200 src]# ls
harbor

# rename the directory to include the version number, which makes later upgrades easier
[root@src7-200 src]# mv harbor harbor-v2.2.2
[root@src7-200 src]# ln -s /opt/src/harbor-v2.2.2/ /opt/harbor

2.6.2 Modify the harbor.yml Configuration

[root@src7-200 opt]# cd /opt/harbor/

[root@src7-200 harbor]# vim harbor.yml.tmpl 
hostname: harbor.prod.com      # change the hostname (this uses the business domain of our self-hosted DNS)
http:
  port: 1180                  # the default port is 80; change it to avoid conflicts
harbor_admin_password: Harbor12345   # default password -- be sure to change it in production
data_volume: /data/harbor            # mount the data directory under /data/harbor for easier management
     location: /data/harbor/logs     # also change the log location (under the log section) for easier inspection

[root@src7-200 harbor]# mv harbor.yml.tmpl harbor.yml     
# create the log directory
[root@src7-200 harbor]# mkdir -p /data/harbor/logs      
# install the docker-compose orchestration tool
[root@src7-200 harbor]# yum -y install docker-compose
# check the docker-compose version
[root@src7-200 harbor]# rpm -qa docker-compose
docker-compose-1.18.0-4.el7.noarch

# run harbor's install.sh script (it relies on docker-compose)
[root@src7-200 harbor]# sh /opt/harbor/install.sh

The script fails with: ERROR:root:Error: The protocol is https but attribute ssl_cert is not set

Fix: comment out the https-related lines in harbor.yml

Verify the installation

[root@src7-200 harbor]# docker-compose ps     # list the orchestrated containers; all should be Up
      Name                     Command               State                    Ports                  
-----------------------------------------------------------------------------------------------------
harbor-core         /harbor/entrypoint.sh            Up                                              
harbor-db           /docker-entrypoint.sh            Up                                              
harbor-jobservice   /harbor/entrypoint.sh            Up                                              
harbor-log          /bin/sh -c /usr/local/bin/ ...   Up      127.0.0.1:1514->10514/tcp               
harbor-portal       nginx -g daemon off;             Up                                              
nginx               nginx -g daemon off;             Up      0.0.0.0:1180->8080/tcp,:::1180->8080/tcp
redis               redis-server /etc/redis.conf     Up                                              
registry            /home/harbor/entrypoint.sh       Up                                              
registryctl         /home/harbor/start.sh            Up                                      

2.6.3 Install nginx as a Domain Proxy for the Harbor Address

# install nginx
[root@src7-200 harbor]# yum -y install nginx
# configure the proxy
[root@src7-200 harbor]# vim /etc/nginx/conf.d/harbor.prod.com.conf
server {
    listen       80;
    server_name  harbor.prod.com;

    client_max_body_size 1000m;

    location / {
        proxy_pass http://127.0.0.1:1180;
    }
}

# check the nginx configuration
[root@src7-200 harbor]# nginx -t
# start nginx
[root@src7-200 harbor]# systemctl start nginx
[root@src7-200 harbor]# systemctl enable nginx

Add a record to the zone data file on the DNS server at 10.4.7.11


$TTL 600 ;10 minutes -- the semicolon starts a comment
@       IN SOA  dns.prod.com. dnsadmin.prod.com. ( ;the SOA record lives between the parentheses
                                     2021060502 ; serial -- note: roll the serial forward (e.g. append 02); it must be bumped on every configuration change
                                     10800      ; refresh (3 hours)
                                     900        ; retry (15 minutes)
                                     604800     ; expire (1 week)
                                     86400 )    ; minimum (1 day)
        NS      dns.prod.com.        ; the NS record
$TTL 60 ;1 minute
dns             A       10.4.7.11
harbor          A       10.4.7.200 ;add this harbor record

# restart the named service
[root@master7-11 opt]# systemctl restart named
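
Before moving on, it is worth confirming that the new record resolves (a quick check; 10.4.7.200 is the expected answer per the zone file above):

[root@master7-11 opt]# dig -t A harbor.prod.com @10.4.7.11 +short
10.4.7.200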

2.6.4 Test Access from the Host Machine

Create a new project in the Harbor web UI, then test pushing an image

# first log in to the registry from a node
[root@master7-11 docker]# docker login harbor.prod.com
Username: admin    
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

# pull an nginx image
[root@master7-11 opt]# docker pull nginx:1.7.9

# list images
[root@master7-11 docker]# docker images
REPOSITORY   TAG       IMAGE ID       CREATED       SIZE
nginx        1.7.9     84581e99d807   6 years ago   91.7MB

# tag the image
[root@master7-11 docker]# docker tag nginx:1.7.9 harbor.prod.com/public/nginx:1.7.9
# push it
[root@master7-11 docker]# docker push harbor.prod.com/public/nginx:1.7.9

Log in to the web UI from the host machine and check that the image arrived (on success, the image count increases by one)

3. Master Node Services

3.1 Deploy the etcd Cluster

  • Deployed on three machines: 10.4.7.12 (leader), 10.4.7.22 (follower), 10.4.7.21 (follower)
  • Note: this document uses master7-12.host.com as the example; the other two hosts are installed the same way

3.1.1 On src7-200, Create the config File Based on the CA Root Certificate
[root@src7-200 ~]# cat /opt/certs/ca-config.json
{
    "signing": {
        "default": {
            "expiry": "175200h"
        },
        "profiles": {
            "server": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth"
                ]
            },
            "client": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"
                ]
            },
            "peer": {				
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}

3.1.2 On src7-200, Create the etcd Certificate Signing Request File

Note: the IP addresses here must be planned in advance; list every host that might run etcd, and it is wise to include one spare IP

[root@src7-200 ~]# vi /opt/certs/etcd-peer-csr.json
{
    "CN": "k8s-etcd",
    "hosts": [
        "10.4.7.11",
        "10.4.7.12",
        "10.4.7.21",
        "10.4.7.22"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}

3.1.3 Generate the etcd Cluster Communication Certificates
[root@src7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json |cfssl-json -bare etcd-peer

[root@src7-200 certs]# ls -ltr
total 36
-rw-r--r-- 1 root root  325 Jun  6 02:19 ca-csr.json
-rw-r--r-- 1 root root 1330 Jun  6 02:20 ca.pem
-rw------- 1 root root 1679 Jun  6 02:20 ca-key.pem
-rw-r--r-- 1 root root  989 Jun  6 02:20 ca.csr
-rw-r--r-- 1 root root  840 Jun  6 20:47 ca-config.json
-rw-r--r-- 1 root root  363 Jun  6 21:06 etcd-peer-csr.json
-rw-r--r-- 1 root root 1419 Jun  6 21:06 etcd-peer.pem    # these three files were generated: etcd-peer.pem, etcd-peer-key.pem, etcd-peer.csr
-rw------- 1 root root 1675 Jun  6 21:06 etcd-peer-key.pem
-rw-r--r-- 1 root root 1062 Jun  6 21:06 etcd-peer.csr

3.1.4 Download the etcd Package to 10.4.7.12 and Extract It

Official repo: https://github.com/etcd-io/etcd
Download etcd; a version no newer than 3.3.x is recommended

# the package is placed in /opt/src
[root@master7-12 src]# tar xfv etcd-v3.3.10-linux-amd64.tar.gz -C /opt/

# rename the directory to keep the version number
[root@master7-12 src]# mv /opt/etcd-v3.3.10-linux-amd64 /opt/etcd-v3.3.10

[root@master7-12 src]# ln -s /opt/etcd-v3.3.10 /opt/etcd

[root@master7-12 src]# cd /opt/etcd
[root@master7-12 etcd]# ls
Documentation  etcd  etcdctl  README-etcdctl.md  README.md  READMEv2-etcdctl.md

3.1.5 Create the etcd User on the etcd Host (10.4.7.12)
# create the etcd user
[root@master7-12 ~]# useradd -s /sbin/nologin -M etcd
# check the etcd user
[root@master7-12 ~]# id etcd
uid=1001(etcd) gid=1001(etcd) groups=1001(etcd)

3.1.6 On the etcd Host 10.4.7.12, Create the Directories and Copy the Certificates and Key from 7.200
  • The other nodes are the same; 10.4.7.12 is used as the example here

[root@master7-12 ~]# mkdir -p /opt/etcd/certs /data/etcd /data/logs/etcd-server

# copy over ca.pem, etcd-peer-key.pem, and etcd-peer.pem
[root@master7-12 ~]# sshpass -p 123456 ssh -o StrictHostKeyChecking=no src7-200.host.com "ls /opt/certs/*.pem"
/opt/certs/ca-key.pem
/opt/certs/ca.pem
/opt/certs/etcd-peer-key.pem
/opt/certs/etcd-peer.pem

[root@master7-12 ~]# i=$(sshpass -p 123456 ssh -o StrictHostKeyChecking=no src7-200.host.com "ls /opt/certs/*.pem |grep -v ca-key.pem") 

[root@master7-12 ~]# for a in $i;do sshpass -p 123456 scp 10.4.7.200:$a /opt/etcd/certs/;done

[root@master7-12 ~]# ll /opt/etcd/certs/
total 12                            # the certificates are in place. Note: the etcd-peer-key.pem private key's permissions must be kept tight -- owner-readable only (mode 600)
-rw-r--r-- 1 root root 1330 Jun  6 21:43 ca.pem
-rw------- 1 root root 1675 Jun  6 21:43 etcd-peer-key.pem
-rw-r--r-- 1 root root 1419 Jun  6 21:43 etcd-peer.pem

3.1.7 Create the etcd Service Startup Script on the etcd Host 10.4.7.12
  • Note: change the IP address to the local host's IP
  • The other nodes are the same; 10.4.7.12 is used as the example here
  • This script registers etcd with systemd; a tool such as supervisor could manage the process in the background instead

# change the owner to etcd
[root@master7-12 etcd]# chown -R etcd.etcd /opt/etcd-v3.3.10/
[root@master7-12 etcd]# ll
total 34300
drwxr-xr-x  2 etcd etcd       66 Jun  6 21:43 certs
drwxr-xr-x 11 etcd etcd     4096 Jun  6 22:02 Documentation
-rwxr-xr-x  1 etcd etcd 19237536 Jun  6 22:02 etcd
-rwxr-xr-x  1 etcd etcd 15817472 Jun  6 22:02 etcdctl
-rw-r--r--  1 etcd etcd      981 Jun  6 21:50 etcd-server-startup.sh
-rw-r--r--  1 etcd etcd    38864 Jun  6 22:02 README-etcdctl.md
-rw-r--r--  1 etcd etcd     7262 Jun  6 22:02 README.md
-rw-r--r--  1 etcd etcd     7855 Jun  6 22:02 READMEv2-etcdctl.md

[root@master7-12 etcd]# chown -R etcd.etcd /data/etcd /data/logs/etcd-server

[root@master7-12 ~]# vi /opt/etcd/etcd-server-startup.sh
#!/bin/bash
# example: bash etcd.sh etcd01 192.168.10.10 etcd02=https://192.168.10.20:2380,etcd03=https://192.168.10.30:2380    ## example invocation for building the cluster

ETCD_NAME="etcd01"         # name of this etcd instance (etcd01 from the example command); use etcd02, etcd03, etc. on the other nodes
ETCD_IP="10.4.7.12"        # IP of this instance (etcd01)
ETCD_CLUSTER="etcd02=https://10.4.7.21:2380,etcd03=https://10.4.7.22:2380"     ## names and IPs of the other members of the etcd cluster

WORK_DIR=/opt/etcd   ## working directory

cat <<EOF >$WORK_DIR/etcd.cfg                      # write the etcd config file

# [Member]                                         # per-member settings

ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/data/logs/etcd-server"             # data directory
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"    # port 2380, for internal peer communication within the cluster
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"         # port 2379, opened by this node (etcd01) to external clients
 
# [Clustering]                                       # cluster-wide settings

ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"                # the peer-communication port, as above
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"            # the client-facing port
ETCD_INITIAL_CLUSTER="${ETCD_NAME}=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"                ## every node in the cluster
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"                                                                  ## the auth token "etcd-cluster" must be identical on every etcd node
ETCD_INITIAL_CLUSTER_STATE="new"                # state: building a new cluster
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service          # systemd unit file
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify                                                                                                        ## type: notify
EnvironmentFile=${WORK_DIR}/etcd.cfg
ExecStart=${WORK_DIR}/etcd \
       --quota-backend-bytes 8000000000 \
       --name=\${ETCD_NAME} \                              # the instance name, etcd01
       --data-dir=\${ETCD_DATA_DIR} \                      # the data directory
       --listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \      # peer-communication port 2380
       --listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \     ## client port 2379, also opened on localhost for local clients
       --advertise-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
       --initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
       --initial-cluster=\${ETCD_INITIAL_CLUSTER} \
       --initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
       --initial-cluster-state=new \
       --ca-file ${WORK_DIR}/certs/ca.pem \
       --cert-file ${WORK_DIR}/certs/etcd-peer.pem \
       --key-file ${WORK_DIR}/certs/etcd-peer-key.pem \
       --client-cert-auth  \
       --trusted-ca-file ${WORK_DIR}/certs/ca.pem \
       --peer-ca-file ${WORK_DIR}/certs/ca.pem \
       --peer-cert-file ${WORK_DIR}/certs/etcd-peer.pem \
       --peer-key-file ${WORK_DIR}/certs/etcd-peer-key.pem \
       --peer-client-cert-auth \
       --peer-trusted-ca-file ${WORK_DIR}/certs/ca.pem \
       --log-output stdout
Restart=on-failure                                       # restart policy
LimitNOFILE=65536                                        # max open files for the process
 
[Install]
WantedBy=multi-user.target                                # run in the multi-user target
EOF
systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
echo "Waiting for the other members to join (edit and run this script on the remaining nodes)..."
# --------------------------

# check the cluster health
[root@master7-12 etcd]# ./etcdctl cluster-health
member 988139385f78284 is healthy: got healthy result from http://127.0.0.1:2379
member 5a0ef2a004fc4349 is healthy: got healthy result from http://127.0.0.1:2379
member f4a0cb0a765574a8 is healthy: got healthy result from http://127.0.0.1:2379
cluster is healthy

# list the cluster members and their roles
[root@node7-21 etcd]# ./etcdctl member list
988139385f78284: name=etcd03 peerURLs=https://10.4.7.22:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.22:2379 isLeader=false
5a0ef2a004fc4349: name=etcd02 peerURLs=https://10.4.7.21:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.21:2379 isLeader=false
f4a0cb0a765574a8: name=etcd01 peerURLs=https://10.4.7.12:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.12:2379 isLeader=true
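
As a final smoke test, write and read back a key through the local client port (the bundled etcdctl speaks the v2 API by default on the 3.3 line; both commands should echo the value):

[root@master7-12 etcd]# ./etcdctl --endpoints http://127.0.0.1:2379 set /test/hello world
world
[root@master7-12 etcd]# ./etcdctl --endpoints http://127.0.0.1:2379 get /test/hello
world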

3.2 Deploy kube-apiserver on the Master Nodes

3.2.1 Download and Extract the Packages

Official releases: https://github.com/kubernetes/kubernetes/releases

  • Download and extract on every node in the cluster
[root@master7-12 src]# wget https://dl.k8s.io/v1.19.11/kubernetes-server-linux-amd64.tar.gz

[root@master7-12 src]# tar zxvf kubernetes-server-linux-amd64.tar.gz -C /opt/

[root@master7-12 src]# mv /opt/kubernetes/ /opt/kubernetes-v1.19.11
[root@master7-12 src]# ln -s /opt/kubernetes-v1.19.11 /opt/kubernetes
# kubernetes-src.tar.gz is the source tarball; it is not needed here and can be deleted
[root@master7-12 kubernetes]# cd /opt/kubernetes
[root@master7-12 kubernetes]# ls
addons  kubernetes-src.tar.gz  LICENSES  server
[root@master7-12 kubernetes]# rm -rf kubernetes-src.tar.gz

# the .tar image archives are not needed either and can all be deleted
[root@master7-12 kubernetes]# cd server/bin/
[root@master7-12 bin]# ls
apiextensions-apiserver    kube-apiserver.tar                  kubelet                kube-scheduler.docker_tag
kubeadm                    kube-controller-manager             kube-proxy             kube-scheduler.tar
kube-aggregator            kube-controller-manager.docker_tag  kube-proxy.docker_tag  mounter
kube-apiserver             kube-controller-manager.tar         kube-proxy.tar
kube-apiserver.docker_tag  kubectl                             kube-scheduler

[root@master7-12 bin]# rm -rf *.tar

3.2.2 Sign the client Certificate
  • Sign the apiserver client certificate, used for apiserver-to-etcd communication: apiserver is the client, etcd is the server

Generate the certificate on 10.4.7.200
Create the JSON config for the certificate signing request (csr)

[root@src7-200 ~]# vi /opt/certs/client-csr.json
{
    "CN": "k8s-node",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}

[root@src7-200 certs]# cd /opt/certs/

[root@src7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json |cfssl-json -bare client

3.2.3 Sign the server Certificate
  • Sign the apiserver server certificate: the certificate apiserver presents for the services it exposes
  • Every IP that might ever join the cluster, as planned, must be written in
    Create the JSON config for the signing request (csr) -- the apiserver server-side certificate
[root@src7-200 certs]# vi /opt/certs/apiserver-csr.json 
{
    "CN": "k8s-apiserver",
    "hosts": [
        "127.0.0.1",
        "10.254.0.1",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local",
        "10.4.7.10",
        "10.4.7.21",
        "10.4.7.22",
        "10.4.7.23"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}

[root@src7-200 certs]#  cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json  |cfssl-json -bare apiserver

3.2.4 Review the K8s Certificates
# six certificates in total
[root@src7-200 certs]# ls -ltr *.pem |grep -v etcd
-rw-r--r-- 1 root root 1330 Jun  6 02:20 ca.pem
-rw------- 1 root root 1679 Jun  6 02:20 ca-key.pem
-rw-r--r-- 1 root root 1359 Jun  7 00:27 client.pem
-rw------- 1 root root 1679 Jun  7 00:27 client-key.pem
-rw-r--r-- 1 root root 1590 Jun  7 00:36 apiserver.pem
-rw------- 1 root root 1679 Jun  7 00:36 apiserver-key.pem

3.2.5 Copy the Certificates Over

Copy from 10.4.7.200 to the nodes in the cluster

# create a directory under bin for the certificates
[root@master7-12 bin]# mkdir /opt/kubernetes/server/bin/cert
# copy the certificates over from 10.4.7.200
[root@master7-12 bin]# pem=$(sshpass -p 123456 ssh -o StrictHostKeyChecking=no src7-200.host.com "ls /opt/certs/*.pem | grep -v etcd")
[root@master7-12 bin]# for i in $pem;do sshpass -p 123456 scp 10.4.7.200:$i /opt/kubernetes/server/bin/cert/;done
[root@master7-12 bin]# ll /opt/kubernetes/server/bin/cert/
total 24
-rw------- 1 root root 1679 Jun  7 01:12 apiserver-key.pem
-rw-r--r-- 1 root root 1590 Jun  7 01:12 apiserver.pem
-rw------- 1 root root 1679 Jun  7 01:12 ca-key.pem
-rw-r--r-- 1 root root 1330 Jun  7 01:12 ca.pem
-rw------- 1 root root 1679 Jun  7 01:12 client-key.pem
-rw-r--r-- 1 root root 1359 Jun  7 01:12 client.pem
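
Before wiring these into apiserver, it is worth confirming they really chain up to the copied CA (openssl ships with CentOS; this check is an addition to the original steps):

[root@master7-12 bin]# cd /opt/kubernetes/server/bin/cert
[root@master7-12 cert]# openssl verify -CAfile ca.pem client.pem apiserver.pem
client.pem: OK
apiserver.pem: OK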

3.2.6 Create the apiserver Startup Script

First create an audit-policy file for apiserver; it is needed when apiserver starts

# create a directory for the configuration
[root@node7-21 bin]# mkdir /opt/kubernetes/server/bin/config
# write the policy file -- it is on the official site and can be pasted in verbatim
[root@node7-21 bin]# cat <<EOF >/opt/kubernetes/server/bin/config/audit.yaml
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]

  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]

  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]

  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"

  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]

  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]

  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.

  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
EOF

Write the startup script

[root@node7-21 bin]# vi /opt/kubernetes/server/bin/kube-apiserver.sh
#!/bin/bash
./kube-apiserver \
  --apiserver-count 2 \         # number of apiserver instances
  --audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log \
  --audit-policy-file ./config/audit.yaml \
  --authorization-mode RBAC \
  --client-ca-file ./cert/ca.pem \
  --requestheader-client-ca-file ./cert/ca.pem \
  --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
  --etcd-cafile ./cert/ca.pem \
  --etcd-certfile ./cert/client.pem \
  --etcd-keyfile ./cert/client-key.pem \
  --etcd-servers https://10.4.7.12:2379,https://10.4.7.21:2379,https://10.4.7.22:2379 \
  --service-account-key-file ./cert/ca-key.pem \
  --service-cluster-ip-range 10.254.0.0/16 \
  --service-node-port-range 3000-29999 \
  --target-ram-mb=1024 \
  --kubelet-client-certificate ./cert/client.pem \
  --kubelet-client-key ./cert/client-key.pem \
  --log-dir  /data/logs/kubernetes/kube-apiserver \
  --tls-cert-file ./cert/apiserver.pem \
  --tls-private-key-file ./cert/apiserver-key.pem \
  --v 2

[root@node7-21 bin]# chmod +x kube-apiserver.sh

Install supervisord and hand the apiserver script over to it, so that supervisord supervises the script's process
(similar in role to systemd: it starts programs in the background)

[root@node7-21 bin]# yum -y install epel-release
# install supervisord
[root@node7-21 bin]# yum -y install supervisor

[root@node7-21 bin]# systemctl start supervisord
[root@node7-21 bin]# systemctl enable supervisord

# create the background program definition (a .ini config file)
[root@node7-21 bin]# vi /etc/supervisord.d/kube-apiserver.ini
[program:kube-apiserver-7-12]                                   ; rename according to the actual host IP
command=/opt/kubernetes/server/bin/kube-apiserver.sh            ; the program (relative uses PATH, can take args)
numprocs=1                                                      ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                            ; directory to cwd to before exec (def no cwd)
autostart=true                                                  ; start at supervisord start (default: true)
autorestart=true                                                ; retstart at unexpected quit (default: true)
startsecs=30                                                    ; number of secs prog must stay running (def. 1)
startretries=3                                                  ; max # of serial start failures (default 3)
exitcodes=0,2                                                   ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                 ; signal used to kill process (default TERM)
stopwaitsecs=10                                                 ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                       ; setuid to this UNIX account to run the program
redirect_stderr=true                                            ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stdout.log        ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                    ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                        ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                     ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false     

# create the log directory
[root@node7-21 bin]# mkdir -p /data/logs/kubernetes/kube-apiserver
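
With the .ini in place, let supervisord pick it up and confirm the program reaches RUNNING (startsecs=30, so it takes about half a minute to settle; the program name follows the [program:...] header):

[root@node7-21 bin]# supervisorctl update
[root@node7-21 bin]# supervisorctl status
# expected after ~30s: kube-apiserver-7-12   RUNNING   pid <pid>, uptime <uptime>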

III. Deploying traefik-ingress

  • Official repo: https://github.com/containous/traefik

1. Deploy the ingress

  • Create the RBAC role-based access control
[root@master01 traefik]# cat rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system

  • Create the ingress-controller
[root@master01 traefik]# vim ds.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: traefik-ingress-lb
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      hostNetwork: true
      restartPolicy: Always
      serviceAccountName: traefik-ingress-controller
      containers:
      - image: traefik:v1.7
        name: traefik-ingress-lb
        resources:
          limits:
            cpu: 200m
            memory: 30Mi
          requests:
            cpu: 100m
            memory: 20Mi
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: admin
          containerPort: 8080
          hostPort: 8080
        args:
        - --web
        - --web.address=:8080
        - --kubernetes
        - --api
        - --logLevel=INFO
        - --insecureskipverify=true
        - --kubernetes.endpoint=https://192.168.2.100:16443
        - --accesslog
        - --accesslog.filepath=/var/log/traefik_access.log
        - --traefiklog
        - --traefiklog.filepath=/var/log/traefik.log
        - --metrics.prometheus

  • Create the traefik web UI service
[root@master01 traefik]# vim svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - name: web
    port: 80
    targetPort: 8080

  • Create the ingress for the traefik web management UI
[root@master01 traefik]# vim ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  rules:
  - host: traefik.prod.com
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-web-ui
          servicePort: web
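
Apply the four manifests and check that the DaemonSet pods come up (file names as above; the pods run with hostNetwork, so ports 80/8080 must be free on each node):

[root@master01 traefik]# kubectl apply -f rbac.yaml
[root@master01 traefik]# kubectl apply -f ds.yaml
[root@master01 traefik]# kubectl apply -f svc.yaml
[root@master01 traefik]# kubectl apply -f ingress.yaml
[root@master01 traefik]# kubectl -n kube-system get pods -l k8s-app=traefik-ingress-lb -o wide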

2. Use nginx for Traffic Scheduling
  • Install nginx on the master nodes and configure the proxy
[root@master01 traefik]# cat /etc/nginx/conf.d/od.com.conf
upstream default_backend_traefik {
    server 10.4.7.14:81    max_fails=3 fail_timeout=10s;
    server 10.4.7.15:81    max_fails=3 fail_timeout=10s;
}
server {
    server_name *.prod.com;

    location / {
        proxy_pass http://default_backend_traefik;
        proxy_set_header Host       $http_host;
        proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
    }
}
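
The proxy chain can be tested from a master node itself: nginx listens on 80, matches *.prod.com via the Host header, and forwards to the traefik upstream (a sketch; any host that can reach the nginx layer works the same way):

# request the traefik web UI through nginx; a 200 means the whole chain is up
[root@master01 traefik]# curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: traefik.prod.com' http://127.0.0.1/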

3. Publish an nginx Ingress as a Test
  • Create the pod and svc
[root@master01 nginx]# cat nginx-svc.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx-deployment
  namespace: default
  labels:
    k8s-app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: nginx
  template:
    metadata:
      labels:
        k8s-app: nginx
        name: nginx
    spec:
      containers:
      - image: nginx:1.7.9
        name: nginx-v1-7-9
        ports:
        - name: http
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: default
spec:
  selector:
    k8s-app: nginx
  ports:
  - name: web
    port: 80
    targetPort: 80

  • Publish an ingress
[root@master01 nginx]# cat ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-ingress
  namespace: default
spec:
  rules:
  - host: nginx.prod.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-service
          servicePort: 80
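
Apply both manifests and test the path through the proxy layer (assuming the Deployment/Service file is saved as nginx-svc.yaml, matching the prompt above):

[root@master01 nginx]# kubectl apply -f nginx-svc.yaml
[root@master01 nginx]# kubectl apply -f ingress.yaml
# once the pod is Running, this should return the nginx welcome page
[root@master01 nginx]# curl -s -H 'Host: nginx.prod.com' http://127.0.0.1/ | head -n 4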

IV. Deploying the Dashboard

  • Official addon: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dashboard
  • https://github.com/kubernetes/dashboard
    Pull the image and push it to the registry
[root@master01 ~]# docker pull k8scn/kubernetes-dashboard-amd64:v1.8.3
[root@master01 ~]# docker tag fcac9aa03fd6 registry.host.com:5000/dashboard:v1.8.3
[root@master01 ~]# docker push registry.host.com:5000/dashboard:v1.8.3

Create the dashboard YAML files

[root@master01 ~]# cd /k8s-yaml/
[root@master01 k8s-yaml]# mkdir dashboard
[root@master01 k8s-yaml]# cd dashboard/
[root@master01 dashboard]# vim rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system

[root@master01 dashboard]# cat dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      priorityClassName: system-cluster-critical
      containers:
      - name: kubernetes-dashboard
        image: registry.host.com:5000/dashboard:v1.8.3
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 50m
            memory: 100Mi
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          # PLATFORM-SPECIFIC ARGS HERE
          - --auto-generate-certificates
        volumeMounts:
        - name: tmp-volume
          mountPath: /tmp
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard-admin
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"

[root@master01 dashboard]# cat svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 443
    targetPort: 8443

[root@master01 dashboard]# cat ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: dashboard.prod.com
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443

[root@master01 dashboard]# kubectl apply -f rbac.yaml

[root@master01 dashboard]# kubectl apply -f dp.yaml

[root@master01 dashboard]# kubectl apply -f svc.yaml

[root@master01 dashboard]# kubectl apply -f ingress.yaml

[root@master01 dashboard]# kubectl get ingress -n kube-system
NAME                   HOSTS                ADDRESS   PORTS   AGE
kubernetes-dashboard   dashboard.prod.com             80      70s
traefik-web-ui         traefik.prod.com               80      2d

Configure DNS

[root@master01 dashboard]# vim /var/named/prod.com.zone
$ORIGIN prod.com.
$TTL 600    ; 10 minutes
@       IN SOA    dns.prod.com. dnsadmin.host.com. (
                  2019111001 ; serial
                  10800      ; refresh (3 hours)
                  900        ; retry (15 minutes)
                  604800     ; expire (1 week)
                  86400      ; minimum (1 day)
                  )
              NS   dns.prod.com.
$TTL 60       ; 1 minute
harbor     A    10.4.7.11
dns        A    10.4.7.11
traefik    A    10.4.7.253
dashboard  A    10.4.7.253

[root@master01 dashboard]# dig dashboard.prod.com @10.4.7.11 +short
10.4.7.253
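
Dashboard v1.8.3 accepts token login; the admin ServiceAccount's token can be read out of its secret (a sketch, assuming the rbac.yaml above has been applied):

[root@master01 dashboard]# kubectl -n kube-system describe secret \
    $(kubectl -n kube-system get secret | awk '/kubernetes-dashboard-admin/{print $1}') | grep ^token
# paste the printed token into the login page at dashboard.prod.com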
