A Complete K8s Installation Guide
Server specs: 1 ops host (2 cores, 2G RAM, 50G disk), 1 k8s_master server (2 cores, 2G RAM, 50G disk), 2 k8s_node servers (2 cores, 4G RAM, 100G disk).
IP plan: ops host 192.168.16.1, master server 192.168.16.2, node1 server 192.168.16.11, node2 server 192.168.16.12, pod network 172.7.0.0/16, Service (cluster) network 10.254.0.0/16.
A minimal conceptual diagram
Possible problems
1. IPVS modules not loaded
conf]# cd /root/
Run the script
conf]# ./ipvs.sh
2. If Harbor is not running after a reboot
harbor]# docker-compose up -d
Server specs
1 nginx reverse-proxy server: 2 cores, 4G RAM, 100G disk
1 storage server: 2 cores, 4G RAM, 100G disk
2 k8s servers: 2 cores, 4G RAM, 100G disk
IP plan
nginx reverse-proxy server IP: 192.168.16.2
k8s node server IP: 192.168.16.11
k8s node server IP: 192.168.16.12
storage server IP: 192.168.16.1
Pod network on each k8s host: 172.7.(last octet of the host's IP).1/24
Service (cluster) IP range: 10.254.0.0/16
Hostnames and domains
Hostnames: k162, k1611, k1612
Internal domain: host.com
External domain: od.com
Set the hostname
Run on the 162, 1611 and 1612 servers
vim /etc/hostname
[root@localhost ~]# vi /etc/hostname
[root@localhost ~]# cat /etc/hostname
1611.host.com
The hostname change takes effect after a reboot.
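If you would rather not reboot, hostnamectl can apply the new name immediately (a minimal sketch; substitute each host's own name):
hostnamectl set-hostname k1611.host.com
# confirm
hostname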
Configure the cluster to use the Tsinghua (TUNA) mirror
Run on the 162, 1611 and 1612 servers
cat > 01.yumrepo.sh << 'EOF'
# create a backup directory
mkdir -p /etc/yum.repos.d/repo.bak/
# back up the existing repo files
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/repo.bak/
# write the mirror configuration into a yum repo file
cat > /etc/yum.repos.d/centos-tuna.repo << 'EOO'
#CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client. You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#
#
[base]
name=CentOS-\$releasever - Base
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/\$releasever/os/\$basearch/
enabled=1
gpgcheck=0
#released updates
[updates]
name=CentOS-\$releasever - Updates
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/\$releasever/updates/\$basearch/
#baseurl=https://mirrors.aliyun.com/centos/\$releasever/updates/\$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=\$releasever&arch=\$basearch&repo=updates
enabled=1
gpgcheck=0
#additional packages that may be useful
[centosplus]
name=CentOS-\$releasever - Plus
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/\$releasever/centosplus/\$basearch/
#baseurl=https://mirrors.aliyun.com/centos/\$releasever/centosplus/\$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=\$releasever&arch=\$basearch&repo=centosplus
enabled=1
gpgcheck=0
[cloud]
name=CentOS-\$releasever - Cloud
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/\$releasever/cloud/\$basearch/openstack-train/
#baseurl=https://mirrors.aliyun.com/centos/\$releasever/cloud/\$basearch/openstack-train/
enabled=1
gpgcheck=0
[paas]
name=CentOS-\$releasever - paas
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/\$releasever/paas/\$basearch/openshift-origin13/
#baseurl=https://mirrors.aliyun.com/centos/\$releasever/paas/\$basearch/openshift-origin13/
enabled=1
gpgcheck=0
[kvm]
name=CentOS-\$releasever - kvm
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/\$releasever/virt/\$basearch/kvm-common/
#baseurl=https://mirrors.aliyun.com/centos/\$releasever/virt/\$basearch/kvm-common/
enabled=1
gpgcheck=0
[extras]
name=CentOS-\$releasever - extras
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/\$releasever/extras/\$basearch/
#baseurl=https://mirrors.aliyun.com/centos/\$releasever/extras/\$basearch/
enabled=1
gpgcheck=0
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=0
[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch/debug
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=0
[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/SRPMS
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=0
EOO
# clear the yum cache
yum clean all
# rebuild the yum cache
yum makecache
EOF
# run the script
bash 01.yumrepo.sh
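Optionally, confirm the new repos are active before continuing (a quick check, nothing more):
# list the enabled repositories and their package counts
yum repolist enabled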
Install common yum packages
Run on the 162, 1611 and 1612 servers
yum install -y epel-release wget net-tools telnet tree nmap sysstat lrzsz dos2unix bind-utils vim unzip zip
Install the DNS service (bind9)
Run on the k162 server
~]# yum install -y bind
~]# vi /etc/named.conf
Change the following parameters
13 listen-on port 53 { 192.168.16.2; }; # listen on this host's IP
14 listen-on-v6 port 53 { ::1; }; # delete this line; do not listen on IPv6
20 allow-query { any; }; # allow queries from any host
21 forwarders { 192.168.16.254; }; # the upstream (office network) DNS
//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
// See the BIND Administrator's Reference Manual (ARM) for details about the
// configuration located in /usr/share/doc/bind-{version}/Bv9ARM.html
options {
listen-on port 53 { 192.168.16.2; };
listen-on-v6 port 53 { ::1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
recursing-file "/var/named/data/named.recursing";
secroots-file "/var/named/data/named.secroots";
allow-query { any; };
forwarders { 192.168.16.254; };
/*
- If you are building an AUTHORITATIVE DNS server, do NOT enable recursion.
- If you are building a RECURSIVE (caching) DNS server, you need to enable
recursion.
- If your recursive DNS server has a public IP address, you MUST enable access
control to limit queries to your legitimate users. Failing to do so will
cause your server to become part of large scale DNS amplification
attacks. Implementing BCP38 within your network would greatly
reduce such attack surface
*/
recursion yes;
dnssec-enable yes;
dnssec-validation yes;
/* Path to ISC DLV key */
bindkeys-file "/etc/named.root.key";
managed-keys-directory "/var/named/dynamic";
pid-file "/run/named/named.pid";
session-keyfile "/run/named/session.key";
};
logging {
channel default_debug {
file "data/named.run";
severity dynamic;
};
};
zone "." IN {
type hint;
file "named.ca";
};
include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
Configure the zone declarations
Append at the end of the file:
zone "host.com" IN {
type master;
file "host.com.zone";
allow-update { 192.168.16.2; };
};
zone "od.com" IN {
type master;
file "od.com.zone";
allow-update { 192.168.16.2; };
};
[root@162 ~]# cat /etc/named.rfc1912.zones
// named.rfc1912.zones:
//
// Provided by Red Hat caching-nameserver package
//
// ISC BIND named zone configuration for zones recommended by
// RFC 1912 section 4.1 : localhost TLDs and address zones
// and http://www.ietf.org/internet-drafts/draft-ietf-dnsop-default-local-zones-02.txt
// (c)2007 R W Franks
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
zone "localhost.localdomain" IN {
type master;
file "named.localhost";
allow-update { none; };
};
zone "localhost" IN {
type master;
file "named.localhost";
allow-update { none; };
};
zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa" IN {
type master;
file "named.loopback";
allow-update { none; };
};
zone "1.0.0.127.in-addr.arpa" IN {
type master;
file "named.loopback";
allow-update { none; };
};
zone "0.in-addr.arpa" IN {
type master;
file "named.empty";
allow-update { none; };
};
zone "host.com" IN {
type master;
file "host.com.zone";
allow-update { 192.168.16.2; };
};
zone "od.com" IN {
type master;
file "od.com.zone";
allow-update { 192.168.16.2; };
};
Configure the zone data files
vi /var/named/host.com.zone
vi /var/named/od.com.zone
$TTL 600 ; 10 minutes
@ IN SOA dns.host.com. dnsadmin.host.com. ( # start of authority (SOA record); dnsadmin.host.com is the admin contact
2019120901 ; serial # typically the date the zone was created
10800 ; refresh (3 hours)
900 ; retry (15 minutes)
604800 ; expire (1 week)
86400 ; minimum (1 day)
)
NS dns.host.com. # NS record
$TTL 60 ; 1 minute
dns A 192.168.16.2 # A record
[root@k162 ~]# cat /var/named/od.com.zone
$ORIGIN od.com.
$TTL 600 ; 10 minutes
@ IN SOA dns.od.com. dnsadmin.od.com. (
2020122601 ; serial
10800 ; refresh (3 hours)
900 ; retry (15 minutes)
604800 ; expire (1 week)
86400 ; minimum (1 day)
)
NS dns.od.com.
$TTL 60 ; 1 minute
dns A 192.168.16.2
* A 192.168.16.2
[root@k162 ~]# cat /var/named/host.com.zone
$ORIGIN host.com.
$TTL 600 ; 10 minutes
@ IN SOA dns.host.com. dnsadmin.host.com. (
2020122605 ; serial
10800 ; refresh (3 hours)
900 ; retry (15 minutes)
604800 ; expire (1 week)
86400 ; minimum (1 day)
)
NS dns.host.com.
$TTL 60 ; 1 minute
dns A 192.168.16.2
k162 IN A 192.168.16.2
k1611 IN A 192.168.16.11
k1612 IN A 192.168.16.12
Change the owner/group and permissions of the zone files
named]# chown root:named /var/named/host.com.zone
named]# chown root:named /var/named/od.com.zone
named]# chmod 640 /var/named/host.com.zone
named]# chmod 640 /var/named/od.com.zone
Start named
named]# systemctl restart named
named]# systemctl enable named
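Before relying on the new DNS server, it is worth validating the config and zones and running a test query (a sketch; adjust names if yours differ):
# syntax-check the main config and both zone files
named-checkconf /etc/named.conf
named-checkzone host.com /var/named/host.com.zone
named-checkzone od.com /var/named/od.com.zone
# query the server directly
dig -t A k1611.host.com @192.168.16.2 +short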
Point the NICs' DNS at the new server
Run on the k162, k1611 and k1612 servers
vi /etc/sysconfig/network-scripts/ifcfg-ens192
Set DNS1=192.168.16.2
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens192
UUID=7f1002fc-2afb-4809-aee0-bc3c6a1caae3
DEVICE=ens192
ONBOOT=yes
IPADDR=192.168.16.2
PREFIX=24
GATEWAY=192.168.16.254
DNS1=192.168.16.2
IPV6_PRIVACY=no
[root@k16_11 ~]# systemctl restart network
After the hostname is changed, search host.com is automatically added to /etc/resolv.conf, so short names resolve.
named]# cat /etc/resolv.conf
search host.com
Check name resolution
Try pinging the short and full domain names from any host
[root@k1611 ~]# ping 1612.host.com
PING 1612.host.com (192.168.16.12) 56(84) bytes of data.
64 bytes from 192.168.16.12 (192.168.16.12): icmp_seq=1 ttl=64 time=0.460 ms
64 bytes from 192.168.16.12 (192.168.16.12): icmp_seq=2 ttl=64 time=0.348 ms
[root@k1611 ~]# ping k1612
PING k1612.host.com (192.168.16.12) 56(84) bytes of data.
64 bytes from 192.168.16.12 (192.168.16.12): icmp_seq=1 ttl=64 time=0.451 ms
Install the Harbor service
Run on the k162 server
~]# mkdir install/
~]# cd install/
Download the Harbor offline installer package to this host (download URL not provided in the original)
Extract it under /opt
install]# tar zxvf harbor-offline-installer-v1.8.3.tgz -C /opt/
install]# cd /opt
opt]# mv harbor/ harbor-v1.8.3
Create a symlink
opt]# ln -s /opt/harbor-v1.8.3/ /opt/harbor
Edit the Harbor configuration
opt]# cd harbor
harbor]# vi harbor.yml
Change the following settings
5 hostname: harbor.od.com
10 port: 180
27 harbor_admin_password: Harbor12345
40 data_volume: /data/harbor
87 location: /data/harbor/logs # change the log storage path
Create the log directory
harbor]# mkdir -p /data/harbor/logs
Install docker
Run on the k162, k1611 and k1612 servers
Install docker from the Aliyun mirror; for convenience, switch the default yum repos
Clear the previous yum cache
yum clean all
Switch CentOS 7 yum to the Aliyun mirror
1 Back up the local repo file (optional, makes little difference)
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo_bak
2 Fetch the Aliyun repo file
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
3 Update the epel repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
4 Rebuild the cache
yum makecache
Install docker from Aliyun
~]# curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
Create the directories docker will use
mkdir -p /etc/docker
mkdir -p /data/docker
]# vi /etc/docker/daemon.json
Remove the # comment before saving (JSON does not allow comments)
{
"graph": "/data/docker",
"storage-driver": "overlay2",
"insecure-registries": ["registry.access.redhat.com","quay.io","harbor.od.com"],
"registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],
"bip": "172.7.200.1/24", # 定义k8s主机上k8s pod的ip地址网段 -- 改成node节点的ip
"exec-opts": ["native.cgroupdriver=systemd"],
"live-restore": true
}
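Because bip must match each node's pod subnet (172.7.<last octet>.1/24 in this guide), one option is to derive it from the host IP instead of editing each file by hand; a sketch, assuming the interface is ens192:
# last octet of this host's IP on ens192 (interface name is an assumption)
OCTET=$(ip -4 addr show ens192 | awk '/inet /{split($2,a,"[./]"); print a[4]}')
# rewrite the bip entry in daemon.json accordingly
sed -i "s#\"bip\": \".*\"#\"bip\": \"172.7.${OCTET}.1/24\"#" /etc/docker/daemon.json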
Start docker
systemctl start docker
systemctl enable docker
Start the Harbor service
Run on the k162 server
docker-compose is a single-host orchestration tool
]# cd /opt/harbor
harbor]# yum install -y docker-compose
Install Harbor
harbor]# ./install.sh
If Harbor cannot be reached after a reboot, check
docker-compose ps
and make sure every service is Up; if not, run
harbor]# docker-compose up -d
Install nginx as a reverse proxy
harbor]# yum install -y nginx
Reverse-proxy harbor.od.com on port 80 to port 180 (the port set in harbor.yml above)
harbor]# vi /etc/nginx/conf.d/harbor.od.com.conf
server {
listen 80;
server_name harbor.od.com;
client_max_body_size 1000m;
location / {
proxy_pass http://127.0.0.1:180;
}
}
Start nginx
harbor]# systemctl start nginx
harbor]# systemctl enable nginx
On the DNS server, add a record for harbor
[root@k162 harbor]# vi /var/named/od.com.zone
$ORIGIN od.com.
$TTL 600 ; 10 minutes
@ IN SOA dns.od.com. dnsadmin.od.com. (
2020122602 ; serial
10800 ; refresh (3 hours)
900 ; retry (15 minutes)
604800 ; expire (1 week)
86400 ; minimum (1 day)
)
NS dns.od.com.
$TTL 60 ; 1 minute
dns A 192.168.16.2
t A 192.168.16.2
harbor A 192.168.16.2
After updating the zone, restart the service for the change to take effect
systemctl restart named
Check that the record was added
harbor]# dig -t A harbor.od.com +short
192.168.16.2
http://harbor.od.com/
The username is admin and the password is Harbor12345, as set in the configuration file
Create a new project named public and mark it as public
harbor]# docker pull nginx:1.7.9
Tag the local nginx image
harbor]# docker tag nginx:1.7.9 harbor.od.com/public/nginx:v1.7.9
Log in to Harbor
harbor]# docker login harbor.od.com
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
harbor]# docker push harbor.od.com/public/nginx:v1.7.9
Push succeeded
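To confirm the registry works end to end, you can pull the image back from one of the node hosts (assuming that node's docker already lists harbor.od.com under insecure-registries and can resolve the name):
# on k1611 or k1612; the public project needs no login for pulls
docker pull harbor.od.com/public/nginx:v1.7.9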
Issue certificates (the tedious part)
Run on k162
Download the certificate issuing tools (cfssl)
~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/bin/cfssl
~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/bin/cfssl-json
~]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/bin/cfssl-certinfo
Make them executable
chmod +x /usr/bin/cfssl*
Check that the commands are on PATH
harbor]# which cfssl-certinfo
/usr/bin/cfssl-certinfo
Issue the certificates
[root@hdss7-200 ~]# cd /opt/
[root@hdss7-200 opt]# mkdir certs
[root@hdss7-200 opt]# cd certs/
Issue the root (CA) certificate -- create the JSON config for the CA certificate signing request (CSR)
A brief explanation of the fields:
{
"CN": "OldboyEdu", # 机构名称,浏览器使用该字段验证网站是否合法,一般写的是域名,非常重要,浏览器使用该字段验证网站是否合法
"hosts": [
],
"key": {
"algo": "rsa", # 算法
"size": 2048 # 长度
},
"names": [
{
"C": "CN", # C,国家
"ST": "beijing", # ST 州,省
"L": "beijing", # L 地区 城市
"O": "od", # O 组织名称,公司名称
"OU": "ops" # OU 组织单位名称,公司部门
}
],
"ca": {
"expiry": "175200h" # expiry 过期时间,任何证书都有过期时间.20年
}
}
~]# vi /opt/certs/ca-csr.json
{
"CN": "OldboyEdu",
"hosts": [
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
],
"ca": {
"expiry": "175200h"
}
}
Generate the self-signed CA certificate
certs]# cfssl gencert -initca ca-csr.json | cfssl-json -bare ca
certs]# ll
total 16
-rw-r--r-- 1 root root  993 Dec 10 11:54 ca.csr
-rw-r--r-- 1 root root  328 Dec 10 11:53 ca-csr.json
-rw------- 1 root root 1679 Dec 10 11:54 ca-key.pem # the CA's private key
-rw-r--r-- 1 root root 1346 Dec 10 11:54 ca.pem # the CA (root) certificate
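A quick, optional sanity check of the new CA:
# print the certificate's subject, issuer and validity period
cfssl-certinfo -cert ca.pem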
ETCD servers
On k162, create the CA-based signing config file
~]# vi /opt/certs/ca-config.json
It defines the signing profiles used below (server, client, peer):
{
"signing": {
"default": {
"expiry": "175200h"
},
"profiles": {
"server": {
"expiry": "175200h",
"usages": [
"signing",
"key encipherment",
"server auth"
]
},
"client": {
"expiry": "175200h",
"usages": [
"signing",
"key encipherment",
"client auth"
]
},
"peer": {
"expiry": "175200h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
Declare the etcd servers allowed to use this certificate (these IPs are fixed once the certificate is issued and cannot be added later, so it is wise to list a few spare IPs for future etcd servers)
[root@k162 certs]# vi etcd-peer-csr.json
{
"CN": "k8s-etcd",
"hosts": [
"192.168.16.2",
"192.168.16.11",
"192.168.16.12"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
]
}
ca.pem ca-key.pem ca-config.json etcd-peer-csr.json
Generate the certificate
certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json |cfssl-json -bare etcd-peer
Inspect the generated certificates
[root@k162 certs]# ll
total 36
-rw-r--r--. 1 root root  840 Dec 27 16:47 ca-config.json
-rw-r--r--. 1 root root  993 Dec 27 16:23 ca.csr
-rw-r--r--. 1 root root  334 Dec 27 16:22 ca-csr.json
-rw-------. 1 root root 1671 Dec 27 16:23 ca-key.pem
-rw-r--r--. 1 root root 1346 Dec 27 16:23 ca.pem
-rw-r--r--. 1 root root 1054 Dec 27 16:50 etcd-peer.csr
-rw-r--r--. 1 root root  353 Dec 27 16:50 etcd-peer-csr.json
-rw-------. 1 root root 1679 Dec 27 16:50 etcd-peer-key.pem
-rw-r--r--. 1 root root 1419 Dec 27 16:50 etcd-peer.pem
On the k162, k1611 and k1612 hosts, create an etcd user (no home directory)
[root@k162 install]# useradd -s /sbin/nologin -M etcd
[root@k162 install]# id etcd
uid=1000(etcd) gid=1000(etcd) groups=1000(etcd)
[root@k162 install]# mkdir /opt/src
[root@k162 install]#
[root@k162 install]# cd /opt/src/
Download etcd (the tarball used below is etcd-v3.1.20-linux-amd64.tar.gz; download URL not given in the original)
[root@k162 install]# tar xfv etcd-v3.1.20-linux-amd64.tar.gz -C /opt/
[root@k162 install]# cd /opt
opt]# mv etcd-v3.1.20-linux-amd64 etcd-v3.1.20
opt]# ln -s /opt/etcd-v3.1.20 /opt/etcd
Create directories and copy over the certificate and private key
mkdir -p /opt/etcd/certs /data/etcd /data/logs/etcd-server
cd /opt/etcd/certs
Copy the generated etcd-peer-key.pem, etcd-peer.pem and ca.pem into the certs directory (the .csr files and the CSR JSON are not needed here); they let etcd authenticate peers and clients.
certs]# scp k162:/opt/certs/ca.pem .
certs]# scp k162:/opt/certs/etcd-peer-key.pem .
certs]# scp k162:/opt/certs/etcd-peer.pem .
Change the owner and group
certs]# chown -R etcd.etcd /opt/etcd/certs /data/etcd /data/logs/etcd-server
Create the etcd startup script (change the IP addresses to this host's IP); you can also upload the file directly
certs]# vi /opt/etcd/etcd-server-startup.sh
--initial-cluster lists every etcd member (name=https://IP:2380). The member count must be odd (3, 5 or 7 recommended); more members make reads faster but writes slower.
#!/bin/sh
./etcd --name etcd-server-16-2 \
--data-dir /data/etcd/etcd-server \
--listen-peer-urls https://192.168.16.2:2380 \
--listen-client-urls https://192.168.16.2:2379,http://127.0.0.1:2379 \
--quota-backend-bytes 8000000000 \
--initial-advertise-peer-urls https://192.168.16.2:2380 \
--advertise-client-urls https://192.168.16.2:2379,http://127.0.0.1:2379 \
--initial-cluster etcd-server-16-2=https://192.168.16.2:2380,etcd-server-16-11=https://192.168.16.11:2380,etcd-server-16-12=https://192.168.16.12:2380 \
--ca-file ./certs/ca.pem \
--cert-file ./certs/etcd-peer.pem \
--key-file ./certs/etcd-peer-key.pem \
--client-cert-auth \
--trusted-ca-file ./certs/ca.pem \
--peer-ca-file ./certs/ca.pem \
--peer-cert-file ./certs/etcd-peer.pem \
--peer-key-file ./certs/etcd-peer-key.pem \
--peer-client-cert-auth \
--peer-trusted-ca-file ./certs/ca.pem \
--log-output stdout
Make it executable
certs]# chmod +x /opt/etcd/etcd-server-startup.sh
Change the owner and group
certs]# chown -R etcd.etcd /opt/etcd-v3.1.20/ /data/etcd /data/logs/etcd-server
Run etcd in the background (via supervisord)
yum install supervisor -y
systemctl start supervisord
systemctl enable supervisord
Edit the supervisord unit; the [program:etcd-server-16-2] name must be adjusted per host
logs]# vi /etc/supervisord.d/etcd-server.ini
[program:etcd-server-16-2]
command=/opt/etcd/etcd-server-startup.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/etcd ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=etcd ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/etcd-server/etcd.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
Register and start etcd in the background
certs]# supervisorctl update
etcd-server-16-2: added process group
Check the status
etcd]# supervisorctl status
etcd-server-16-2 RUNNING pid 32121, uptime 0:00:35
If startup misbehaves, enter supervisorctl and run reload, or load the programs one at a time; hosts with less than 4G of RAM are prone to problems here.
If startup fails, run the etcd startup script by hand to see the error output.
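Once all three members are running, the cluster can be checked from any etcd host (a sketch; paths follow this guide's layout):
cd /opt/etcd
./etcdctl cluster-health
./etcdctl member list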
Install and deploy the control-plane service: apiserver
On k162, create the certificates
Create the JSON config for the client certificate signing request (CSR)
~]# vi /opt/certs/client-csr.json
{
"CN": "k8s-node",
"hosts": [
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
]
}
certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json |cfssl-json -bare client
Create the JSON config for the apiserver signing request (CSR)
certs]# vi apiserver-csr.json
{
"CN": "k8s-apiserver",
"hosts": [
"127.0.0.1",
"10.254.0.1",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local",
"192.168.16.2",
"192.168.16.11",
"192.168.16.12"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
]
}
certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json |cfssl-json -bare apiserver
On the k1611 and k1612 servers
The apiserver runs on these nodes and is the entry point the cluster uses to find and manage pods
src]# cd /root/install/
Upload kubernetes-server-linux-amd64-v1.15.2.tar.gz
install]# tar xf kubernetes-server-linux-amd64-v1.15.2.tar.gz -C /opt/
install]# cd /opt/
opt]# mv kubernetes/ kubernetes-v1.15.2
Create a symlink
opt]# ln -s /opt/kubernetes-v1.15.2/ /opt/kubernetes
opt]# cd kubernetes
kubernetes]# ls
Delete the source tarball
kubernetes]# rm -rf kubernetes-src.tar.gz
kubernetes]# cd server/bin/
Delete unneeded files (docker image tarballs, etc.)
bin]# rm -rf *.tar
bin]# rm -rf *_tag
Copy the certificates
~]# cd /opt/kubernetes/server/bin/
bin]# mkdir cert
bin]# cd cert/
The trailing dot means copy into the current directory
cert]# scp k162:/opt/certs/ca.pem .
cert]# scp k162:/opt/certs/ca-key.pem .
cert]# scp k162:/opt/certs/client.pem .
cert]# scp k162:/opt/certs/client-key.pem .
cert]# scp k162:/opt/certs/apiserver.pem .
cert]# scp k162:/opt/certs/apiserver-key.pem .
Create the startup config files -- or upload them directly
cert]# cd /opt/kubernetes/server/bin
bin]# mkdir conf
bin]# cd conf/
conf]# vi audit.yaml
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
- "RequestReceived"
rules:
# Log pod changes at RequestResponse level
- level: RequestResponse
resources:
- group: ""
# Resource "pods" doesn't match requests to any subresource of pods,
# which is consistent with the RBAC policy.
resources: ["pods"]
# Log "pods/log", "pods/status" at Metadata level
- level: Metadata
resources:
- group: ""
resources: ["pods/log", "pods/status"]
# Don't log requests to a configmap called "controller-leader"
- level: None
resources:
- group: ""
resources: ["configmaps"]
resourceNames: ["controller-leader"]
# Don't log watch requests by the "system:kube-proxy" on endpoints or services
- level: None
users: ["system:kube-proxy"]
verbs: ["watch"]
resources:
- group: "" # core API group
resources: ["endpoints", "services"]
# Don't log authenticated requests to certain non-resource URL paths.
- level: None
userGroups: ["system:authenticated"]
nonResourceURLs:
- "/api*" # Wildcard matching.
- "/version"
# Log the request body of configmap changes in kube-system.
- level: Request
resources:
- group: "" # core API group
resources: ["configmaps"]
# This rule only applies to resources in the "kube-system" namespace.
# The empty string "" can be used to select non-namespaced resources.
namespaces: ["kube-system"]
# Log configmap and secret changes in all other namespaces at the Metadata level.
- level: Metadata
resources:
- group: "" # core API group
resources: ["secrets", "configmaps"]
# Log all other resources in core and extensions at the Request level.
- level: Request
resources:
- group: "" # core API group
- group: "extensions" # Version of group should NOT be included.
# A catch-all rule to log all other requests at the Metadata level.
- level: Metadata
# Long-running requests like watches that fall under this rule will not
# generate an audit event in RequestReceived.
omitStages:
- "RequestReceived"
conf]# cat /opt/kubernetes/server/bin/kube-apiserver.sh
--apiserver-count 2 \ the number of apiserver instances (one per master node here)
--etcd-servers is set to the etcd server endpoints (3, 5 or 7 members)
#!/bin/bash
./kube-apiserver \
--apiserver-count 2 \
--audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log \
--audit-policy-file ./conf/audit.yaml \
--authorization-mode RBAC \
--client-ca-file ./cert/ca.pem \
--requestheader-client-ca-file ./cert/ca.pem \
--enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
--etcd-cafile ./cert/ca.pem \
--etcd-certfile ./cert/client.pem \
--etcd-keyfile ./cert/client-key.pem \
--etcd-servers https://192.168.16.2:2379,https://192.168.16.11:2379,https://192.168.16.12:2379 \
--service-account-key-file ./cert/ca-key.pem \
--service-cluster-ip-range 10.254.0.0/16 \
--service-node-port-range 3000-29999 \
--target-ram-mb=1024 \
--kubelet-client-certificate ./cert/client.pem \
--kubelet-client-key ./cert/client-key.pem \
--log-dir /data/logs/kubernetes/kube-apiserver \
--tls-cert-file ./cert/apiserver.pem \
--tls-private-key-file ./cert/apiserver-key.pem \
--v 2
conf]# cd /opt/kubernetes/server/bin/
Make it executable
[root@hdss7-21 bin]# chmod +x kube-apiserver.sh
Run it in the background
[root@hdss7-21 bin]# vi /etc/supervisord.d/kube-apiserver.ini
[program:kube-apiserver-16-11]
command=/opt/kubernetes/server/bin/kube-apiserver.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stdout.log ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
bin]# mkdir -p /data/logs/kubernetes/kube-apiserver
bin]# supervisorctl update
bin]# supervisorctl status
etcd-server-16-11 RUNNING pid 2612, uptime 0:09:31
kube-apiserver-16-11 RUNNING pid 2594, uptime 0:09:32
Reverse-proxy the apiserver
On k162
nginx configuration
Proxy server 192.168.16.11:6443 and server 192.168.16.12:6443 to port 7443 on this host
~]# vi /etc/nginx/nginx.conf -- append at the bottom
stream {
upstream kube-apiserver {
server 192.168.16.11:6443 max_fails=3 fail_timeout=30s;
server 192.168.16.12:6443 max_fails=3 fail_timeout=30s;
}
server {
listen 7443;
proxy_connect_timeout 2s;
proxy_timeout 900s;
proxy_pass kube-apiserver;
}
}
Check the configuration
~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
~]# systemctl start nginx
~]# systemctl enable nginx
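To verify the layer-4 proxy actually reaches the apiservers, a quick probe from k162 (a sketch; depending on anonymous-access defaults you will see either "ok" or a 401/403 JSON body, and either response proves the path works):
curl -k https://127.0.0.1:7443/healthz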
Install and deploy the control-plane controller/scheduler services: Controller Manager
On k1611 and k1612
The Controller Manager is the cluster's internal management and control center. It manages Nodes, Pod replicas, service Endpoints, Namespaces, ServiceAccounts and ResourceQuotas. When a Node goes down unexpectedly, the Controller Manager notices and runs the automated repair flow, keeping the cluster in its desired state.
~]# vi /opt/kubernetes/server/bin/kube-controller-manager.sh
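The original does not show the script body. A sketch that stays consistent with the flags, certificate paths and log directories used by the other components in this guide; treat the exact flag values as assumptions to review before use:
#!/bin/sh
./kube-controller-manager \
--cluster-cidr 172.7.0.0/16 \
--leader-elect true \
--log-dir /data/logs/kubernetes/kube-controller-manager \
--master http://127.0.0.1:8080 \
--service-account-private-key-file ./cert/ca-key.pem \
--service-cluster-ip-range 10.254.0.0/16 \
--root-ca-file ./cert/ca.pem \
--v 2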
~]# mkdir -p /data/logs/kubernetes/kube-controller-manager
~]# chmod +x /opt/kubernetes/server/bin/kube-controller-manager.sh
~]# vi /etc/supervisord.d/kube-conntroller-manager.ini
[program:kube-controller-manager16-11]
command=/opt/kubernetes/server/bin/kube-controller-manager.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-controller-manager/controller.stdout.log ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
Add it to supervisord
bin]# supervisorctl update
bin]# supervisorctl status
etcd-server-16-11 RUNNING pid 2612, uptime 0:42:25
kube-apiserver-16-11 RUNNING pid 2594, uptime 0:42:26
kube-controller-manager16-11 RUNNING pid 2674, uptime 0:01:10
Deploy kube-scheduler
On k1611 and k1612
The scheduler's job is to find the most suitable node for newly created pods and bind the pods to that node.
supervisord.d]# vi /opt/kubernetes/server/bin/kube-scheduler.sh
#!/bin/sh
./kube-scheduler \
--leader-elect \
--log-dir /data/logs/kubernetes/kube-scheduler \
--master http://127.0.0.1:8080 \
--v 2
bin]# chmod +x /opt/kubernetes/server/bin/kube-scheduler.sh
bin]# mkdir -p /data/logs/kubernetes/kube-scheduler
Add it to supervisord
vi /etc/supervisord.d/kube-scheduler.ini
[program:kube-scheduler-1-12]
command=/opt/kubernetes/server/bin/kube-scheduler.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stdout.log ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
Check that it started
bin]# supervisorctl update
kube-scheduler-1-11: added process group
[root@k1611 bin]# supervisorctl status
etcd-server-16-11 RUNNING pid 2612, uptime 0:48:07
kube-apiserver-16-11 RUNNING pid 2594, uptime 0:48:08
kube-controller-manager16-11 RUNNING pid 2674, uptime 0:06:52
kube-scheduler-1-11 RUNNING pid 2695, uptime 0:01:55
Put kubectl on the PATH
bin]# ln -s /opt/kubernetes/server/bin/kubectl /usr/bin/kubectl
bin]# which kubectl
/usr/bin/kubectl
Verify the cluster
bin]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-2 Healthy {"health": "true"}
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
Deploy kubelet
On k162, create the certificate
etc]# cd /opt/certs/
certs]# vi kubelet-csr.json
{
"CN": "k8s-kubelet",
"hosts": [
"127.0.0.1",
"192.168.16.2",
"192.168.16.12",
"192.168.16.11"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "fujian",
"L": "xiamen",
"O": "od",
"OU": "ops"
}
]
}
certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet
On k1611 and k1612
~]# cd /opt/kubernetes/server/bin/cert/
cert]# scp k162:/opt/certs/kubelet.pem .
cert]# scp k162:/opt/certs/kubelet-key.pem .
cert]# cd /opt/kubernetes/server/bin/conf
set-context -- do this only once; copy the resulting kubelet.kubeconfig to the other nodes
Note: run this in the conf directory
Fix the IP in advance: port 7443 on the master proxy host (k162)
conf]# kubectl config set-cluster myk8s \
--certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
--embed-certs=true \
--server=https://192.168.16.2:7443 \
--kubeconfig=kubelet.kubeconfig
conf]# kubectl config set-credentials k8s-node \
--client-certificate=/opt/kubernetes/server/bin/cert/client.pem \
--client-key=/opt/kubernetes/server/bin/cert/client-key.pem \
--embed-certs=true \
--kubeconfig=kubelet.kubeconfig
conf]# kubectl config set-context myk8s-context \
--cluster=myk8s \
--user=k8s-node \
--kubeconfig=kubelet.kubeconfig
conf]# kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig
conf]# vi k8s-node.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: k8s-node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: k8s-node
conf]# kubectl create -f k8s-node.yaml
clusterrolebinding.rbac.authorization.k8s.io/k8s-node created
This grants the k8s-node user the permissions of a worker node
conf]# kubectl get clusterrolebinding k8s-node -o yaml
On k1612: copy the file from k1611
[root@k1612 cert]# cd /opt/kubernetes/server/bin/conf
[root@k1612 conf]# scp k1611:/opt/kubernetes/server/bin/conf/kubelet.kubeconfig .
On k162, prepare the pause base image
~]# docker pull kubernetes/pause
~]# docker tag f9d5de079539 harbor.od.com/public/pause:latest
~]# docker push harbor.od.com/public/pause:latest
Run on k1611 and k1612
Write the startup script -- # remember to change the hostname per node
conf]# vi /opt/kubernetes/server/bin/kubelet.sh
#!/bin/sh
./kubelet \
--anonymous-auth=false \
--cgroup-driver systemd \
--cluster-dns 10.254.0.2 \
--cluster-domain cluster.local \
--runtime-cgroups=/systemd/system.slice \
--kubelet-cgroups=/systemd/system.slice \
--fail-swap-on="false" \
--client-ca-file ./cert/ca.pem \
--tls-cert-file ./cert/kubelet.pem \
--tls-private-key-file ./cert/kubelet-key.pem \
--hostname-override k1611.host.com \
--image-gc-high-threshold 20 \
--image-gc-low-threshold 10 \
--kubeconfig ./conf/kubelet.kubeconfig \
--log-dir /data/logs/kubernetes/kube-kubelet \
--pod-infra-container-image harbor.od.com/public/pause:latest \
--root-dir /data/kubelet
conf]# mkdir -p /data/logs/kubernetes/kube-kubelet /data/kubelet
conf]# chmod +x /opt/kubernetes/server/bin/kubelet.sh
Make sure docker is running
sudo service docker start
conf]# vi /etc/supervisord.d/kube-kubelet.ini
[program:kube-kubelet-16-11]
command=/opt/kubernetes/server/bin/kubelet.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
conf]# supervisorctl update
conf]# supervisorctl status
bin]# supervisorctl status
etcd-server-16-12 RUNNING pid 2245, uptime 0:10:54
kube-apiserver-16-12 RUNNING pid 2244, uptime 0:10:54
kube-controller-manager16-12 RUNNING pid 2243, uptime 0:10:54
kube-kubelet-16-12 RUNNING pid 2558, uptime 0:00:55
kube-scheduler-1-12 RUNNING pid 2242, uptime 0:10:54
Check that both node servers have registered
[root@k1611 /]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k1611.host.com Ready 25s v1.15.2
k1612.host.com Ready 14m v1.15.2
Run on either node
ROLES: add labels to set the node roles; both labels can be added to the same node (the choice is arbitrary)
cert]# kubectl label node k1611.host.com node-role.kubernetes.io/master=
cert]# kubectl label node k1611.host.com node-role.kubernetes.io/node=
cert]# kubectl label node k1612.host.com node-role.kubernetes.io/master=
cert]# kubectl label node k1612.host.com node-role.kubernetes.io/node=
Deploy kube-proxy
kube-proxy is a core Kubernetes component that runs on every node. It implements the communication and load-balancing mechanism behind Kubernetes Services: it watches Service information from the apiserver and creates the corresponding proxy rules, routing and forwarding requests from Services to Pods, which builds the cluster-level virtual forwarding network.
Run on k162
certs]# vi /opt/certs/kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
]
}
certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json |cfssl-json -bare kube-proxy-client
Run on k1611 and k1612
Distribute the certificates
~]# cd /opt/kubernetes/server/bin/cert/
[root@k1611 cert]# scp k162:/opt/certs/kube-proxy-client-key.pem .
[root@k1611 cert]# scp k162:/opt/certs/kube-proxy-client.pem .
Run on k1611
Create the kubeconfig in the conf directory; copy and run the commands below
conf]# kubectl config set-cluster myk8s \
--certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
--embed-certs=true \
--server=https://192.168.16.2:7443 \
--kubeconfig=kube-proxy.kubeconfig
conf]# kubectl config set-credentials kube-proxy \
--client-certificate=/opt/kubernetes/server/bin/cert/kube-proxy-client.pem \
--client-key=/opt/kubernetes/server/bin/cert/kube-proxy-client-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
conf]# kubectl config set-context myk8s-context \
--cluster=myk8s \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
conf]# kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig
Run on k1612
After the first node is done, copy the generated kubeconfig to each of the other nodes
cert]# cd /opt/kubernetes/server/bin/conf
conf]# scp k1611:/opt/kubernetes/server/bin/conf/kube-proxy.kubeconfig .
Run on k1611 and k1612
[root@hdss7-21 conf]# vi /root/ipvs.sh
#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir|grep -o "^[^.]*")
do
/sbin/modinfo -F filename $i &>/dev/null
if [ $? -eq 0 ];then
/sbin/modprobe $i
fi
done
conf]# chmod +x /root/ipvs.sh
conf]# cd /root/
Run the script
conf]# ./ipvs.sh
Check that the kernel has loaded the ipvs modules
[root@hdss7-21 conf]# lsmod | grep ip_vs
Create the startup script (change hostname-override to this node's hostname)
~]# vi /opt/kubernetes/server/bin/kube-proxy.sh
#!/bin/sh
./kube-proxy \
--cluster-cidr 172.7.0.0/16 \
--hostname-override k1611.host.com \
--proxy-mode=ipvs \
--ipvs-scheduler=nq \
--kubeconfig ./conf/kube-proxy.kubeconfig
Make it executable
[root@k1612 ~]# chmod +x /opt/kubernetes/server/bin/kube-proxy.sh
Create the log directory
[root@k1612 ~]# mkdir -p /data/logs/kubernetes/kube-proxy
Create the supervisord unit
[root@k1611 ~]# vi /etc/supervisord.d/kube-proxy.ini
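The ini body is not shown in the original; it follows the same pattern as the other supervisord units in this guide (change the program name per node, e.g. kube-proxy-16-11 / kube-proxy-16-12):
[program:kube-proxy-16-11]
command=/opt/kubernetes/server/bin/kube-proxy.sh
numprocs=1
directory=/opt/kubernetes/server/bin
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false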
[root@k1611 ~]# supervisorctl update
Check that ipvs is in effect
~]# yum install -y ipvsadm # install only; there is no service to start
~]# ipvsadm -Ln
Check the virtual server bindings
kube-proxy is deployed successfully when you see:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.254.0.1:443 nq
-> 192.168.16.11:6443 Masq 1 0 0
-> 192.168.16.12:6443 Masq 1 0 0
Verify the cluster
On any worker node, create a resource manifest
Here we use the k1611.host.com host
[root@k1611 ~]# vi /root/nginx-ds.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: nginx-ds
spec:
template:
metadata:
labels:
app: nginx-ds
spec:
containers:
- name: my-nginx
image: harbor.od.com/public/nginx:v1.7.9
ports:
- containerPort: 80
Create the nginx DaemonSet
~]# kubectl create -f nginx-ds.yaml
daemonset.extensions/nginx-ds created
After about a minute, check whether the pods started successfully
~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ds-2jfv9 1/1 Running 0 54s 172.7.11.2 k1611.host.com <none> <none>
nginx-ds-784hz 1/1 Running 0 54s 172.7.12.2 k1612.host.com <none> <none>
Access nginx
~]# curl 172.7.11.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
Install flannel (cross-node pod routing)
It adds static routes so pods on other nodes can be reached
Run on k1611 and k1612
flannel download address (see the wget below)
Download into /opt/src
~]# cd /opt/src/
src]# wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
src]# mkdir /opt/flannel-v0.11.0
src]# tar xf flannel-v0.11.0-linux-amd64.tar.gz -C /opt/flannel-v0.11.0/
src]# ln -s /opt/flannel-v0.11.0/ /opt/flannel
src]# cd ..
opt]# cd flannel
flannel]# mkdir cert
Fetch the client certificates
flannel]# cd cert/
cert]# scp k162:/opt/certs/ca.pem .
cert]# scp k162:/opt/certs/client.pem .
cert]# scp k162:/opt/certs/client-key.pem .
cert]# cd ..
flannel]# vi subnet.env
FLANNEL_NETWORK=172.7.0.0/16 # the pod network
FLANNEL_SUBNET=172.7.11.1/24 # the pod subnet on this host
FLANNEL_MTU=1500
FLANNEL_IPMASQ=false
flannel]# vi flanneld.sh
#!/bin/sh
./flanneld \
--public-ip=192.168.16.11 \
--etcd-endpoints=https://192.168.16.2:2379,https://192.168.16.11:2379,https://192.168.16.12:2379 \
--etcd-keyfile=./cert/client-key.pem \
--etcd-certfile=./cert/client.pem \
--etcd-cafile=./cert/ca.pem \
--iface=ens192 \
--subnet-file=./subnet.env \
--healthz-port=2401
Run on one of the node hosts
Write the host-gw model into etcd
~]# cd /opt/etcd
With flannel's host-gw model, all node IPs must sit behind the same physical gateway device
etcd]# ./etcdctl set /coreos.com/network/config '{"Network": "172.7.0.0/16", "Backend": {"Type": "host-gw"}}'
{"Network": "172.7.0.0/16", "Backend": {"Type": "host-gw"}}
./etcdctl get /coreos.com/network/config
Run on k1611 and k1612
Check the etcd cluster
[root@hdss7-14 etcd]# ./etcdctl member list
a037fc7d1349d7f4: name=etcd-server-7-14 peerURLs=https://192.168.16.14:2380 clientURLs=http://127.0.0.1:2379,https://192.168.16.14:2379 isLeader=false
e3585dd915910937: name=etcd-server-7-13 peerURLs=https://192.168.16.13:2380 clientURLs=http://127.0.0.1:2379,https://192.168.16.13:2379 isLeader=true
fbdbdeb3498ee39d: name=etcd-server-7-12 peerURLs=https://192.168.16.12:2380 clientURLs=http://127.0.0.1:2379,https://192.168.16.12:2379 isLeader=false
Check the flanneld network config
etcd]# ./etcdctl get /coreos.com/network/config
{"Network": "172.7.0.0/16", "Backend": {"Type": "host-gw"}}
flannel]# vi /etc/supervisord.d/flannel.ini
[program:flanneld-16-11]
command=/opt/flannel/flanneld.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/flannel ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/flanneld/flanneld.stdout.log ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
Create the log directory and make the script executable
flannel]# mkdir -p /data/logs/flanneld
flannel]# chmod +x flanneld.sh
Start it
flannel]# supervisorctl update
flanneld-7-11: added process group
flannel]# supervisorctl status
flanneld-7-11 RUNNING pid 9463, uptime 0:00:54
kube-kubelet-7-11 RUNNING pid 6683, uptime 18:24:51
kube-proxy-7-11 RUNNING pid 32574, uptime 17:50:07
How flanneld works
flannel's principle: add a static route on each host pointing at the pod subnets of the other hosts.
With flannel's host-gw model, all node IPs must sit behind the same physical gateway device.
Create the static routes
On k162
route add -net 172.7.11.0/24 gw 192.168.16.11
route add -net 172.7.12.0/24 gw 192.168.16.12
On k1611
route add -net 172.7.12.0/24 gw 192.168.16.12
On k1612
route add -net 172.7.11.0/24 gw 192.168.16.11
The containers can now be pinged.
From any of the servers, try pinging the nginx pod at 172.7.11.2.
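Routes added with route add do not survive a reboot. One way to persist them is a per-interface static-routes file (a sketch for k162, assuming the interface is ens192; adjust per host):
# /etc/sysconfig/network-scripts/route-ens192
172.7.11.0/24 via 192.168.16.11 dev ens192
172.7.12.0/24 via 192.168.16.12 dev ens192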
Test access from a workstation
When DNS was configured earlier, *.od.com was set to resolve to 192.168.16.2
So we can ping:
conf.d]# ping nginxt.od.com
PING nginxt.od.com (192.168.16.2) 56(84) bytes of data.
64 bytes from k162.host.com (192.168.16.2): icmp_seq=1 ttl=64 time=0.060 ms
64 bytes from k162.host.com (192.168.16.2): icmp_seq=2 ttl=64 time=0.059 ms
Create an nginx reverse proxy that forwards nginxt.od.com to the container IP 172.7.11.2
On k162
]# vi /etc/nginx/conf.d/nginxt.od.com.conf
server {
listen 80;
server_name nginxt.od.com;
client_max_body_size 1000m;
location / {
proxy_pass http://172.7.11.2;
}
}
Restart the nginx server
systemctl restart nginx
Try to access it
[root@k162 conf.d]# curl nginxt.od.com
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
coredns
1. Put simply, service discovery is the process by which services (applications) locate one another.
2. Service discovery is not unique to the cloud era; traditional monolithic architectures used it too. It matters even more when:
services (applications) are highly dynamic
services (applications) are released and updated frequently
services (applications) auto-scale
3. In a K8S cluster, pod IPs change constantly. How do we stay stable in the face of that?
The Service resource is abstracted out; a label selector associates it with a group of pods.
The cluster network is abstracted out; a relatively fixed "cluster IP" gives the service a stable access point.
4. How do we automatically associate a Service's "name" with its "cluster IP", so that services are discovered automatically by the cluster?
The traditional DNS model: hdss7-14.host.com -> 192.168.16.14
Can we build the same model inside k8s: nginx-ds -> 10.254.0.5
5. DNS is how K8S does service discovery.
On the k162 host
Pull and push the coredns image
]# docker pull coredns/coredns:1.6.1
]# docker images|grep coredns
]# docker tag c0f6e815079e harbor.od.com/public/coredns:v1.6.1
]# docker push harbor.od.com/public/coredns:v1.6.1
Create an HTTP service for downloading the yaml manifests
[root@k162 harbor]# mkdir /data/k8s-yaml
[root@k162 harbor]# vi /etc/nginx/conf.d/k8s-yaml.od.com.conf
server {
listen 80;
server_name k8s-yaml.od.com;
location / {
autoindex on;
default_type text/plain;
root /data/k8s-yaml;
}
}
[root@k162 harbor]# nginx -s reload
Test access
http://k8s-yaml.od.com/
Create the resource manifests
harbor]# cd /data/k8s-yaml/
k8s-yaml]# mkdir coredns
k8s-yaml]# cd coredns/
coredns]# vi rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: coredns
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: Reconcile
name: system:coredns
rules:
- apiGroups:
- ""
resources:
- endpoints
- services
- pods
- namespaces
verbs:
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: EnsureExists
name: system:coredns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:coredns
subjects:
- kind: ServiceAccount
name: coredns
namespace: kube-system
coredns]# vi cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
data:
Corefile: |
.:53 {
errors
log
health
ready
kubernetes cluster.local 10.254.0.0/16 # the Service network
forward . 192.168.16.2 # the host-level DNS server (the bind server on k162)
cache 30
loop
reload
loadbalance
}
coredns]# vi dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: coredns
namespace: kube-system
labels:
k8s-app: coredns
kubernetes.io/name: "CoreDNS"
spec:
replicas: 1
selector:
matchLabels:
k8s-app: coredns
template:
metadata:
labels:
k8s-app: coredns
spec:
priorityClassName: system-cluster-critical
serviceAccountName: coredns
containers:
- name: coredns
image: harbor.od.com/public/coredns:v1.6.1
args:
- -conf
- /etc/coredns/Corefile
volumeMounts:
- name: config-volume
mountPath: /etc/coredns
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- containerPort: 9153
name: metrics
protocol: TCP
livenessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
dnsPolicy: Default
volumes:
- name: config-volume
configMap:
name: coredns
items:
- key: Corefile
path: Corefile
coredns]# vi svc.yaml
apiVersion: v1
kind: Service
metadata:
name: coredns
namespace: kube-system
labels:
k8s-app: coredns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
spec:
selector:
k8s-app: coredns
clusterIP: 10.254.0.2 # the DNS service's IP
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
- name: metrics
port: 9153
protocol: TCP
Official coredns resource manifest (for reference):
https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/coredns/coredns.yaml.base
Run on k1611
~]# kubectl apply -f http://k8s-yaml.od.com/coredns/rbac.yaml
~]# kubectl apply -f http://k8s-yaml.od.com/coredns/cm.yaml
~]# kubectl apply -f http://k8s-yaml.od.com/coredns/dp.yaml
~]# kubectl apply -f http://k8s-yaml.od.com/coredns/svc.yaml
Check that coredns is up
~]# kubectl get svc -n kube-system
~]# kubectl get all -n kube-system
Verification
bin]# cat kubelet.sh # the DNS IP was defined in here earlier
If dig is missing: yum -y install bind-utils
Test DNS resolution
~]# dig -t A k162.host.com @10.254.0.2 +short
192.168.16.2
Test coredns
[root@k1611 bin]# kubectl create deployment nginx-dp --image=harbor.od.com/public/nginx:v1.7.9 -n kube-public
deployment.apps/nginx-dp created
[root@k1611 bin]# kubectl get pod -n kube-public
NAME READY STATUS RESTARTS AGE
nginx-dp-5dfc689474-ttbhp 1/1 Running 0 7s
[root@k1611 bin]# kubectl expose deployment nginx-dp --port=80 -n kube-public
service/nginx-dp exposed
[root@k1611 bin]# dig -t A nginx-dp.kube-public.svc.cluster.local. @10.254.0.2 +short
10.254.102.208
Install the dashboard
On k162
Prepare the dashboard image
~]# docker pull k8scn/kubernetes-dashboard-amd64:v1.8.3
~]# docker images | grep dashboard
[root@k162 ~]# docker tag fcac9aa03fd6 harbor.od.com/public/dashboard:v1.8.3
[root@k162 ~]# docker push harbor.od.com/public/dashboard:v1.8.3
Prepare the resource manifests
Go into the directory served by k8s-yaml
[root@k162 /]# cd data/
[root@k162 data]# cd k8s-yaml/
[root@k162 k8s-yaml]# mkdir dashboard
[root@k162 k8s-yaml]# cd dashboard/
[root@k162 dashboard]#
Prepare three resource manifests:
dp.yaml rbac.yaml svc.yaml
dashboard]# vi rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
addonmanager.kubernetes.io/mode: Reconcile
name: kubernetes-dashboard-admin
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard-admin
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
addonmanager.kubernetes.io/mode: Reconcile
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard-admin
namespace: kube-system
[root@k162 dashboard]# vi dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kubernetes-dashboard
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
spec:
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
priorityClassName: system-cluster-critical
containers:
- name: kubernetes-dashboard
image: harbor.od.com/public/dashboard:v1.8.3
resources:
limits:
cpu: 100m
memory: 300Mi
requests:
cpu: 50m
memory: 100Mi
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
volumeMounts:
- name: tmp-volume
mountPath: /tmp
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
volumes:
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard-admin
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
[root@k162 dashboard]# vi svc.yaml
apiVersion: v1
kind: Service
metadata:
name: kubernetes-dashboard
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
spec:
selector:
k8s-app: kubernetes-dashboard
ports:
- port: 443
targetPort: 8443
Apply the resource manifests
Run on the k1611 or k1612 node
~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/rbac.yaml
~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/dp.yaml
~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/svc.yaml
Check that everything installed and started
~]# kubectl get pods -n kube-system
Check the Service IP
~]# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
coredns ClusterIP 10.254.0.2 <none> 53/UDP,53/TCP,9153/TCP 23h
kubernetes-dashboard ClusterIP 10.254.29.242 <none> 443/TCP 82s
Configure the nginx reverse proxy
conf.d]# vi dashboard.od.com.conf
server {
listen 80;
server_name dashboard.od.com;
client_max_body_size 1000m;
location / {
proxy_pass http://10.254.29.242:443;
}
}
Configure a static route to the Service network on k162 (point it at either node)
route add -net 10.254.0.0/16 gw 192.168.16.11
# or: route add -net 10.254.0.0/16 gw 192.168.16.12
Get the login token
conf]# kubectl get secret -n kube-system
conf]# kubectl describe secret kubernetes-dashboard-admin-token-tm2fx -n kube-system
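If you prefer not to copy the secret name by hand, the token can be extracted in one step (a sketch; the secret name prefix matches the ServiceAccount created above):
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/kubernetes-dashboard-admin-token/{print $1}') | awk '/^token:/{print $2}'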
Open https://dashboard.od.com:443 to reach the web UI; optionally sign in with the token.