Installing a Highly Available Kubernetes Cluster with kubeadm
I. In production, a Kubernetes cluster needs multiple masters for high availability, so this article shows how to deploy an HA cluster with kubeadm. At least three masters are recommended in production; if the packages have not been downloaded in advance, the nodes need internet access.
II. Master deployment:
1. Deploy an etcd cluster on the three master nodes
2. Run kubeadm init against a VIP
Note: this walkthrough uses physical servers. Alibaba Cloud servers do not support VIPs; there, use an SLB for the load balancing instead.
III. Environment:
All nodes run CentOS Linux release 7.5.1804 (Core). The virtual IP (VIP) is 192.168.1.100.
test-k8s-master-1 (192.168.1.10): haproxy (proxy), keepalived (HA), Kubernetes components, docker-ce, kubeadm, kubelet, kubectl, docker-distribution (used as the image registry)
test-k8s-master-2 (192.168.1.11): haproxy, keepalived, Kubernetes components, docker-ce, kubeadm, kubelet, kubectl
test-k8s-master-3 (192.168.1.12): haproxy, keepalived, Kubernetes components, docker-ce, kubeadm, kubelet, kubectl
test-k8s-node-01 (192.168.1.21): docker-ce, kubeadm, kubelet, kubectl (note: keep these three within one minor version of each other)
1. Set the hostname on each node and update the hosts file (run on all nodes):
hostnamectl set-hostname test-k8s-master-1
hostnamectl set-hostname test-k8s-master-2
hostnamectl set-hostname test-k8s-master-3
hostnamectl set-hostname test-k8s-node-01
vim /etc/hosts
192.168.1.10 test-k8s-master-1
192.168.1.11 test-k8s-master-2
192.168.1.12 test-k8s-master-3
192.168.1.21 test-k8s-node-01
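A quick sanity check before moving on (my addition, assuming the hosts file above is in place on the node you run it from):
for h in test-k8s-master-1 test-k8s-master-2 test-k8s-master-3 test-k8s-node-01; do
    ping -c 1 -W 2 "$h" >/dev/null && echo "$h OK" || echo "$h unreachable"
done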
2. Configure the yum repos, install the dependencies and components, and apply kernel tuning. I do all of this with a single script (run on all nodes):
cat init.sh
#yum repos
yum install -y yum-utils \
    device-mapper-persistent-data \
    lvm2
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
wget http://mirrors.aliyun.com/repo/epel-7.repo -O /etc/yum.repos.d/epel.repo
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
#install the Kubernetes components and Docker
yum install -y kubelet kubeadm kubectl docker-ce
systemctl restart docker && systemctl enable docker
systemctl enable kubelet
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
sed -i 's/SELINUX=permissive/SELINUX=disabled/g' /etc/selinux/config
yum install -y epel-release vim screen bash-completion mtr lrzsz wget telnet zip unzip sysstat ntpdate libcurl openssl bridge-utils nethogs dos2unix iptables-services htop nfs-utils ceph-common git mysql
service firewalld stop
systemctl disable firewalld.service
service iptables stop
systemctl disable iptables.service
service postfix stop
systemctl disable postfix.service
#add an ntpdate cron job, idempotently (only appended if not already present)
note='#Ansible: ntpdate-time'
task='*/10 * * * * /usr/sbin/ntpdate -u ntp.sjtu.edu.cn &> /dev/null'
echo "$(crontab -l)" | grep "^${note}$" &>/dev/null || echo -e "$(crontab -l)\n${note}" | crontab -
echo "$(crontab -l)" | grep "^${task}$" &>/dev/null || echo -e "$(crontab -l)\n${task}" | crontab -
echo '/etc/security/limits.conf tuning; requires a reboot to take full effect'
cp -rf /etc/security/limits.conf /etc/security/limits.conf.back
cat > /etc/security/limits.conf << EOF
* soft nofile 655350
* hard nofile 655350
* soft nproc unlimited
* hard nproc unlimited
* soft core unlimited
* hard core unlimited
root soft nofile 655350
root hard nofile 655350
root soft nproc unlimited
root hard nproc unlimited
root soft core unlimited
root hard core unlimited
EOF
echo '/etc/sysctl.conf tuning'
cp -rf /etc/sysctl.conf /etc/sysctl.conf.back
cat > /etc/sysctl.conf << EOF
vm.swappiness = 0
net.ipv4.neigh.default.gc_stale_time = 120
#see details in https://help.aliyun.com/knowledge_detail/39428.html
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_announce = 2
#see details in https://help.aliyun.com/knowledge_detail/41334.html
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_synack_retries = 2
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
kernel.sysrq = 1
kernel.pid_max=1000000
EOF
sysctl -p
Note: place this script on every node and run: sh init.sh
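Two things the script does not cover, both of which kubeadm's preflight checks complain about: swap must be disabled, and bridged traffic must be visible to iptables/ipvs. It also installs the latest kubelet/kubeadm/kubectl rather than pinning them to the v1.16.3 this article targets. A minimal sketch covering all three (run on every node; the pinned package versions are my assumption based on the kubernetesVersion used later):
#pin the components to the cluster version
yum install -y kubelet-1.16.3 kubeadm-1.16.3 kubectl-1.16.3
#kubeadm refuses to run while swap is on
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
#make bridged traffic visible to netfilter
modprobe br_netfilter
cat >> /etc/sysctl.conf << EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p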
3. Load the ipvs modules:
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
Check that the ipvs modules loaded:
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
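The modules file above is only picked up at boot on some setups; to make the loading reliably persistent you can also register the modules with systemd-modules-load (my addition, not part of the original steps):
cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
systemctl restart systemd-modules-load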
4. High availability (reverse proxy) setup. This is done on test-k8s-master-1; with enough spare servers you could also run the proxy on a dedicated machine. Note: every master node needs this setup.
Use nginx (stream upstream) or HAProxy, combined with keepalived.
1. Install nginx/haproxy and keepalived:
yum -y install nginx keepalived
systemctl start keepalived && systemctl enable keepalived
systemctl start nginx && systemctl enable nginx
2. Configure the nginx stream proxy (if you use nginx, skip the haproxy section below):
[root@test-k8s-master-1 ~]# cd /etc/nginx
[root@test-k8s-master-1 nginx]# mv nginx.conf nginx.conf.default
[root@test-k8s-master-1 nginx]# vim nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
stream {
log_format proxy '$remote_addr $remote_port - [$time_local] $status $protocol '
                 '"$upstream_addr" "$upstream_bytes_sent" "$upstream_connect_time"';
access_log /var/log/nginx/nginx-proxy.log proxy;
upstream kubernetes_lb {
server 192.168.1.10:6443 weight=5 max_fails=3 fail_timeout=30s;
server 192.168.1.11:6443 weight=5 max_fails=3 fail_timeout=30s;
server 192.168.1.12:6443 weight=5 max_fails=3 fail_timeout=30s;
}
server {
listen 7443;
proxy_connect_timeout 30s;
proxy_timeout 30s;
proxy_pass kubernetes_lb;
}
}
Check the nginx configuration syntax, then reload nginx:
[root@test-k8s-master-1 nginx]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@test-k8s-master-1 nginx]# nginx -s reload
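Note that on CentOS 7 the EPEL nginx build ships the stream module as a separate package; if nginx -t rejects the stream block, installing nginx-mod-stream should fix it (the package name is my assumption about your repo set, not something from the original walkthrough). Then confirm the listener is up:
yum -y install nginx-mod-stream    #only if nginx -t complains about "stream"
nginx -s reload
ss -lntp | grep 7443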
2 (alternative). HAProxy reverse proxy (skip this if you configured nginx above):
[root@test-k8s-master-1 nginx]# yum -y install haproxy
[root@test-k8s-master-1 nginx]# vim /etc/haproxy/haproxy.cfg
listen stats 0.0.0.0:12345
mode http
log global
maxconn 10
stats enable
stats hide-version
stats refresh 30s
stats show-node
stats auth admin:password
stats uri /stats
frontend kube-api-https
bind 0.0.0.0:7443
mode tcp
default_backend kube-api-server
backend kube-api-server
balance roundrobin
mode tcp
server kubenode1 192.168.1.10:6443 check
#server kubenode2 192.168.1.11:6443 check ##leave commented for now; uncomment and restart haproxy once the cluster is complete
#server kubenode3 192.168.1.12:6443 check ##leave commented for now; uncomment and restart haproxy once the cluster is complete
systemctl enable haproxy && systemctl start haproxy
3. keepalived configuration:
[root@test-k8s-master-1 ~]# cd /etc/keepalived
mv keepalived.conf keepalived.conf.default
vim keepalived.conf
global_defs {
notification_email {
test@gmail.com
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id LVS_1
}
vrrp_instance VI_1 {
state MASTER
interface ens192 #NIC name; change to match your interface
lvs_sync_daemon_interface ens192
virtual_router_id 88
advert_int 1
priority 110
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.1.100 # this is the virtual IP address
}
}
Restart keepalived:
systemctl restart keepalived
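The configuration above is for test-k8s-master-1. On the other two masters the instance should start as a backup with a lower priority so the VIP fails over cleanly; only these lines differ (a sketch of the delta, which the original does not spell out):
state BACKUP       #instead of MASTER
priority 100       #on test-k8s-master-2; use e.g. 90 on test-k8s-master-3 (both lower than 110)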
5. Initialize the cluster (run only on test-k8s-master-1):
kubeadm config print init-defaults > kubeadm-init.yaml
Pull the images:
kubeadm config images pull --config kubeadm-init.yaml
Edit kubeadm-init.yaml:
[root@test-k8s-master-1 ~]# vim kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.10   ##this master's own IP, where the api server listens
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: test-k8s-master-1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
controlPlaneEndpoint: "192.168.1.100:7443"   ###the VIP, the cluster's unified access address; required for an HA cluster. The port is the haproxy/nginx listen port. For a single-master cluster, omit this line and skip the nginx/haproxy+keepalived setup above.
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io   ###set this to your private registry address if you mirror the images
kind: ClusterConfiguration
kubernetesVersion: v1.16.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"   ###the pod address pool
  serviceSubnet: "10.96.0.0/16"   ###the service address pool
scheduler: {}
---
###this block switches kube-proxy to ipvs mode; the default is iptables
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
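Before running the real initialization, you can dry-check the node against this config; kubeadm exposes its preflight checks as a separate phase (using it here is my suggestion, not part of the original steps):
kubeadm init phase preflight --config=kubeadm-init.yaml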
Initialize:
[root@test-k8s-master-1 ~]# kubeadm init --config=kubeadm-init.yaml
…
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 192.168.1.100:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:274a3c078548240887903b51e75e2cc9548343e06dcd2a3ca0c3087c3fdd3175 \
    --control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.100:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:274a3c078548240887903b51e75e2cc9548343e06dcd2a3ca0c3087c3fdd3175
6. Copy the configuration to the other two masters (the script below copies the relevant certificates under /etc/kubernetes/ on test-k8s-master-1 to the other masters):
USER=root
CONTROL_PLANE_IPS="test-k8s-master-2 test-k8s-master-3"
for host in ${CONTROL_PLANE_IPS}; do
    ssh "${USER}"@$host "mkdir -p /etc/kubernetes/pki/etcd"
    scp /etc/kubernetes/pki/ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.* "${USER}"@$host:/etc/kubernetes/pki/etcd/
    scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/
done
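The loop assumes test-k8s-master-1 can ssh to the other two masters as root without a password. If that trust is not set up yet, something like the following standard sequence (my addition) gets you there:
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id root@test-k8s-master-2
ssh-copy-id root@test-k8s-master-3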
7. Join the other master nodes (run on test-k8s-master-2 and test-k8s-master-3):
kubeadm join 192.168.1.100:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:274a3c078548240887903b51e75e2cc9548343e06dcd2a3ca0c3087c3fdd3175 \
    --control-plane
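To use kubectl on the newly joined masters as well, repeat the kubeconfig setup there, exactly as on the first master:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config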
8. Join the worker node (run on test-k8s-node-01):
kubeadm join 192.168.1.100:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:274a3c078548240887903b51e75e2cc9548343e06dcd2a3ca0c3087c3fdd3175
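The bootstrap token from the init output expires after 24 hours (the ttl in kubeadm-init.yaml). If you join a node later than that, generate a fresh join command on any master:
kubeadm token create --print-join-command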
9. Install the network plugin (Flannel):
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
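Flannel's default Network in kube-flannel.yml is 10.244.0.0/16, which matches the podSubnet configured above; if you change one, change both. You can watch the flannel and coredns pods come up with:
kubectl get pods -n kube-system -o wide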
10. After all masters and the node have joined, check the node status. Output like the following means the deployment succeeded:
[root@test-k8s-master-1 ~]# kubectl get nodes | grep master
test-k8s-master-1 Ready master 233d v1.16.3
test-k8s-master-2 Ready master 233d v1.16.3
test-k8s-master-3 Ready master 233d v1.16.3
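Two finishing touches. First, go back to /etc/haproxy/haproxy.cfg, uncomment the kubenode2 and kubenode3 backend lines, and restart haproxy so the VIP actually balances across all three api servers. Second, if you want to confirm that kube-proxy really runs in ipvs mode, ipvsadm shows the virtual service table (installing ipvsadm is my addition; the cluster itself does not need it):
systemctl restart haproxy
yum -y install ipvsadm
ipvsadm -Ln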