Example: deploying a highly available k8s cluster with two masters, one node, and a VIP (virtual IP)
1. Add the VIP (virtual IP) address to /etc/hosts (on all three machines)
vim /etc/hosts
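For example, the entries might look like this (hostnames and addresses taken from the rest of this walkthrough; adjust to your own environment):
192.168.102.48 hadoop100
192.168.102.49 hadoop101
192.168.102.51 hadoop100.vip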
2. The two masters and the node each install different things
Master 1: 1) keepalived  2) haproxy  3) cluster initialization  4) Docker and the network plugin
Master 2: 1) keepalived  2) haproxy  3) join master2 to the cluster  4) Docker and the network plugin
Node: 1) join the cluster  2) Docker and the network plugin
3. Deploy keepalived on all master nodes
Install the packages keepalived depends on:
yum install -y conntrack-tools libseccomp libtool-ltdl
Install keepalived:
yum install -y keepalived
4. Add keepalived.conf on every master node
Adjust the interface name to match your machine (mine is ens32), and change the virtual IP to one that is free on your subnet. The check_haproxy script uses "killall -0 haproxy" to test whether haproxy is still running; when the check fails, the node's VRRP priority drops by 2 so the VIP can fail over to the other master.
4.1 Configuration on master1
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived

global_defs {
    router_id k8s
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 51
    priority 250
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab
    }
    virtual_ipaddress {
        192.168.102.51
    }
    track_script {
        check_haproxy
    }
}
EOF
4.2 Configuration on master2 (identical except for state BACKUP and the lower priority 200):
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived

global_defs {
    router_id k8s
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    virtual_router_id 51
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab
    }
    virtual_ipaddress {
        192.168.102.51
    }
    track_script {
        check_haproxy
    }
}
EOF
5. Start keepalived on both masters
systemctl start keepalived.service
Enable it at boot:
systemctl enable keepalived.service
Check the status and see which node holds the VIP:
systemctl status keepalived.service
ip a s ens32
On one of the masters you will see the virtual IP bound to ens32; if that master goes down, the VIP moves to the other one.
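A quick sketch of how to verify the failover (assuming master1 currently holds the VIP):
# on master1: stop keepalived to simulate a failure
systemctl stop keepalived.service
# on master2: the VIP 192.168.102.51 should now show up here
ip a s ens32
# on master1: start keepalived again; with priority 250 it preempts and reclaims the VIP
systemctl start keepalived.service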
6. Install haproxy on both master nodes
yum install -y haproxy
Modify the configuration (run on both masters; change the backend IPs to your own):
cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*    /var/log/haproxy.log
    #
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxies to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode            tcp
    bind            *:16443
    option          tcplog
    default_backend kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode    tcp
    balance roundrobin
    server  hadoop100 192.168.102.48:6443 check  # change only these IPs
    server  hadoop101 192.168.102.49:6443 check
#---------------------------------------------------------------------
# collect haproxy statistics
#---------------------------------------------------------------------
listen stats
    bind          *:1080
    stats auth    admin:awesomePassword
    stats refresh 5s
    stats realm   HAProxy\ Statistics
    stats uri     /admin?stats
EOF
Start haproxy:
systemctl start haproxy
Enable it at boot:
systemctl enable haproxy
Check its status:
systemctl status haproxy
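To confirm haproxy is actually listening on the load-balancer port and that the stats page works (the credentials come from the stats auth line above):
ss -lntp | grep 16443
curl -u admin:awesomePassword 'http://127.0.0.1:1080/admin?stats'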
7. Configure the Aliyun yum repo for Docker
cat >/etc/yum.repos.d/docker.repo<<EOF
[docker-ce-edge]
name=Docker CE Edge - \$basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/\$basearch/edge
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
EOF
8. Install Docker (on all three machines)
yum -y install docker-ce
8.1 Check the Docker version
docker --version
8.2 Start Docker
systemctl enable docker
systemctl start docker
8.3 Configure the Docker registry mirror
cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
Then restart Docker:
systemctl restart docker
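You can confirm the mirror took effect by checking the Registry Mirrors section of docker info:
docker info | grep -A 1 'Registry Mirrors'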
9. Install pinned versions of kubeadm, kubelet, and kubectl (on all three machines)
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
Enable kubelet at boot:
systemctl enable kubelet
Create the kubeadm init config file (on master1):
vi kubeadm-config.yaml
apiServer:
  certSANs:
  - hadoop100
  - hadoop101
  - hadoop100.vip
  - 192.168.102.48
  - 192.168.102.49
  - 192.168.102.51
  - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "hadoop100.vip:16443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.1.0.0/16
scheduler: {}
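Optionally, you can pre-pull the control-plane images from the Aliyun mirror before running init, which speeds up the init step and surfaces registry problems early:
kubeadm config images pull --config kubeadm-config.yaml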
Run the initialization on master1:
kubeadm init --config kubeadm-config.yaml
After the init completes successfully, run the commands below so kubectl can reach the cluster:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check the nodes:
kubectl get nodes
Check the system pods:
kubectl get pods -n kube-system
10. Join master2 to the control plane
Copy master1's certificates over to master2. Run the following four commands on master1 (substitute your own master2 IP):
ssh root@192.168.102.49 mkdir -p /etc/kubernetes/pki/etcd
scp /etc/kubernetes/admin.conf root@192.168.102.49:/etc/kubernetes
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@192.168.102.49:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* root@192.168.102.49:/etc/kubernetes/pki/etcd
If permissions block the copy, loosen them on master2 first (note that 777 is wide open; consider tightening it back once the join succeeds):
chmod 777 /etc/kubernetes
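As an alternative to copying certificates by hand, kubeadm v1.18 can distribute them itself. A sketch (the <token>, <hash>, and <key> placeholders come from your own cluster's output):
# on master1: upload the control-plane certs as an encrypted secret and print the certificate key
kubeadm init phase upload-certs --upload-certs
# on master2: join using the key printed by the previous command
kubeadm join hadoop100.vip:16443 --token <token> \
    --discovery-token-ca-cert-hash <hash> \
    --control-plane --certificate-key <key>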
Join the cluster (do not copy the command below literally; use the control-plane join command printed at the end of your own master1 init output):
kubeadm join hadoop100.vip:16443 --token vyfxn4.95q57l1dqlpxj5wb --discovery-token-ca-cert-hash sha256:dcdb1fac406137f5f5e478665e29cb7bcdcf578f14bac331d30431c3f5b5cd87 --control-plane
If the join succeeds, master2 is now a control-plane member (if you hit errors here, see https://blog.csdn.net/weixin_43205308/article/details/129740892).
Then run the same kubectl setup on master2:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
11. Join the worker node to the cluster:
kubeadm join hadoop100.vip:16443 --token vyfxn4.95q57l1dqlpxj5wb --discovery-token-ca-cert-hash sha256:dcdb1fac406137f5f5e478665e29cb7bcdcf578f14bac331d30431c3f5b5cd87
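Tokens expire after 24 hours by default; if yours has expired, generate a fresh join command on master1:
kubeadm token create --print-join-command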
12. Install the flannel network plugin (run on master1 or master2):
wget -c https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
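The flannel daemonset should start one pod per node; the namespace depends on the manifest version (older manifests use kube-system, newer ones kube-flannel), so search across all namespaces:
kubectl get pods -A -l app=flannel -o wide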
13. Check the cluster's node status; when all three nodes report Ready (see below), the cluster is working.
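Run on either master:
kubectl get nodes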
14. Create a test deployment (by this point these three commands should need no explanation):
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc
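To check the service end to end, curl the NodePort on any node's IP. A sketch using master1's address from this walkthrough (the port is assigned by Kubernetes when the service is exposed):
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl http://192.168.102.48:${NODE_PORT}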
15. Architecture: clients reach the apiserver through the VIP 192.168.102.51 on port 16443; keepalived keeps the VIP on a healthy master, and haproxy round-robins requests to the two kube-apiservers on port 6443.
16. References
https://gitee.com/moxi159753/LearningNotes/tree/master/K8S/18_Kubernetes%E6%90%AD%E5%BB%BA%E9%AB%98%E5%8F%AF%E7%94%A8%E9%9B%86%E7%BE%A4