Configuring Kubernetes Master High Availability with HAProxy & Keepalived
Deployment Environment
Two hosts with Linux installed, deployed as a standalone external load-balancing cluster. Alternatively, keepalived and haproxy can run as static Pods, mounting their configuration files into the containers as volumes, so that a container performs the load balancing. This example uses dedicated hosts.
Prerequisites
- Two hosts with a Linux operating system already installed; this example uses CentOS 7.
- yum repositories configured so the load-balancing packages can be installed directly.
- Both hosts on the same network segment with no firewall in between, so VRRP multicast traffic can be sent and received.
- Time synchronization configured.
- Firewall disabled.
- SELinux disabled.
- Hostname resolution between the two hosts via the hosts file (a prep sketch follows this list).
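A minimal prep sketch for CentOS 7 covering the firewall, SELinux, time-sync, and name-resolution items above. The load balancer hostnames and IPs (loadbalancer-1/172.16.133.60, loadbalancer-2/172.16.133.61) are illustrative placeholders, not values taken from this deployment:

# run on both load balancer hosts
systemctl disable --now firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
systemctl enable --now chronyd
cat >> /etc/hosts <<EOF
172.16.133.60 loadbalancer-1
172.16.133.61 loadbalancer-2
EOF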
Role Overview
In this example, two dedicated Linux hosts act as load balancers, each running keepalived and haproxy (a sketch of the topology follows). Keepalived provides active/standby failover between the two load balancers: loadbalancer-1 carries the traffic while loadbalancer-2 stands by idle. HAProxy distributes client requests across the three masters, using round-robin scheduling in this example.
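A rough text sketch of the topology described above (the original diagram is not reproduced; load balancer placement is as described, master IPs are from this deployment):

                    clients
                       |
           VIP 172.16.133.67 (keepalived)
              /                  \
    loadbalancer-1          loadbalancer-2
    (haproxy, MASTER)       (haproxy, BACKUP)
              \                  /
            round robin to port 6443
           /          |          \
     master-1      master-2      master-3
  172.16.133.56 172.16.133.57 172.16.133.58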
This article focuses on the configuration of keepalived and haproxy.
Deployment Steps
1. Install the Software

Install both packages on each load balancer host:

yum install keepalived haproxy -y
2. Configure Keepalived

Back up the default configuration first:

cp /etc/keepalived/keepalived.conf{,.backup}
2.1 Configure keepalived.conf on the MASTER

Edit /etc/keepalived/keepalived.conf on loadbalancer-1:
! Configuration File for keepalived
global_defs {
    router_id Master                  # unique identifier for this node
}
vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 3                        # run the check every 3 seconds
    weight -2                         # subtract 2 from priority while the check fails
    fall 10                           # 10 consecutive failures mark the node unhealthy
    rise 2                            # 2 consecutive successes mark it healthy again
}
vrrp_instance VI_1 {
    state MASTER
    interface ens192                  # NIC that carries the VIP; adjust to your host
    virtual_router_id 44              # must match on MASTER and BACKUP
    priority 110                      # higher priority wins the VIP
    advert_int 3                      # VRRP advertisement interval in seconds
    # use_vmac
    authentication {
        auth_type PASS
        auth_pass PASSWORD            # replace with your own shared secret
    }
    virtual_ipaddress {
        172.16.133.67                 # the virtual IP shared by the pair
    }
    track_script {
        check_apiserver
    }
}
2.2 Configure keepalived.conf on the BACKUP

On loadbalancer-2, only router_id, state, and priority differ from the MASTER configuration:
! Configuration File for keepalived
global_defs {
    router_id Backup
}
vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 44              # same as on the MASTER
    priority 105                      # lower than the MASTER's 110
    advert_int 3
    # use_vmac
    authentication {
        auth_type PASS
        auth_pass PASSWORD            # must match the MASTER
    }
    virtual_ipaddress {
        172.16.133.67
    }
    track_script {
        check_apiserver
    }
}
2.3 Configure the API server health-check script for keepalived (the same script on both MASTER and BACKUP)
vim /etc/keepalived/check_apiserver.sh
#!/bin/bash
# Address and port of the API server endpoint to probe
APISERVER_VIP="cluster-endpoint.microservice.for-best.cn"  # must be resolvable via DNS; otherwise use an IP address or a hosts-file entry
APISERVER_DEST_PORT=6443                                   # default API server port; change it here if yours differs

errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

# Only probe the API server over HTTPS if the endpoint answers pings;
# a non-zero exit code makes keepalived apply the -2 weight penalty.
if ping -W 0.1 -c 3 -i 0.01 ${APISERVER_VIP} &> /dev/null; then
    curl --silent --max-time 2 --insecure https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/"
fi
chmod +x /etc/keepalived/check_apiserver.sh
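You can run the check by hand before relying on it; exit status 0 means the check passed:

bash /etc/keepalived/check_apiserver.sh
echo $?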
2.4 Start keepalived and verify
systemctl enable --now keepalived
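To verify, check which node holds the VIP; 172.16.133.67 should appear on loadbalancer-1's ens192 while it is healthy. As a simple failover test, stop keepalived on the MASTER and watch the VIP move to the BACKUP:

ip -4 addr show ens192 | grep 172.16.133.67
systemctl status keepalived
# on the MASTER only, to force a failover, then re-check the BACKUP:
systemctl stop keepalived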
3. Configure HAProxy
3.1 Configure the HAProxy configuration file (identical on both servers)

Edit /etc/haproxy/haproxy.cfg:
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log /dev/log local0
    log /dev/log local1 notice
    daemon

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 1
    timeout http-request    10s
    timeout queue           20s
    timeout connect         5s
    timeout client          20s
    timeout server          20s
    timeout http-keep-alive 10s
    timeout check           10s

#---------------------------------------------------------------------
# apiserver frontend which proxies to the control plane nodes
#---------------------------------------------------------------------
frontend apiserver
    bind *:6443
    mode tcp
    option tcplog
    default_backend apiserver

#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance roundrobin
    #server ${HOST1_ID} ${HOST1_ADDRESS}:${APISERVER_SRC_PORT} check
    server master-1 172.16.133.56:6443 check
    server master-2 172.16.133.57:6443 check
    server master-3 172.16.133.58:6443 check
    # [...]
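Before starting the service, it is worth validating the file; haproxy's -c flag parses the configuration and reports errors without starting the proxy:

haproxy -c -f /etc/haproxy/haproxy.cfg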
3.2 Start HAProxy and enable it at boot
systemctl start haproxy
systemctl enable haproxy
3.3 Check the startup logs
Jul 18 19:00:15 LoadBalancer-1 systemd: Started HAProxy Load Balancer.
Jul 18 19:00:15 LoadBalancer-1 haproxy[1712]: Proxy apiserver started.
Jul 18 19:00:15 LoadBalancer-1 haproxy-systemd-wrapper: [WARNING] 198/190015 (1712) : config : 'option forwardfor' ignored for frontend 'apiserver' as it requires HTTP mode.
Jul 18 19:00:15 LoadBalancer-1 haproxy-systemd-wrapper: [WARNING] 198/190015 (1712) : config : 'option forwardfor' ignored for backend 'apiserver' as it requires HTTP mode.
Jul 18 19:00:15 LoadBalancer-1 haproxy[1712]: Proxy apiserver started.
Jul 18 19:00:16 LoadBalancer-1 haproxy[1713]: Server apiserver/master-2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jul 18 19:00:16 LoadBalancer-1 haproxy[1713]: Server apiserver/master-3 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.

The 'option forwardfor' warnings are harmless: the option is inherited from the defaults section but ignored because the apiserver frontend and backend run in TCP mode. The DOWN messages mean the health checks could not reach kube-apiserver on master-2 and master-3 at that moment; they clear once the API servers are listening on port 6443.
Access Test
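A minimal smoke test through the VIP, assuming at least one control-plane node is already initialized and the cluster's default anonymous access to /version and /healthz is in place (--insecure skips certificate verification and is only for testing):

# probe kube-apiserver through the virtual IP
curl --insecure https://172.16.133.67:6443/version
# or through the load-balanced endpoint name, if it resolves to the VIP
curl --insecure https://cluster-endpoint.microservice.for-best.cn:6443/healthz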