Preface



This article covers setting up a highly available (HA) environment for Kubernetes.



Keepalived + HAProxy



For physical servers or VMs, this scheme is recommended.
For Alibaba Cloud or other cloud servers, use the provider's official load-balancing product or its managed Kubernetes offering instead. Cloud networking differs from a physical environment and the platform places heavy restrictions on it; if you force Keepalived onto it, a network glitch or an instance reboot can cause split brain or other latency-related problems.



Keepalived deployment



Configure the following on all three master nodes.

On each node, review and adjust these parameters in the configuration file before starting the service (a sample BACKUP-node snippet is shown after the config below):
state
interface
priority

yum install -y keepalived
cd /etc/keepalived

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

#global_defs is the global configuration block. It is commonly used for mail notification; the mail-related lines can be removed if not needed.
#global_defs {
# notification_email {           #email recipients to notify when a failover occurs, one per line
#    sysadmin@fire.loc
#    systemd@fire.loc
#    }
#notification_email_from Alexandre.Cassen@firewall.loc #sender address
#   smtp_server localhost        #SMTP server address
#   smtp_connect_timeout 30      #SMTP connection timeout
#   router_id LVS_DEVEL          #router_id is a global identifier, a name for the device running keepalived
#}


global_defs {
   router_id LVS_DEVEL
}

#vrrp_script defines a custom check
#script can be a shell command or an executable script, typically a service health check
#interval   how often to run the check, in seconds
#weight     how much the node's priority is raised or lowered when the check succeeds or fails
#fall       how many consecutive failures before the node is considered faulty
#rise       how many consecutive successes before the node is considered healthy again


vrrp_script check_haproxy {
#    script "/script/keepalived/check_haproxy.sh"
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

#state              MASTER or BACKUP; master01 is MASTER, the other nodes are BACKUP
#interface          use the name of your network interface
#virtual_router_id  VRRP ID; listening on multicast 224.0.0.18 shows the VRRP IDs already in use, so avoid conflicting with an existing one
#virtual_router_id  VRRP ID shared by the group; pick one that does not conflict with existing IDs (visible on multicast 224.0.0.18)
#priority           node priority; keep it unique per node, higher value wins (a gap of 10 or 50 between nodes is a common convention, not a rule)
#advert_int         interval between VRRP advertisements, default 1 second
#authentication     authentication settings; auth_type PASS is plain text, set auth_pass to your own password
#virtual_ipaddress  the virtual IP (VIP)
#track_script       reference the vrrp_script defined above

vrrp_instance VI_1 {
    state MASTER
    interface eth0    
    virtual_router_id 51
    priority 250       
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 234tesdfr235v427x
    }
    virtual_ipaddress {
        192.168.1.200   
    }
    track_script {
        check_haproxy
    }
}
EOF
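
For reference, only the vrrp_instance block differs on the other two nodes. A minimal sketch for k8s-master02, assuming the same eth0 interface and a priority of 200 (any value lower than the MASTER's works; keep priorities unique):

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 234tesdfr235v427x
    }
    virtual_ipaddress {
        192.168.1.200
    }
    track_script {
        check_haproxy
    }
}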


Note:
To help prevent split brain, script can instead point to a standalone script that checks the service and stops keepalived on this node when the service cannot be recovered:

mkdir -p /script/keepalived

cat > /script/keepalived/check_haproxy.sh << 'EOF'
#!/bin/bash

# number of running haproxy processes
CK=$(ps -C haproxy --no-header | wc -l)

if [ "$CK" -eq 0 ]; then
    # haproxy is down: try to restart it, then check again
    systemctl start haproxy.service
    sleep 3
    if [ "$(ps -C haproxy --no-header | wc -l)" -eq 0 ]; then
        # restart failed: stop keepalived so the VIP can fail over to another node
        systemctl stop keepalived.service
    fi
fi
EOF

chmod +x /script/keepalived/check_haproxy.sh


<2> Start and verify the service

systemctl enable keepalived
systemctl start keepalived
systemctl status keepalived
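
To confirm the VIP is actually bound on the MASTER node, a quick check (assuming the eth0 interface and the 192.168.1.200 VIP from the config above):

ip addr show eth0 | grep 192.168.1.200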



HAProxy deployment



HAProxy provides a reverse proxy in front of the apiservers, forwarding all requests round-robin to every master node.

Configure the following on all three master nodes.



yum install -y haproxy
cd /etc/haproxy
mv haproxy.cfg haproxy.cfg_back


cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode                 tcp
    bind                 *:16443
    option               tcplog
    default_backend      kubernetes-apiserver

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server  k8s-master01 192.168.1.1:6443 check   #change to your hostname and IP
    server  k8s-master02 192.168.1.2:6443 check   #change to your hostname and IP
    server  k8s-master03 192.168.1.3:6443 check   #change to your hostname and IP

#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind                 *:1080
    stats auth           admin:awesomePassword
    stats refresh        5s
    stats realm          HAProxy\ Statistics
    stats uri            /admin?stats
EOF
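
Before starting the service, it can be worth validating the file; HAProxy's check mode reports syntax errors without actually starting the proxy:

haproxy -c -f /etc/haproxy/haproxy.cfg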


<2> Start and verify the service

systemctl enable haproxy.service 
systemctl start haproxy.service 
systemctl status haproxy.service 
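
A quick way to confirm HAProxy is listening on the frontend port configured above (16443) and that the stats page on port 1080 responds:

ss -lntp | grep haproxy
curl -u admin:awesomePassword 'http://127.0.0.1:1080/admin?stats'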


<3> Once everything is configured, stop the haproxy service on the MASTER node and check whether the VIP fails over and whether keepalived on that node shuts down.
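
A minimal sketch of that failover test, assuming eth0 and the 192.168.1.200 VIP from the Keepalived config above:

# on the node currently holding the VIP
systemctl stop haproxy.service
systemctl status keepalived                # check whether the check script stopped it
ip addr show eth0                          # the VIP should no longer be listed here

# on the other master nodes
ip addr show eth0 | grep 192.168.1.200     # the VIP should now appear on one of them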




SLB + HAProxy



SLB configuration



Classic Load Balancer (CLB) instances created after March 23, 2021 no longer support attaching classic-network ECS instances.

SLB exposes a virtual IP (VIP) and turns multiple cloud servers in the same region into a high-performance, highly available application service pool. It distributes client requests across the pool according to the forwarding rules you configure, health-checks the ECS instances in the pool, and automatically isolates unhealthy ones. This removes the ECS single point of failure and raises the overall service capacity of the application.



<1> Create a CLB instance




<2> Choose a billing method and zone

The zone must be in the same region as your servers.



<3> An internal (private-network) SLB is sufficient

Internal SLB instances are now billed.




<4> Configure the SLB



<5> Configure the listener port

Advanced settings can be adjusted as needed; enabling session persistence is recommended (kubectl relies on it).




<6> Select the backend server group

Check the servers you want to add to the backend.




<7> Configure ports and weights




<8> Configure health checks




<9> Finish the configuration

Make a note of the IP assigned to this SLB; it is used later.



hosts configuration



<1> Log in to a host, add the SLB to /etc/hosts, and copy the file to every node (a sketch of the copy step follows below)

cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.1 k8s-master01
192.168.1.2 k8s-master02
192.168.1.3 k8s-master03
192.168.1.4 k8s-node01
192.168.1.5 k8s-node02
192.168.1.6 harbor.kkcai.vip
<IP assigned to your SLB> cluster.kube.com

cluster.kube.com is the domain name mapped to the SLB; kubeadm will use it later, and any name you like works.
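
A minimal sketch of distributing the updated hosts file, assuming SSH access from this host and the hostnames listed above:

for h in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
    scp /etc/hosts $h:/etc/hosts
done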



HAProxy deployment



HAProxy provides a reverse proxy in front of the apiservers, forwarding all requests round-robin to every master node.
Configure the following on all three master nodes.



yum install -y haproxy
cd /etc/haproxy
mv haproxy.cfg haproxy.cfg_back


cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode                 tcp
    bind                 *:16443
    option               tcplog
    default_backend      kubernetes-apiserver

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server  k8s-master01 192.168.1.1:6443 check   #change to your hostname and IP
    server  k8s-master02 192.168.1.2:6443 check   #change to your hostname and IP
    server  k8s-master03 192.168.1.3:6443 check   #change to your hostname and IP

#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind                 *:1080
    stats auth           admin:awesomePassword
    stats refresh        5s
    stats realm          HAProxy\ Statistics
    stats uri            /admin?stats
EOF


<2> Start and verify the service

systemctl enable haproxy.service 
systemctl start haproxy.service 
systemctl status haproxy.service 


<3> On each k8s node, test connectivity by telnetting to the SLB listener port.
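
For example, using the hostname added to /etc/hosts above (the port must be whichever listener port you configured on the SLB in step <5>; 6443 below is only an assumption):

telnet cluster.kube.com 6443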
