Installing with kubeadm

Step 1: Review the documentation

Step 2: Plan the deployment environment

Step 3: Build the VM template

Step 4: k8s node configuration tuning and required settings

1. Disable the swap partition

2. Adjust kernel parameters for k8s

3. Disable the firewall and SELinux (skip if already done)

4. Configure local package repositories on the k8s nodes

# Back up the existing repo configuration

# Replace the base/epel repositories with Aliyun mirrors

# Configure the Docker CE repository

# Configure the Kubernetes repository

5. Install iptables

6. Enable NTP time synchronization

7. ipvs-related performance optimization

8. Install dependency packages on the k8s nodes

9. Install the Docker container engine on the k8s nodes

# Install Docker:

# Configure Docker registry mirrors:

# Upload the k8s image package:

# Install the k8s packages (version 1.20.6)

Step 5: Clone VMs from the template

Step 6: Deploy the nginx cluster on the front-end nodes

Step 7: Deploy keepalived on the front-end nodes

Step 8: Bring up the k8s front-end services and test master/backup failover

Step 9: Initialize the cluster with kubeadm


Installing with kubeadm

Step 1: Review the documentation

Browser: https://kubernetes.io/

Follow the matching documentation to deploy; either virtual machines or physical machines will work.

Step 2: Plan the deployment environment

Front-end cluster (front-endpoint-cluster): two nodes

        front-endpoint1: 192.168.194.21 (master)

        front-endpoint2: 192.168.194.22 (backup)

        Front-end virtual IP (virtual ip): 192.168.194.20

Master cluster (master cluster): three nodes

        master node1: 192.168.194.31

        master node2: 192.168.194.32

        master node3: 192.168.194.33

Worker nodes: four nodes

        worker node1: 192.168.194.41

        worker node2: 192.168.194.42

        worker node3: 192.168.194.43

        worker node4: 192.168.194.44

Step 3: Build the VM template

To save deployment time, clone a template VM from CentOS 7; name it whatever you like.

Adjust the VM settings: 2 CPU cores, 4 GB of memory, NAT networking; power it on once configured.

Check the network interfaces: ip add show

Edit the NIC configuration file: vi /etc/sysconfig/network-scripts/ifcfg-ens33

Modify the following parameters. DNS can simply point to the local NAT gateway; mine is 192.168.194.2 (on the Windows host: Win+R => cmd => ipconfig /all to check). After editing, restart networking with systemctl restart network.

Also add: DNS1=192.168.194.2
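For reference, a minimal static-IP ifcfg-ens33 might look like the sketch below; the address values follow this guide's 192.168.194.0/24 example network, and the template IP shown is just a placeholder:

# /etc/sysconfig/network-scripts/ifcfg-ens33 (example values)
TYPE=Ethernet
BOOTPROTO=static          # static address instead of DHCP
NAME=ens33
DEVICE=ens33
ONBOOT=yes                # bring the interface up at boot
IPADDR=192.168.194.10     # placeholder template IP; change on each cloned node
NETMASK=255.255.255.0
GATEWAY=192.168.194.2     # VMware NAT gateway
DNS1=192.168.194.2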

After saving, check whether the NIC changes have taken effect.

Disable the firewall service:

systemctl stop firewalld.service && systemctl disable firewalld.service

Test: ping the gateway and an external address such as baidu.com. In my case the DNS resolver file still needs some tweaking, but it does not block external access, so I'll adjust it later.

If external addresses cannot be reached, edit the DNS resolver file /etc/resolv.conf

Add: nameserver 8.8.8.8

Save and exit.

Step 4: k8s node configuration tuning and required settings

1. Disable the swap partition

Edit the file: vi /etc/fstab, then comment out the swap entry by adding '#' in front of it.
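The same can be done from the shell; a minimal sketch (the sed expression is just one common way to comment out the swap entry):

swapoff -a                              # turn swap off for the running system
sed -ri 's/.*swap.*/#&/' /etc/fstab     # comment out the swap line so it stays off after reboot
free -m                                 # the Swap row should now show 0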

2. Adjust kernel parameters for k8s

modprobe br_netfilter

echo "modprobe br_netfilter" >> /etc/profile

# (Adjust kernel settings for k8s: make bridged traffic go through iptables rule evaluation and enable IP forwarding)

cat > /etc/sysctl.d/k8s.conf << end

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_forward = 1

end

# (Enable bridge netfilter checks and IPv4 forwarding)

Verify the settings were applied: sysctl -p /etc/sysctl.d/k8s.conf

3. Disable the firewall and SELinux (skip if already done)

systemctl stop firewalld.service && systemctl disable firewalld.service 
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

# Check whether SELinux is disabled: getenforce (takes full effect after a reboot)

4. Configure local package repositories on the k8s nodes

# Back up the existing repo configuration

cd /etc/yum.repos.d/

mkdir repo_bak

mv *.repo repo_bak/

————————————————————

# Replace the base/epel repositories with Aliyun mirrors

wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo

wget -O /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo

[Error 1] -bash: wget: command not found

Fix: install wget first (yum install -y wget); reference: Linux(CentOS 7)环境下安装wget_centos7没有wget-CSDN博客

[Error 2] wget: unable to resolve host address "mirrors.aliyun.com"

Fix: DNS resolution is failing; check the nameserver entries in /etc/resolv.conf (see the DNS note in Step 3). Reference: CentOS8 wget: 无法解析主机地址 “mirrors.aliyun.com”-CSDN博客

————————————————————

# Configure the Docker CE repository

vi /etc/yum.repos.d/docker-ce.repo

[docker-ce-stable]

name=Docker CE Stable - $basearch

baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7.9/x86_64/stable

enabled=1

gpgcheck=1

gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

# Clear the yum cache and retry installing Docker CE: yum clean all && yum install docker-ce

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

[Error 1] -bash: yum-config-manager: command not found

Fix: yum-config-manager comes from the yum-utils package (yum install -y yum-utils); reference: -bash: yum-config-manager: command not found解决方法-CSDN博客

[Error 2] Could not fetch/save url http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to file /etc/yum.repos.d/docker-re.repo: [Errno 14] HTTP Error 404 - Not Found

Fix: the mirror was rejecting repeated requests; switching to a different network resolved it.

————————————————————

# Configure the Kubernetes repository

vim /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=0

yum clean all && yum makecache

————————————————————

5. Install iptables

        yum install -y iptables-services

        Flush the rules: iptables -F

        List the rules: iptables -L
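Optionally (a sketch, not required for a lab setup), persist the emptied ruleset so the iptables service keeps it across restarts:

service iptables save    # writes the current rules to /etc/sysconfig/iptables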

6. Enable NTP time synchronization

yum install ntpdate -y 

ntpdate cn.pool.ntp.org

[Error] -bash: ntpdate: command not found

Fix: yum -y install ntpdate
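ntpdate performs a one-off sync; optionally (a sketch reusing the cn.pool.ntp.org server above), schedule a periodic re-sync via cron:

# re-sync every 30 minutes via root's crontab
echo "*/30 * * * * /usr/sbin/ntpdate cn.pool.ntp.org >/dev/null 2>&1" >> /var/spool/cron/root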

7. ipvs-related performance optimization

ipvs (IP Virtual Server): a software load balancer

Both ipvs and iptables are built on the kernel's netfilter framework

For forwarding, ipvs's table lookups outperform iptables

Download this file (ipvs.modules) from the Aliyun site and upload it to the /etc/sysconfig/modules/ directory
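The exact file comes from the download above, but a typical ipvs.modules script looks roughly like this sketch: it just loads the ipvs scheduler modules plus connection tracking (the modinfo guard skips modules that don't exist on a given kernel, e.g. nf_conntrack vs. nf_conntrack_ipv4):

#!/bin/bash
# load ipvs schedulers and connection-tracking modules
ipvs_modules="ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack nf_conntrack_ipv4"
for mod in $ipvs_modules; do
    /sbin/modinfo -F filename "$mod" > /dev/null 2>&1 && /sbin/modprobe "$mod"
done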

Make it executable: chmod 755 /etc/sysconfig/modules/ipvs.modules

Run it: /etc/sysconfig/modules/ipvs.modules

Check the loaded modules: lsmod | grep ip_vs

8. Install dependency packages on the k8s nodes

yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack telnet

9. Install the Docker container engine on the k8s nodes

# Install Docker:

yum install -y docker-ce docker-ce-cli containerd.io

systemctl start docker && systemctl enable docker && systemctl status docker 

# Configure Docker registry mirrors:

vi /etc/docker/daemon.json

{ "registry-mirrors":["https://registry.dockercn.com","https://docker.mirrors.ustc.edu.cn","http://hub-mirror.c.163.com"], "exec-opts": ["native.cgroupdriver=systemd"] }

# Upload the k8s image package:

Download this package and upload it to the ~ directory

Load the packaged k8s images: docker load -i k8simage-1-20-6.tar.gz

List the local images: docker images

# Install the k8s packages (version 1.20.6)

kubeadm (bootstraps k8s master/worker nodes)

kubectl (command-line management tool)

kubelet (node agent)

yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
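It is also worth enabling kubelet in the template so it starts on boot; it will keep restarting until kubeadm init/join runs on the node, which is expected:

systemctl enable kubelet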

Step 5: Clone VMs from the template

Clone each node from the template and change its IP address accordingly.

Write the following hostname entries into /etc/hosts on every node (one way to distribute the file is sketched after the list):

vi /etc/hosts

192.168.194.21 front-endpoint1
192.168.194.22 front-endpoint2
192.168.194.31 k8s-master-node1
192.168.194.32 k8s-master-node2
192.168.194.33 k8s-master-node3
192.168.194.41 k8s-worker-node1
192.168.194.42 k8s-worker-node2
192.168.194.43 k8s-worker-node3
192.168.194.44 k8s-worker-node4
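A minimal sketch for pushing one node's /etc/hosts to all the others, assuming root SSH access between the nodes and the hostnames defined above:

for host in front-endpoint2 k8s-master-node1 k8s-master-node2 k8s-master-node3 \
            k8s-worker-node1 k8s-worker-node2 k8s-worker-node3 k8s-worker-node4; do
    scp /etc/hosts root@"$host":/etc/hosts    # prompts for a password unless SSH keys are set up
done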

Test: ping each node by hostname to confirm name resolution and connectivity.

Step 6: Deploy the nginx cluster on the front-end nodes

# Install nginx and keepalived on front-endpoint1 and front-endpoint2

yum install -y nginx keepalived

# Configure the front-end load-balancing nginx service (/etc/nginx/nginx.conf)

cd /etc/nginx

cp nginx.conf nginx.conf.bak

rm -rf nginx.conf

vi nginx.conf

Copy and paste the following directly into nginx.conf:

# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
 worker_connections 1024;
}
##### add to config start #####
stream {
log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
access_log /var/log/nginx/k8s-access.log main;
upstream k8s-apiserver {
server 192.168.194.31:6443; # Master1 IP:port
server 192.168.194.32:6443; # Master2 IP:port
server 192.168.194.33:6443; # Master3 IP:port
}
server {
listen 6443; # nginx proxy port
proxy_pass k8s-apiserver;
}
}
##### add to config end #####
http {
 log_format main '$remote_addr - $remote_user [$time_local] "$request" '
 '$status $body_bytes_sent "$http_referer" '
 '"$http_user_agent" "$http_x_forwarded_for"';
 access_log /var/log/nginx/access.log main;
 sendfile on;
 tcp_nopush on;
 tcp_nodelay on;
 keepalive_timeout 65;
 types_hash_max_size 4096;
 include /etc/nginx/mime.types;
 default_type application/octet-stream;
 server {
 listen 80;
 server_name _;
 location / {
 }
 }
}
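You can validate the configuration before starting nginx; note that the stream {} block needs the nginx stream module (the nginx-mod-stream package installed in Step 8), so run the check once that package is present:

nginx -t    # should report that the syntax is ok and the test is successful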

Step 7: Deploy keepalived on the front-end nodes

# Create the nginx health-check script

vi /etc/keepalived/check_nginx.sh
#!/bin/bash
# count running nginx daemon processes (started from .../sbin/nginx), excluding grep and this script itself
count=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$")
if [ "$count" -eq 0 ]; then
 # nginx is down: stop keepalived so the VIP fails over to the backup node
 systemctl stop keepalived
fi
chmod +x /etc/keepalived/check_nginx.sh

# Create the keepalived configuration file

Master node:

mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
 notification_email {
 acassen@firewall.loc
 failover@firewall.loc
 sysadmin@firewall.loc
 }
 notification_email_from Alexandre.Cassen@firewall.loc
 smtp_server 127.0.0.1
 smtp_connect_timeout 30
 router_id keepalive_master
 vrrp_skip_check_adv_addr
 vrrp_strict
 vrrp_garp_interval 0
 vrrp_gna_interval 0
}
vrrp_script keepalived_check_nginx {
 script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
 state MASTER
 interface ens33
 virtual_router_id 51
 priority 100
 advert_int 1
 authentication {
 auth_type PASS
 auth_pass 1111
 }
 virtual_ipaddress {
 192.168.194.20
 }
 
 track_script {
 keepalived_check_nginx
 }
}

Backup node:

mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
 notification_email {
 acassen@firewall.loc
 failover@firewall.loc
 sysadmin@firewall.loc
 }
 notification_email_from Alexandre.Cassen@firewall.loc
 smtp_server 127.0.0.1 # modify
 smtp_connect_timeout 30
 router_id keepalive_backup # modify
 vrrp_skip_check_adv_addr
 vrrp_strict
 vrrp_garp_interval 0
 vrrp_gna_interval 0
}
vrrp_script keepalived_check_nginx {
 script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
 state BACKUP
 interface ens33 # modify
 virtual_router_id 51
 priority 90 # modify
 advert_int 1
 authentication {
auth_type PASS
 auth_pass 1111
 }
 virtual_ipaddress {
 192.168.194.20 # modify
 }
 
 track_script {
 keepalived_check_nginx
 }
}

Step 8: Bring up the k8s front-end services and test master/backup failover

systemctl daemon-reload
yum install -y nginx-mod-stream
systemctl start nginx && systemctl enable nginx && systemctl status nginx
systemctl start keepalived && systemctl enable keepalived && systemctl status keepalived

ip add show displays the monitored floating IP (192.168.194.20) on whichever node currently holds it.

When the master node goes down, the VIP fails over to the backup node; it only fails back to the master once both nginx.service and keepalived.service on the master are running again.
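A quick failover drill (a sketch; run the stop on front-endpoint1 and the check on both nodes):

# on front-endpoint1 (master): simulate an nginx failure
systemctl stop nginx
# the check script stops keepalived, so the VIP should move to front-endpoint2
ip add show ens33 | grep 192.168.194.20
# recover: restart nginx and keepalived on front-endpoint1 to fail back
systemctl start nginx && systemctl start keepalived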

Step 9: Initialize the cluster with kubeadm

Use kubeadm to bootstrap the nodes; the initialization loads the configuration file kubeadm-config.yaml.

vi kubeadm-config.yaml

Copy and paste the following into kubeadm-config.yaml:

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.6
controlPlaneEndpoint: 192.168.194.20:6443   # floating IP (VIP)
imageRepository: registry.aliyuncs.com/google_containers   # image registry
apiServer:
  certSANs:   # node IPs
  - 192.168.194.20
  - 192.168.194.21
  - 192.168.194.22
  - 192.168.194.23
  - 192.168.194.31
  - 192.168.194.32
  - 192.168.194.33
  - 192.168.194.41
  - 192.168.194.42
  - 192.168.194.43
  - 192.168.194.44
networking:
  podSubnet: 10.244.0.0/16   # pod subnet
  serviceSubnet: 10.10.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

# Initialize the cluster with kubeadm

kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=SystemVerification
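If the init succeeds, kubeadm prints the follow-up commands; the usual next step is to set up kubectl's kubeconfig (the join commands for the other masters and the workers are also in that output):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes    # nodes stay NotReady until a CNI plugin (e.g. calico or flannel) is installed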

[Note 1] Leave a comment if you run into problems.

[Note 2] More content will be added later.
