Preface

This post only records the author's own approach and is for reference; rely on your own hands-on results.


1. Requirements

  • The Kubernetes tier may be installed with kubeadm
  • In the Kubernetes environment, use YAML manifests to create 2 Nginx Pods placed on two different nodes. Each Pod mounts a hostPath volume, with the node-local directory /data used for sharing; the two Pods must serve different test pages (content is up to you) so they can be told apart
  • Write a Service manifest that publishes the Nginx service with type NodePort on TCP port 30000
  • In the load-balancer tier, configure Keepalived + Nginx for highly available load balancing, so the service published by K8s is reachable via the VIP 192.168.13.100 and a custom port
  • Set up an iptables firewall server with dual NICs, configured with SNAT and DNAT so that external clients can reach the internal web service via 12.0.0.1

2. Architecture

Node               IP address                      Installed services
master01           192.168.13.10                   docker, kubeadm, kubelet, kubectl
node01             192.168.13.20                   docker, kubeadm, kubelet, kubectl
node02             192.168.13.30                   docker, kubeadm, kubelet, kubectl
nginx01            192.168.13.40                   nginx, keepalived
nginx02            192.168.13.50                   nginx, keepalived
iptables/gateway   192.168.13.60 / 12.0.0.1        iptables (dual NIC, SNAT/DNAT)
client             12.0.0.200 (VMnet1, host-only)  -

3. Setup

3.1 Initialize the environment

#Run on all nodes: stop the firewall, disable SELinux and swap, add local hostname resolution, tune kernel parameters, and enable time synchronization
systemctl stop firewalld
systemctl disable firewalld
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config

swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab 

cat >> /etc/hosts << EOF
192.168.13.10 master01
192.168.13.20 node01
192.168.13.30 node02
192.168.13.40 nginx01
192.168.13.50 nginx02
EOF

#The bridge sysctls below require the br_netfilter kernel module
modprobe br_netfilter

cat > /etc/sysctl.d/k8s.conf << EOF
#Enable bridge mode so bridged traffic is passed to the iptables chains
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
#Disable IPv6
net.ipv6.conf.all.disable_ipv6=1
net.ipv4.ip_forward=1
EOF

sysctl --system

yum install ntpdate -y
ntpdate time.windows.com

3.2 Build the k8s cluster (kubeadm install)

#Install Docker on all nodes
yum install -y yum-utils device-mapper-persistent-data lvm2 
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo 
yum install -y docker-ce docker-ce-cli containerd.io

mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF

systemctl daemon-reload
systemctl restart docker.service
systemctl enable docker.service

docker info | grep "Cgroup Driver"
#On success it prints: Cgroup Driver: systemd

#Install kubeadm, kubelet and kubectl on all nodes
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

#Install kubelet/kubeadm/kubectl and enable kubelet at boot
yum install -y kubelet-1.20.11 kubeadm-1.20.11 kubectl-1.20.11
systemctl enable kubelet.service
#End of environment initialization



#On the master node, list the images required for initialization
kubeadm config images list
cd /opt
#Upload the v1.20.11.zip archive to /opt (it contains all required images; if you don't have it, download the images from the official site)
unzip v1.20.11.zip -d /opt/k8s
cd /opt/k8s/v1.20.11
for i in $(ls *.tar); do docker load -i $i; done

#Copy the images and scripts to the worker nodes
scp -r /opt/k8s root@192.168.13.20:/opt
scp -r /opt/k8s root@192.168.13.30:/opt

kubeadm config print init-defaults > /opt/kubeadm-config.yaml
cd /opt
vim kubeadm-config.yaml
line 11: localAPIEndpoint:
line 12: advertiseAddress: 192.168.13.10
line 34: kubernetesVersion: v1.20.11
line 35: networking:
line 36: dnsDomain: cluster.local
line 37: podSubnet: "10.244.0.0/16"
line 38: serviceSubnet: 10.96.0.0/16
line 39: scheduler: {}
#Then append the following at the very end (the --- must be included: this is a YAML file and the separator cannot be omitted)
--- 
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
#End of the config file

#Initialize the cluster from the config file
kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log

#Run the following three commands (they are printed in the output of the command above)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

#Optionally inspect the kubeadm-init log, the kubernetes config directory, and the directory holding the CA certificates and keys
#less kubeadm-init.log
#ls /etc/kubernetes/
#ls /etc/kubernetes/pki	

#Load the images on all worker nodes
cd /opt/k8s/v1.20.11
for i in $(ls *.tar); do docker load -i $i; done
#Run the join command on all worker nodes (it is the last part of the init output on the master: one long line, different every run)
kubeadm join 192.168.13.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<a long hash, different every time>
#On success you will see output like: Run 'kubectl get nodes' ... this node has joined the cluster

#Check the cluster status
kubectl get cs

#Note: if kubectl get cs reports the cluster unhealthy, edit the following two files (on the master)
#vim /etc/kubernetes/manifests/kube-scheduler.yaml 
#vim /etc/kubernetes/manifests/kube-controller-manager.yaml
#Make the following changes:
#change --bind-address=127.0.0.1 to --bind-address=192.168.13.10		#the IP of control-plane node master01
#under the httpGet: fields, change host from 127.0.0.1 to 192.168.13.10 (two places)
#- --port=0					# search for port=0 and comment that line out
#systemctl restart kubelet

cd /opt
#Upload flannel.tar to /opt on all nodes
docker load -i flannel.tar

#On master01, upload kube-flannel.yml to /opt and deploy the CNI network (lines 39-44 of the file may need adjusting)
cd /opt
kubectl apply -f kube-flannel.yml
#It may take a few seconds before the nodes show Ready
kubectl get nodes

3.3 Write YAML manifests to create the Pods

  • Note: the YAML format is very strict; do not drop a single character, including the three dashes
  • Note: label the worker nodes first so a Pod can be scheduled to a specific node by label (kubeadm names nodes after their hostnames)
kubectl label node node01 disktype=han
kubectl label node node02 disktype=wang
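Since each Pod mounts the node-local file /data/pod/index.html, that file must exist on the node before the Pod starts. A minimal sketch (the page text here is made up; run the first pair of commands on node01 and the second on node02):

```shell
# On node01: create the test page served by the first Pod
mkdir -p /data/pod
echo "this is node01 (han)" > /data/pod/index.html

# On node02: create a different test page, to tell the two Pods apart
mkdir -p /data/pod
echo "this is node02 (wang)" > /data/pod/index.html
```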

---
apiVersion: v1
kind: Namespace
metadata:
  name: ns-nginx
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx
  name: nginx01
  namespace: ns-nginx
spec:
  volumes:
  - name: han
    hostPath:
      path: /data/pod/index.html

  containers:
  - image: nginx:1.14
    name: nginx01
    ports:
    - containerPort: 80
    volumeMounts:
    - name: han
      mountPath: /usr/share/nginx/html/index.html
      readOnly: true
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  nodeSelector:
    disktype: han
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: ns-nginx
spec:
  ports:
  - nodePort: 30000
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: NodePort
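The manifest above defines only the first Pod; the second Pod, pinned to node02 via the other label, is symmetric. A sketch (the names mirror the first Pod and are otherwise arbitrary):

```yaml
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx                      # same label, so the Service selects both Pods
  name: nginx02
  namespace: ns-nginx
spec:
  volumes:
  - name: wang
    hostPath:
      path: /data/pod/index.html    # node-local test page under /data

  containers:
  - image: nginx:1.14
    name: nginx02
    ports:
    - containerPort: 80
    volumeMounts:
    - name: wang
      mountPath: /usr/share/nginx/html/index.html
      readOnly: true
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  nodeSelector:
    disktype: wang                  # schedules this Pod onto node02
```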

3.4 Configure the nginx load-balancer nodes

#On load-balancer node nginx01
cat > /etc/yum.repos.d/nginx.repo << EOF
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/\$basearch/
gpgcheck=0
EOF
yum install -y nginx

vim /etc/nginx/nginx.conf
#Add the following (at the same level as the http block)
stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
 
    access_log  /var/log/nginx/k8s-access.log  main;
 
    upstream k8s-apiservers {
        server 192.168.13.20:30000;
        server 192.168.13.30:30000;
    }
    server {
        listen 1314;
        proxy_pass k8s-apiservers;
    }
}
#End of added content

nginx -t
systemctl enable --now nginx

yum install -y keepalived
vim /etc/keepalived/keepalived.conf
	line 10: smtp_server 127.0.0.1
	line 12: router_id NGINX_01
	lines 13-16: delete
	line 14: insert the periodic check-script block
	vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
	}

	line 21: interface ens33
	line 30: 192.168.13.100/32
	lines 31-32: delete the IP addresses
	line 33: keep the two closing braces and delete everything below them
	second-to-last line: insert above the final closing brace
    track_script {
        check_nginx
    }

systemctl enable --now keepalived.service
systemctl status keepalived.service
ip addr

#Write the nginx health-check script
vim /etc/nginx/check_nginx.sh
#!/bin/bash
#egrep -cv "grep|$$" filters out lines containing grep and the current shell's PID ($$), then counts the rest
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi

chmod +x /etc/nginx/check_nginx.sh
crontab -e
	*/1 * * * * /etc/nginx/check_nginx.sh

#On load-balancer node nginx02
cat > /etc/yum.repos.d/nginx.repo << EOF
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/\$basearch/
gpgcheck=0
EOF
yum install -y nginx
yum install -y keepalived

crontab -e
	*/1 * * * * /etc/nginx/check_nginx.sh

#On load-balancer node nginx01
scp /etc/nginx/nginx.conf root@192.168.13.50:/etc/nginx/nginx.conf
scp /etc/keepalived/keepalived.conf root@192.168.13.50:/etc/keepalived/keepalived.conf
#Three settings in the copied /etc/keepalived/keepalived.conf must differ on the backup (the usual HA trio: router_id, state and priority)
scp /etc/nginx/check_nginx.sh root@192.168.13.50:/etc/nginx/check_nginx.sh

systemctl enable --now nginx
systemctl enable --now keepalived.service

3.5 Configure the firewall/gateway policies

systemctl stop firewalld
setenforce 0

vim /etc/sysctl.conf
	net.ipv4.ip_forward = 1
sysctl -p

yum install iptables -y

cd /etc/sysconfig/network-scripts/
ls
cp ifcfg-ens33 ifcfg-ens35
vim ifcfg-ens35
	#edit the NIC settings for the host-only side (basic networking)
ifdown ens35 && ifup ens35
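For reference, a minimal ifcfg for the external (host-only) NIC might look like this (assumed values; adjust to your environment):

```
TYPE=Ethernet
BOOTPROTO=static
NAME=ens35
DEVICE=ens35
ONBOOT=yes
# external side of the gateway, reachable by the 12.0.0.0/24 client
IPADDR=12.0.0.1
NETMASK=255.255.255.0
```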

iptables -t nat -A PREROUTING -i ens35 -d 12.0.0.1 -p tcp --dport 80 -j DNAT --to 192.168.13.100:1314
iptables -t nat -I POSTROUTING -s 12.0.0.0/24 -o ens33 -j SNAT --to-source 192.168.13.60

#Client (CentOS) NIC configuration
IPADDR="12.0.0.200"
GATEWAY="12.0.0.1"

4. Conclusion

  • Takeaway: the YAML manifest format is very strict
  • Takeaway: the firewall's iptables rules must cover both directions (SNAT and DNAT)