[Ops] A Minimal Kubernetes Deployment Tutorial



Preface

This article documents a Kubernetes deployment process. Readers are welcome to borrow from it as needed; it may not suit every scenario.
Operating system: CentOS 7
Machine configuration:

ip           hostname    cpu  memory  role
192.168.0.1  hostname-1  8    32G     k8smaster
192.168.0.2  hostname-2  8    32G     k8snode1
192.168.0.3  hostname-3  8    32G     k8snode2

Deploying Kubernetes

Prerequisites

  1. Disable the firewall, SELinux, and swap
systemctl status firewalld
# Stop and disable the firewall
systemctl stop firewalld 
systemctl disable firewalld
# Permanently disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
# Permanently disable swap
sed -ri 's/.*swap.*/#&/' /etc/fstab
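The sed edit above only takes effect after a reboot; `swapoff -a` disables swap for the running session. What the fstab edit does can be sketched on a scratch copy (the `/tmp/fstab.demo` path is hypothetical, for illustration only):

```shell
# Disable swap immediately for the current session
swapoff -a
# Demonstrate the sed command on a scratch copy of fstab
printf '/dev/sda1 / ext4 defaults 0 0\n/dev/sda2 swap swap defaults 0 0\n' > /tmp/fstab.demo
sed -ri 's/.*swap.*/#&/' /tmp/fstab.demo
# Only the swap line gets commented out
grep swap /tmp/fstab.demo   # -> #/dev/sda2 swap swap defaults 0 0
```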
  2. Edit /etc/hosts
vim /etc/hosts
# Add the following entries
192.168.0.1 hostname-1 hostname-1.hostname.com k8smaster
192.168.0.2 hostname-2 hostname-2.hostname.com k8snode1
192.168.0.3 hostname-3 hostname-3.hostname.com k8snode2

  3. Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# Apply the settings
sysctl --system
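The `net.bridge.*` keys only exist when the `br_netfilter` kernel module is loaded; on a fresh CentOS 7 install, `sysctl --system` may report them as missing. A sketch of loading the module now and on every boot, assuming it is not already configured on the machine:

```shell
# Load br_netfilter at every boot so the bridge sysctls exist
cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF
# Load it now, then re-apply the sysctl settings
modprobe br_netfilter
sysctl --system
```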

Installing Docker

# Installs the latest version by default
curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
# Start Docker and enable it on boot
systemctl enable docker && systemctl start docker
docker --version

# Configure a registry mirror
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
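A malformed daemon.json prevents the Docker daemon from starting on its next restart, so it is worth validating the JSON first. A sketch using a scratch copy (the `/tmp/daemon.json.demo` path is hypothetical; any JSON parser will do, `python3` here assumes one is installed):

```shell
# Validate daemon.json syntax before restarting Docker
cat > /tmp/daemon.json.demo << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
python3 -m json.tool /tmp/daemon.json.demo > /dev/null && echo "daemon.json OK"
```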

# Fix the Docker bridge-network warning
vi /etc/sysctl.conf 
# Add the following two lines
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
# Apply
sysctl -p

# Fix the IPv4 forwarding warning
vim /usr/lib/sysctl.d/00-system.conf 
# Add the following line
net.ipv4.ip_forward=1 
# Restart networking and Docker
systemctl restart network 
systemctl restart docker

Installing Kubernetes

# Install the Kubernetes components
# Configure the Kubernetes yum repository
vim /etc/yum.repos.d/kubernetes.repo
# Add the following content
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

# Install the Kubernetes components and enable kubelet on boot
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
systemctl enable kubelet

# Initialize the master node (run on the master only)
# --pod-network-cidr must match the CNI plugin's Pod CIDR; flannel defaults to 10.244.0.0/16
kubeadm init \
--apiserver-advertise-address=192.168.0.1 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.18.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16

# Configure kubectl on the master
mkdir -p $HOME/.kube 
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 
sudo chown $(id -u):$(id -g) $HOME/.kube/config 
kubectl get nodes

# Run on node1 and node2 to join the cluster (the token and hash below are
# examples; use the values printed by your own `kubeadm init`)
kubeadm join 192.168.0.1:6443 --token 4gvq3k.guj3oesme1j4g101 --discovery-token-ca-cert-hash sha256:7843b49894cc4d09cdcc19e912dac066e59b333d42402fb42e21be8af6ab4bca
# Tokens expire after 24 hours; regenerate the full join command with:
# kubeadm token create --print-join-command

Installing the CNI Network Plugin

# Download the flannel CNI manifest
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Install the plugin on the master once the cluster is up
kubectl apply -f kube-flannel.yml
# Check that the kube-flannel pods are running
kubectl get pod -n kube-system

Installing the NFS Network File Service

# Set up the NFS server (here on the master)
yum -y install nfs-utils 
mkdir -p /data/nfs 
chmod -R 777 /data/nfs/ 
vim /etc/exports
# Add the following export
/data/nfs *(rw,no_root_squash,sync)
# Reload the exports
exportfs -r
# Verify the configuration
exportfs

systemctl restart rpcbind && systemctl enable rpcbind
systemctl restart nfs && systemctl enable nfs
# Check the RPC registrations
rpcinfo -p localhost 
# Test the export
showmount -e 192.168.0.1
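To confirm the export works end to end, it can be mounted from one of the worker nodes. A sketch under the setup above (the `/mnt/nfs-test` mount point is hypothetical; `nfs-utils` must also be installed on the client):

```shell
# On a worker node: install the client tools and test-mount the export
yum -y install nfs-utils
mkdir -p /mnt/nfs-test
mount -t nfs 192.168.0.1:/data/nfs /mnt/nfs-test
# Write a file to verify rw access, then clean up
touch /mnt/nfs-test/hello && ls /mnt/nfs-test
umount /mnt/nfs-test
```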

Creating a PV

vim nfs-pv.yaml
# File contents
apiVersion: v1 
kind: PersistentVolume 
metadata: 
  name: nfs-pv 
  labels: 
    pv: nfs-pv 
spec: 
  capacity: 
    storage: 1000Mi 
  accessModes: 
    - ReadWriteMany 
  persistentVolumeReclaimPolicy: Retain 
  storageClassName: nfs 
  nfs:
    server: 192.168.0.1 
    path: "/data/nfs"
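The manifest still has to be applied before a claim can bind to it; run on the master, where kubectl is configured:

```shell
# Create the PV and confirm it shows as Available
kubectl apply -f nfs-pv.yaml
kubectl get pv nfs-pv
```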
    
    
#PVC example
#kind: PersistentVolumeClaim 
#apiVersion: v1 
#metadata: 
#  name: demo-pvc 
#  namespace: 'demo' 
#  labels: 
#    app: demo 
#spec: 
#  accessModes: 
#    - 'ReadWriteMany' 
#  resources: 
#    requests: 
#      storage: '500Mi' 
#  storageClassName: nfs

Summary

This completes the basic Kubernetes installation. Environment differences may still cause issues along the way; check the logs to troubleshoot and fix them. Feedback and discussion are welcome.
