Manually Building a Kubernetes Cluster
The following is for reference only; using Rancher to set up the cluster is recommended.
1. Preparation before installation
A compatible Linux host.
2 GB or more of RAM per machine (any less will leave little room for your apps).
2 CPUs or more.
Full network connectivity between all machines in the cluster (public or private network is fine).
Unique hostname, MAC address, and product_uuid for every node.
Certain ports are open on your machines.
Swap disabled. You MUST disable swap in order for the kubelet to work properly.
1.1 Verify the MAC address and product_uuid are unique for every node
You can get the MAC address of the network interfaces using the command ip link or ifconfig -a
On CentOS, the MAC address can also be checked with:
nmcli device show    # shows device details, including the NIC MAC address
The product_uuid can be checked by using the command sudo cat /sys/class/dmi/id/product_uuid
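As a convenience, all three identifiers can be collected on each node in one pass and compared across nodes (a minimal sketch):
hostname
ip link show | awk '/link\/ether/ {print $2}'    # MAC addresses of all interfaces
sudo cat /sys/class/dmi/id/product_uuid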
1.2 Check required ports
nc 127.0.0.1 6443 -v    # example: probe the API server port locally
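Once the control plane is running, the same kind of check can be made from another machine to confirm the API server port is reachable through the firewall (a sketch; replace the IP with your master's address):
nc -zv 192.168.10.110 6443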
1.3 Host time synchronization
# Optional: one-shot sync with ntpdate
# yum install ntpdate -y
# ntpdate time.windows.com
# Set the time zone
timedatectl set-timezone Asia/Shanghai
timedatectl
# Install the chrony time-sync tool
yum makecache fast
yum -y install chrony
systemctl enable --now chronyd
# Force an immediate time sync
chronyc -a makestep
date
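To confirm chrony is actually synchronizing, its configured time sources can be listed:
chronyc sources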
1.4 Disable the firewall
Stop and disable the firewalld or iptables service:
# firewalld
systemctl stop firewalld
systemctl disable firewalld
# iptables
systemctl stop iptables
systemctl disable iptables
1.5 Disable SELinux and swap
Disable SELinux:
# Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0                                                            # temporary, until reboot
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config    # permanent
Disable swap:
swapoff -a        # temporary
vim /etc/fstab    # comment out the swap line to disable it permanently
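A non-interactive way to comment out the swap line and verify the result (a sketch; review /etc/fstab afterwards):
sed -ri 's/.*swap.*/#&/' /etc/fstab
free -m    # the Swap row should show 0 after swapoff -a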
1.6 Set hostnames and configure /etc/hosts so the hosts can reach each other by name
Set the hostname, for example: hostnamectl set-hostname centos1
Then add the following entries to /etc/hosts on every host:
192.168.10.110 centos1
192.168.10.111 centos2
192.168.10.112 centos3
192.168.10.113 centos4
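The entries can be appended on each host in one step and name resolution verified afterwards (a sketch using the addresses above):
cat >> /etc/hosts <<EOF
192.168.10.110 centos1
192.168.10.111 centos2
192.168.10.112 centos3
192.168.10.113 centos4
EOF
ping -c 1 centos2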
1.7 Enable IP forwarding
Temporary: echo "1" > /proc/sys/net/ipv4/ip_forward
Permanent: edit /etc/rc.d/rc.local and add the echo "1" > /proc/sys/net/ipv4/ip_forward line to that file
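Alternatively, the same flag can be applied through sysctl; the net.ipv4.ip_forward = 1 entry written in section 1.8 below also makes it persistent across reboots:
sysctl -w net.ipv4.ip_forward=1       # apply immediately
cat /proc/sys/net/ipv4/ip_forward     # should print 1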
1.8 Configure bridge traffic forwarding
Pass bridged IPv4 traffic to the iptables chains:
cat <<EOF >/etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Load the bridge netfilter module first, so the bridge-nf-call settings can take effect
modprobe br_netfilter
# Verify the module is loaded
lsmod | grep br_netfilter
# Reload sysctl settings
sysctl --system
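To make sure br_netfilter is loaded again after a reboot, it can be listed in a modules-load.d file (a sketch; the file name k8s.conf is arbitrary):
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF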
1.9 Configure IPVS support
kube-proxy supports two proxy modes, iptables and ipvs. IPVS performs better, but the ipset/ipvsadm tools and the IPVS kernel modules have to be installed and loaded manually.
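The two blocks below differ only in the conntrack module name, so check the kernel version first to decide which one applies:
uname -r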
# ⚠️ For kernels 4.19 and later, run the following
yum install -y ipset ipvsadm
cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod|grep -e ip_vs -e nf_conntrack
# ⚠️ For kernels older than 4.19, run the following
yum install -y ipset ipvsadm
cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod|grep -e ip_vs -e nf_conntrack_ipv4
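Once kube-proxy is running in ipvs mode, the virtual servers it creates can be inspected with ipvsadm (the list stays empty until the cluster is up):
ipvsadm -Ln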
2 Install Docker
Docker is the base runtime for the cluster and must be installed on every node; see the Docker installation guide for the detailed steps.
Kubernetes recommends the systemd cgroup driver, so change Docker's Cgroup Driver:
# Check Docker's Cgroup Driver (the default is cgroupfs)
docker info|grep Cgroup
# Change the cgroup driver; create daemon.json manually if it does not exist
cat /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
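If daemon.json does not exist yet, the same content can be written non-interactively (a minimal sketch):
mkdir -p /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF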
# Restart Docker for the change to take effect
systemctl daemon-reload
systemctl restart docker
3 Installing kubeadm, kubelet and kubectl
kubeadm: the command to bootstrap the cluster.
kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
kubectl: the command line util to talk to your cluster.
3.1 Install the node components
Install kubeadm, kubelet, and kubectl on every node:
# Configure the Aliyun yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# List the available versions
yum list kubectl --showduplicates
# Note ⚠️: installing kubeadm alone would pull in the latest kubelet and kubectl automatically, so pin all three versions
yum -y install kubeadm-1.22.6 kubelet-1.22.6 kubectl-1.22.6
# Note ⚠️: enable only adds kubelet to the boot sequence; there is no need to start it manually
systemctl enable kubelet
systemctl status kubelet
# Check the versions
kubeadm version
kubectl version
kubelet --version
Change the kubelet cgroup driver (and set the kube-proxy mode):
# Configure the kubelet cgroup driver and the kube-proxy mode
cat <<EOF > /etc/sysconfig/kubelet
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
EOF
3.2 Set up the master node
# List the images kubeadm will use
kubeadm config images list
# Images can be pre-pulled here, or pulled automatically during the init step below
# kubeadm config images pull
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers
# Note ⚠️: only the advertise address in the first option needs to match your master's IP; the rest can stay as-is
kubeadm init \
--apiserver-advertise-address=192.168.10.110 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.22.6 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12
# 🔔 The output ends with a "kubeadm join" command containing the token
# Configure kubectl for the current user
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
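At this point kubectl can talk to the API server; a quick check (the master will typically show NotReady until the network add-on from section 3.4 is deployed):
kubectl get nodes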
3.3 Join the worker nodes
Run the kubeadm join command on each worker node to add it to the cluster.
# On the master, generate a join command (it carries the same token information printed by "kubeadm init")
kubeadm token create --print-join-command --ttl 0
kubeadm join 192.168.10.110:6443 --token ekbtpj.wna3s728uxhxzzue \
--discovery-token-ca-cert-hash sha256:3937419b60618394f4f65c193ba3b6dba086239b8833bdfb08d9ebf5b7831519
3.4 Deploy the CNI network add-on
Kubernetes supports several network plugins, such as flannel, calico, and canal; flannel is used here.
# Run on the master
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Check the pods (all pods showing Running means the cluster is healthy)
kubectl get pods -n kube-system
kubectl get nodes
Check the cluster health:
kubectl get cs
kubectl cluster-info
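As an optional smoke test, a simple Deployment can be created and exposed to confirm that scheduling and networking work (a minimal sketch; the nginx image and the NodePort service are only examples):
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods,svc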