This article walks through deploying a K8S test cluster based on Ubuntu 20.04 + kubeadm 1.25.0 + containerd.io 1.6.8.

First, install virtualization software in advance: Windows users can use VMware, and macOS users can use Parallels. The operating system the cluster runs on can be Ubuntu or CentOS; the corresponding download links are at the end of this article.

Before starting the installation, let's plan the test cluster environment:

| K8S role | Control node | Worker node 1 | Worker node 2 |
| --- | --- | --- | --- |
| OS | Ubuntu 20.04 | Ubuntu 20.04 | Ubuntu 20.04 |
| Hardware | 2 cores, 4 GB RAM, 40 GB disk | 2 cores, 4 GB RAM, 40 GB disk | 2 cores, 4 GB RAM, 40 GB disk |
| Network | NAT (shared network) | NAT (shared network) | NAT (shared network) |
| IP | 10.211.55.30 | 10.211.55.31 | 10.211.55.32 |
| Hostname | k8s-master-1 | k8s-node-1 | k8s-node-2 |
| K8S components | calico, containerd, etcd, kubelet, kube-apiserver, controller-manager, kube-scheduler, kube-proxy | calico, containerd, kubelet, coredns, kube-proxy | calico, containerd, kubelet, coredns, kube-proxy |
| K8S version | 1.25.0 | 1.25.0 | 1.25.0 |

Note: The control and worker nodes can get by with as little as 2 GB of RAM each, but if resources allow, 4 GB is preferable.

Next, follow the steps below to install the K8S test cluster.

Step 1: Perform some initial configuration on the freshly installed systems.

Note: The following operations and configuration must be performed on all three nodes.

1. Configure a static IP

$ vim /etc/netplan/01-network-manager-all.yaml
# Paste the following configuration; on each node change enp0s5.addresses to that node's own IP
network:
  version: 2
  renderer: NetworkManager
  ethernets:
    enp0s5:
      addresses: [10.211.55.30/24]
      gateway4: 10.211.55.1
      nameservers:
        addresses: [10.211.55.1]

# Run the following command to apply the change immediately
$ netplan apply

Note: At this point the system's network configuration may misbehave; go to System Settings -> Network and delete the system's default network connection.
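If you prefer the command line, here is a minimal sketch using nmcli (assuming the renderer is NetworkManager, as in the netplan file above; the connection name "Wired connection 1" is only an example and will differ on your system):

# List the current NetworkManager connections
$ nmcli connection show
# Delete the stale default connection (use the name shown on your system)
$ nmcli connection delete "Wired connection 1"
# Re-apply the netplan configuration
$ netplan apply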

2. Configure hostnames:

# On 10.211.55.30
$ hostnamectl set-hostname k8s-master-1 && bash
# On 10.211.55.31
$ hostnamectl set-hostname k8s-node-1 && bash
# On 10.211.55.32
$ hostnamectl set-hostname k8s-node-2 && bash

3. Configure passwordless SSH between hosts (optional)

root@k8s-master-1:~# ssh-keygen        # press Enter to accept all defaults
root@k8s-master-1:~# ssh-copy-id k8s-node-1
root@k8s-master-1:~# ssh-copy-id k8s-node-2

Note: Do the same on the other two nodes; when running ssh-copy-id, remember to change the target hostname accordingly (see the sketch below).
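For ssh-copy-id to work by hostname, each node has to be able to resolve the other nodes' names. A minimal sketch, assuming you have no internal DNS, is to append the IP/hostname pairs from the plan above to /etc/hosts on every node, and then repeat the key copy from the other nodes, for example on k8s-node-1:

# On all three nodes: map the cluster hostnames to their IPs
$ cat >> /etc/hosts <<EOF
10.211.55.30 k8s-master-1
10.211.55.31 k8s-node-1
10.211.55.32 k8s-node-2
EOF

# Example on k8s-node-1: generate a key and copy it to the other two nodes
root@k8s-node-1:~# ssh-keygen
root@k8s-node-1:~# ssh-copy-id k8s-master-1
root@k8s-node-1:~# ssh-copy-id k8s-node-2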

4. Disable swap

For performance reasons, K8S by default refuses to run with swap enabled. Run the following command on each of the three nodes:

root@k8s-master-1:~#  swapoff -a

To disable swap permanently, edit the configuration file; all three nodes need this change:

root@k8s-master-1:~# vim /etc/fstab
# Find the following line and comment it out with #
# /swapfile    none   swap    sw       0     0
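If you prefer not to edit the file by hand, here is a one-line sketch that comments out any swap entry in /etc/fstab (assuming the entry has "swap" as a whitespace-separated field, which is true for the default /swapfile line):

# Comment out every fstab line whose type field is swap, then turn swap off
$ sed -i '/\sswap\s/ s/^[^#]/#&/' /etc/fstab
$ swapoff -a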

5. Disable the ufw firewall

root@k8s-master-1:~# ufw disable
root@k8s-node-1:~# ufw disable
root@k8s-node-2:~# ufw disable

6. Disable SELinux (only relevant if SELinux tooling is installed; Ubuntu uses AppArmor by default)

root@k8s-master-1:~# setenforce 0
root@k8s-node-1:~# setenforce 0
root@k8s-node-2:~# setenforce 0

7. Tune kernel parameters

$ modprobe br_netfilter
$ cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
$ sysctl -p /etc/sysctl.d/k8s.conf
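modprobe only loads the module for the current boot. An optional sketch to make br_netfilter load automatically on every boot (via systemd-modules-load) and to verify the kernel parameters took effect:

# Load br_netfilter automatically at boot
$ echo br_netfilter > /etc/modules-load.d/k8s.conf
# Verify the values; both should report 1
$ sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward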

8. Configure the Alibaba Cloud (Aliyun) apt mirror for the system to speed up package installation.

Via the graphical interface:

  System Settings -> Software & Updates -> Download from -> "mirrors.aliyun.com"

Or edit it by hand:

$ vim /etc/apt/sources.list
# Replace the default http://us.archive.ubuntu.com/
# with http://mirrors.aliyun.com/
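Equivalently, a non-interactive sketch with sed (assuming the file still points at the default us.archive.ubuntu.com mirror; back it up first):

# Back up the original sources list, then swap in the Aliyun mirror
$ cp /etc/apt/sources.list /etc/apt/sources.list.bak
$ sed -i 's|http://us.archive.ubuntu.com|http://mirrors.aliyun.com|g' /etc/apt/sources.list
$ apt-get update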

9. Configure the Alibaba Cloud Kubernetes apt repository

$ apt-get update && apt-get install -y apt-transport-https
$ curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
$ cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
$ apt-get update

10. Configure the Alibaba Cloud Docker apt repository (the containerd.io package is distributed from this repository)

# Step 1: Install the required system tools
$ apt-get update
$ apt-get -y install apt-transport-https ca-certificates curl software-properties-common
# Step 2: Install the GPG key
$ curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# Step 3: Add the repository
$ add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# Step 4: Update the package index
$ apt-get -y update

11. Configure time synchronization

$ timedatectl set-timezone Asia/Shanghai
$ timedatectl set-ntp true
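To confirm the settings took effect, run timedatectl with no arguments; it shows the time zone and the NTP status:

# Check the current time zone and NTP status
$ timedatectl
# The output should show "Time zone: Asia/Shanghai" and an active/synchronized NTP service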

12. Enable IPVS

# Create a shell script ipvs.sh with the following content
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for k in ${ipvs_modules}; do
  /sbin/modinfo -F filename ${k} > /dev/null 2>&1
  if [ "$?" -eq 0 ]; then
    /sbin/modprobe ${k}
  fi
done

$ chmod +x ipvs.sh
$ bash ipvs.sh
$ lsmod | grep ip_vs
# Output like the following means the modules have been loaded into the kernel
ip_vs_ftp              16384  0
ip_vs_sed              16384  0
ip_vs_nq               16384  0
ip_vs_dh               16384  0
ip_vs_lblcr            16384  0
ip_vs_lblc             16384  0
ip_vs_wlc              16384  0
ip_vs_lc               16384  0
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  6
ip_vs                 176128  30 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
nf_nat                 49152  4 ip6table_nat,iptable_nat,xt_MASQUERADE,ip_vs_ftp
nf_conntrack          172032  5 xt_conntrack,nf_nat,nf_conntrack_netlink,xt_MASQUERADE,ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
libcrc32c              16384  4 nf_conntrack,nf_nat,nf_tables,ip_vs
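The modules loaded by ipvs.sh are lost on reboot. An optional sketch that lets systemd-modules-load reload them at every boot (same module list as the script above):

# Reload the IPVS modules automatically at boot
$ cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
EOF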

13. Install the containerd.io container runtime

$ apt-get install -y containerd.io=1.6.8-1
$ mkdir -p /etc/containerd
$ containerd config default > /etc/containerd/config.toml
# Edit the file
$ vim /etc/containerd/config.toml
# Make the following changes:
# Find the SystemdCgroup key and change it to SystemdCgroup = true
# Find the sandbox_image key and change it to sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"
# Enable containerd at boot and start it now
$ systemctl enable containerd --now
# Configure crictl to use containerd as the runtime by default
$ cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
$ systemctl restart containerd

# Configure an Alibaba Cloud registry mirror
$ vim /etc/containerd/config.toml
# Change: config_path = "/etc/containerd/certs.d"
# Create the directory
$ mkdir -p /etc/containerd/certs.d/docker.io
# Note: replace https://xxxxxxxx.mirror.aliyuncs.com below with your own mirror address;
# if you don't have one, you can apply for it for free on Alibaba Cloud.
$ cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://xxxxxxxx.mirror.aliyuncs.com"]
  capabilities = ["pull", "resolve"]
[host."https://registry.docker-cn.com"]
  capabilities = ["pull", "resolve"]
EOF
# Restart for the change to take effect
$ systemctl restart containerd

14. Install the K8S packages

$ apt install -y kubelet=1.25.0-00 kubeadm=1.25.0-00 kubectl=1.25.0-00
$ systemctl enable kubelet
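Optionally, pin the three packages so a routine apt upgrade does not move them off 1.25.0 unexpectedly:

# Prevent unattended upgrades of the K8S packages
$ apt-mark hold kubelet kubeadm kubectl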

Step 2: Initialize the K8S cluster with kubeadm

1. Generate the cluster initialization configuration file

$ crictl config runtime-endpoint /run/containerd/containerd.sock
$ kubeadm config print init-defaults > /etc/kubeadm.yaml

Edit /etc/kubeadm.yaml; a few key configuration items need to be changed.

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.211.55.30  # [Key setting] control-plane node IP
  bindPort: 6443                  # [Key setting] control-plane node port
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock  # [Key setting] use containerd as the container runtime
  imagePullPolicy: IfNotPresent
  name: k8s-master-1              # [Key setting] control-plane node hostname
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers  # [Key setting] Alibaba Cloud image registry
kind: ClusterConfiguration
kubernetesVersion: 1.25.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12  # [Key setting] Service CIDR
  podSubnet: 10.244.0.0/16     # [Key setting] Pod CIDR
scheduler: {}
# Add the following section to select ipvs mode
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
# Add the following section to use the systemd cgroup driver
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
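Optionally, before running kubeadm init you can pre-pull the control-plane images with the same configuration file; this makes the actual init faster and surfaces registry problems early:

# Pull the control-plane images from the Aliyun registry configured in kubeadm.yaml
$ kubeadm config images pull --config=/etc/kubeadm.yaml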

2. Initialize the cluster

$ kubeadm init --config=/etc/kubeadm.yaml --ignore-preflight-errors=SystemVerification

If you see output like the following, initialization succeeded:

Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities and service account keys on each node and then running the following as root:
  kubeadm join 10.211.55.30:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:f4c3be85c1a6b6b2524ed8990f3de195f14f77c703762a6ce7695c5c75c6a967 \
    --control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.211.55.30:6443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:f4c3be85c1a6b6b2524ed8990f3de195f14f77c703762a6ce7695c5c75c6a967

Following the prompt, run the commands:

root@k8s-master-1:~# mkdir -p $HOME/.kube
root@k8s-master-1:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@k8s-master-1:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config
# After running the commands above, check the cluster status
root@k8s-master-1:~# kubectl get nodes
NAME           STATUS     ROLES           AGE    VERSION
k8s-master-1   NotReady   control-plane   4m5s   v1.25.0
# The control-plane node has been initialized

Next, join the worker nodes to the cluster; run the following command on both worker nodes:

$ kubeadm join 10.211.55.30:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:f4c3be85c1a6b6b2524ed8990f3de195f14f77c703762a6ce7695c5c75c6a967 \
    --ignore-preflight-errors=SystemVerification
# If it succeeds, you will see output like the following
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

You may run into an error; fix it as follows:

# If you see the following error, the br_netfilter module has not been loaded into the kernel
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
  [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
# Run the following command to load br_netfilter into the kernel
$ modprobe br_netfilter
# Then rerun the join
$ kubeadm join 10.211.55.30:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:f4c3be85c1a6b6b2524ed8990f3de195f14f77c703762a6ce7695c5c75c6a967 \
    --ignore-preflight-errors=SystemVerification
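Note that the bootstrap token in kubeadm.yaml has ttl: 24h0m0s, so it expires after a day. If you join a node later and the token is no longer valid, you can print a fresh join command on the control-plane node:

# Generate a new token and print the matching join command
root@k8s-master-1:~# kubeadm token create --print-join-command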

Check the cluster status again: all nodes have joined. But why are they still NotReady? Don't panic; continue with Step 3.

root@k8s-master-1:~# kubectl get nodes
NAME           STATUS     ROLES           AGE     VERSION
k8s-master-1   NotReady   control-plane   15m     v1.25.0
k8s-node-1     NotReady   <none>          6m18s   v1.25.0
k8s-node-2     NotReady   <none>          2m46s   v1.25.0
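The NotReady status simply means no Pod network (CNI) plugin is installed yet, so the kubelet reports the node network as not ready and CoreDNS stays in Pending. You can confirm this (exact pod names and ages will differ) with:

# CoreDNS pods remain Pending until a CNI plugin such as Calico is installed
root@k8s-master-1:~# kubectl get pods -n kube-system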

Step 3: Install the Calico network plugin

Heads-up: depending on your network, you may need a proxy to download the manifest below.

# Download the calico.yaml manifest
root@k8s-master-1:~# curl https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/calico.yaml -O
# Install the Calico network plugin into the cluster
root@k8s-master-1:~# kubectl apply -f calico.yaml
# Run the following command; once all cluster nodes show Ready, congratulations, the K8S cluster deployment is complete.
root@k8s-master-1:~# kubectl get nodes
NAME           STATUS   ROLES           AGE      VERSION
k8s-master-1   Ready    control-plane   20m      v1.25.0
k8s-node-1     Ready    <none>          10m18s   v1.25.0
k8s-node-2     Ready    <none>          6m46s    v1.25.0
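As a quick smoke test of the new cluster, you can run a throwaway Deployment and Service; the nginx image and the NodePort type here are just illustrative choices:

# Create a test Deployment and expose it
root@k8s-master-1:~# kubectl create deployment nginx --image=nginx
root@k8s-master-1:~# kubectl expose deployment nginx --port=80 --type=NodePort
# Check that the Pod is Running and note the assigned NodePort
root@k8s-master-1:~# kubectl get pods,svc -o wide
# Clean up afterwards
root@k8s-master-1:~# kubectl delete service nginx && kubectl delete deployment nginx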

At this point, congratulations: you have finished building your K8S learning environment.

Next, study the K8S architecture and move on to more advanced topics such as Pods, controllers, namespaces, Services, Ingress, PV/PVC, network policies, and RBAC. Search for the WeChat public account "k8s技术训练营" for more K8S material.

Keep it up!

Virtual machine software downloads:

VMware

https://customerconnect.vmware.com/en/downloads/info/slug/desktop_end_user_computing/vmware_workstation_pro/17_0

Parallels

https://www.parallels.cn/products/desktop/trial/

ISO image downloads:

Ubuntu 20.04 ISO image

https://mirrors.aliyun.com/ubuntu-releases/focal/ubuntu-20.04.5-desktop-amd64.iso

CentOS 8.5.2111 ISO image

https://mirrors.aliyun.com/centos/8.5.2111/isos/x86_64/CentOS-8.5.2111-x86_64-dvd1.iso
