Kubernetes Installation
Deploying Kubernetes on Ubuntu 20.04
1. Environment setup
Memory: 2 GB or more RAM per machine (anything less leaves very little room for your applications).
CPU: 2 CPUs or more.
Network: full network connectivity between all machines in the cluster (a public or private network is fine); make sure all nodes can reach each other.
Other: each node needs a unique hostname (hostname), MAC address (ip link or ifconfig), and product_uuid (cat /sys/class/dmi/id/product_uuid).
Swap: must be disabled for the kubelet to work properly.
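A quick way to verify these requirements on each node is a few shell commands. This is just a sanity-check sketch; the product_uuid path assumes the machine exposes DMI hardware information:

```shell
# Hostname: must be unique across the cluster
uname -n
# MAC addresses: must be unique across the cluster
ip link show | awk '/link\/ether/ {print $2}'
# product_uuid: must be unique across the cluster (needs root)
sudo cat /sys/class/dmi/id/product_uuid
# Swap: this should print nothing once swap is disabled
swapon --show
```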
Firewall configuration
# Enable ufw
sudo ufw enable

# Control-plane nodes
# SSH
sudo ufw allow 22/tcp
# Kubernetes API server
sudo ufw allow 6443/tcp
# etcd server client API
sudo ufw allow 2379:2380/tcp
# Kubelet API
sudo ufw allow 10250/tcp
# kube-scheduler
sudo ufw allow 10259/tcp
# kube-controller-manager
sudo ufw allow 10257/tcp

# Worker nodes
# SSH
sudo ufw allow 22/tcp
# Kubelet API
sudo ufw allow 10250/tcp
# NodePort Services
sudo ufw allow 30000:32767/tcp
Edit the hosts file (all nodes)
sudo tee -a /etc/hosts<<EOF
172.17.0.2 k8s.demo.com
172.17.0.2 k8s-master-a k8s-master-a.demo.com
172.17.0.3 k8s-node-01 k8s-node-01.demo.com
172.17.0.4 k8s-node-02 k8s-node-02.demo.com
EOF
Disable SELinux (all nodes). Note: stock Ubuntu does not enable SELinux, so these commands only apply if it has been installed; on a default Ubuntu system they may simply report that SELinux is absent, which is fine.
sudo setenforce 0
sudo sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
sudo sed -i 's#SELINUX=permissive#SELINUX=disabled#g' /etc/selinux/config
Disable swap (all nodes); the kubelet refuses to run with swap enabled.
sudo sed -i 's/^\(.*swap.*\)$/#\1/g' /etc/fstab
sudo swapoff -a
To make sure the changes take effect, update and reboot:
sudo apt update
sudo apt -y upgrade && sudo systemctl reboot
2. Install Kubernetes (all nodes)
Install kubelet, kubeadm, and kubectl. Two package sources are shown below; run only ONE of them.
# Option 1: the official Google repo (needs a VPN in mainland China; otherwise use the Aliyun mirror below)
sudo apt update
sudo apt -y install curl apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt -y install vim git curl wget kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

# Option 2: the Aliyun mirror (recommended in mainland China)
sudo apt update
sudo apt -y install curl apt-transport-https
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt -y install vim git wget
sudo apt -y install kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
Check that the installation succeeded:
kubectl version --client && kubeadm version
Output similar to the following indicates a correct install:
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.0", GitCommit:"a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2", GitTreeState:"clean", BuildDate:"2022-08-23T17:44:59Z", GoVersion:"go1.19", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
kubeadm version: &version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.0", GitCommit:"a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2", GitTreeState:"clean", BuildDate:"2022-08-23T17:43:25Z", GoVersion:"go1.19", Compiler:"gc", Platform:"linux/amd64"}
Load kernel modules and configure sysctl
# Enable kernel modules
sudo modprobe overlay
sudo modprobe br_netfilter
# Add some settings to sysctl
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Reload sysctl
sudo sysctl --system
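After `sysctl --system` you can read the values back from /proc to confirm they took effect. This is a quick check, not part of the original guide; note that the bridge entries only exist once br_netfilter is loaded:

```shell
# Should print 1
cat /proc/sys/net/ipv4/ip_forward
# Should print 1 each (only present after `modprobe br_netfilter`)
cat /proc/sys/net/bridge/bridge-nf-call-iptables
cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
```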
Install a container runtime
To run containers in Pods, Kubernetes uses a container runtime. Supported runtimes include:
- Docker (deprecated)
- CRI-O
- Containerd
- others
For guidance on choosing one, see the Kubernetes documentation on container runtimes. Install only ONE of the options below.
Docker is not recommended: dockershim was removed in Kubernetes 1.24, so releases from 1.24 onward no longer support Docker directly as a runtime.
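If you are not sure which runtime a node already has, checking for the runtime sockets is a quick way to find out. The paths below are the common defaults and are only an assumption; adjust them for your setup:

```shell
# Print any container runtime sockets found at their default paths
for s in /var/run/crio/crio.sock /var/run/docker.sock /run/containerd/containerd.sock; do
  if [ -S "$s" ]; then
    echo "found runtime socket: $s"
  fi
done
```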
Option A: Docker
# Add repo and install packages
sudo apt update
sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt install -y containerd.io docker-ce docker-ce-cli
# Create required directories
sudo mkdir -p /etc/systemd/system/docker.service.d
# Create daemon json config file
sudo tee /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF
# Start and enable Services
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker
# Ensure you load modules
sudo modprobe overlay
sudo modprobe br_netfilter
# Set up required sysctl params
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Reload sysctl
sudo sysctl --system
Option B: CRI-O
# Add the CRI-O repo (as root)
sudo su -
OS="xUbuntu_20.04"
VERSION=1.22
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | apt-key add -
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add -
# Install CRI-O
sudo apt update
sudo apt install -y cri-o cri-o-runc
# Update the CRI-O CIDR subnet to match the pod network used later (10.244.0.0/16);
# this file exists only after CRI-O is installed
sudo sed -i 's/10.85.0.0/10.244.0.0/g' /etc/cni/net.d/100-crio-bridge.conf
# Start and enable Service
sudo systemctl daemon-reload
sudo systemctl restart crio
sudo systemctl enable crio
sudo systemctl status crio
Option C: containerd
# Configure persistent loading of modules (you may need to create /etc/modules-load.d manually)
sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
# Load at runtime
sudo modprobe overlay
sudo modprobe br_netfilter
# Ensure sysctl params are set
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Reload configs
sudo sysctl --system
# Install required packages
sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
# Add Docker repo
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# Install containerd
sudo apt update
sudo apt install -y containerd.io
# Configure containerd and start service
sudo su -
mkdir -p /etc/containerd
containerd config default>/etc/containerd/config.toml
# Change image repository
sed -i 's/k8s.gcr.io/registry.aliyuncs.com\/google_containers/g' /etc/containerd/config.toml
# restart containerd
systemctl restart containerd
systemctl enable containerd
systemctl status containerd
Enable systemd cgroup support: in /etc/containerd/config.toml, set SystemdCgroup = true under the runc options, then restart containerd:
...
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
...
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
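The same edit can be scripted. This sed assumes the stock file generated by `containerd config default`, in which the flag starts out as `SystemdCgroup = false`:

```shell
# Flip the runc SystemdCgroup flag and restart containerd
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
```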
Initialize the control-plane node (run the following commands on the master only)
First confirm that the br_netfilter module is loaded:
$ lsmod | grep br_netfilter
br_netfilter 28672 0
bridge 176128 1 br_netfilter
sudo systemctl enable kubelet
Bootstrap the cluster

Option 1: using an IP address
sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --image-repository=registry.aliyuncs.com/google_containers
# If you need to select the runtime explicitly, add ONE of these flags to the command above:
#   --cri-socket /var/run/crio/crio.sock                  # if you use CRI-O
#   --cri-socket /var/run/docker.sock                     # if you use Docker
#   --cri-socket unix:///run/containerd/containerd.sock   # if you use containerd
Option 2: using a DNS name
Set a cluster endpoint DNS name, or add a record to the /etc/hosts file:
$ sudo cat /etc/hosts | grep k8s.demo.com
172.17.0.2 k8s.demo.com
Create the cluster:
sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --upload-certs \
  --control-plane-endpoint=k8s.demo.com \
  --image-repository=registry.aliyuncs.com/google_containers \
  --v=5
# If you need to select the runtime explicitly, add ONE of these flags to the command above:
#   --cri-socket /var/run/crio/crio.sock                  # if you use CRI-O
#   --cri-socket /var/run/docker.sock                     # if you use Docker
#   --cri-socket unix:///run/containerd/containerd.sock   # if you use containerd
Note: if 10.244.0.0/16 is already in use in your network, you must pick a different pod network CIDR and substitute it in the commands above.
PS: useful kubeadm init options:
--control-plane-endpoint : a DNS name (recommended) or an IP; sets the shared endpoint for all control-plane nodes
--pod-network-cidr : sets the Pod network CIDR
--cri-socket : sets the runtime socket path when more than one container runtime is installed
--apiserver-advertise-address : sets the advertise address for this particular control-plane node's API server
On success, kubeadm prints output similar to the following:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join k8s.demo.com:6443 --token bdqsdw.2uf50yfvo3uwy93w \
--discovery-token-ca-cert-hash sha256:2a6f431cc99860ff6e15519e08e62f01b9b0cb051380031582bd5cc22efbc084 \
--control-plane --certificate-key 8e568ec9b181aace22496d3a1961d965179b65fb58567867fc498cfefbd0c4f0
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join k8s.demo.com:6443 --token bdqsdw.2uf50yfvo3uwy93w \
--discovery-token-ca-cert-hash sha256:2a6f431cc99860ff6e15519e08e62f01b9b0cb051380031582bd5cc22efbc084
Configure kubectl for your user:
mkdir -p $HOME/.kube
sudo cp -f /etc/kubernetes/admin.conf $HOME/.kube/config  # admin.conf only exists if the previous step succeeded
sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl cluster-info
Add additional control-plane nodes (the exact command is in the kubeadm init output above); run this only on the new master nodes:
kubeadm join k8s.demo.com:6443 --token bdqsdw.2uf50yfvo3uwy93w \
--discovery-token-ca-cert-hash sha256:2a6f431cc99860ff6e15519e08e62f01b9b0cb051380031582bd5cc22efbc084 \
--control-plane --certificate-key 8e568ec9b181aace22496d3a1961d965179b65fb58567867fc498cfefbd0c4f0
mkdir -p $HOME/.kube
sudo cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Install the network plugin (run on one control-plane node)
kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
wget https://docs.projectcalico.org/manifests/custom-resources.yaml
# Make the Calico CIDR match the --pod-network-cidr used above
sed -i 's/192.168.0.0/10.244.0.0/g' custom-resources.yaml
kubectl create -f custom-resources.yaml
Open a few Calico ports on all nodes (BGP, Typha, and VXLAN respectively):
sudo ufw allow 179/tcp
sudo ufw allow 5473/tcp
sudo ufw allow 4789/udp
Add worker nodes (the exact command is in the kubeadm init output above); run this on every worker node:
kubeadm join k8s.demo.com:6443 --token bdqsdw.2uf50yfvo3uwy93w \
--discovery-token-ca-cert-hash sha256:2a6f431cc99860ff6e15519e08e62f01b9b0cb051380031582bd5cc22efbc084
Check node status
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master-a Ready control-plane,master 74m v1.23.2
k8s-node-01 Ready <none> 22m v1.23.2
k8s-node-02 Ready <none> 14m v1.23.2