A Detailed Kubernetes 1.25 Installation Tutorial
1. Environment overview
| Host | IP address | Role |
|---|---|---|
| k8s-master | 192.168.3.104 | Control-plane node |
| k8s-node1 | 192.168.3.105 | Worker node |
| k8s-node2 | 192.168.3.106 | Worker node |
2. Preparation (all nodes)
Set the hostname on each host (run the matching command on its own machine):
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2
Add local name resolution on all three hosts:
cat >> /etc/hosts << 'EOF'
192.168.3.104 k8s-master
192.168.3.105 k8s-node1
192.168.3.106 k8s-node2
EOF
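To confirm the entries took effect, each name should resolve on every node (an optional quick check):

```shell
# Should print the three LAN addresses configured above
getent hosts k8s-master k8s-node1 k8s-node2
```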
Set up time synchronization:
yum install chrony -y
systemctl enable --now chronyd
Stop and permanently disable the firewall:
systemctl stop firewalld && systemctl disable firewalld
# Check the firewall status
systemctl status firewalld
Disable SELinux:
setenforce 0
sed -i '/^SELINUX=/c SELINUX=disabled' /etc/selinux/config
Configure kernel network parameters:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Apply the settings
sysctl --system
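Note: the two bridge-nf-call settings only take effect when the br_netfilter kernel module is loaded, which a fresh CentOS 7 install may not do automatically. A hedged extra step:

```shell
# Load br_netfilter now, and have it load on every boot
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/k8s.conf
# Re-apply the sysctl settings now that the module is present
sysctl --system
```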
Disable swap:
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
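To verify swap is really off (the kubelet refuses to start with swap enabled unless configured otherwise):

```shell
# The Swap line should read 0B, and swapon should print nothing
free -h | grep -i swap
swapon --show
```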
3. Install Docker (all nodes)
Add a China-local package mirror:
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
List the available versions:
yum list docker-ce --showduplicates
Install version 20.10.17:
yum install docker-ce-20.10.17 -y
Configure China-local registry mirrors and switch the cgroup driver to systemd:
mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": [
"https://docker.mirrors.ustc.edu.cn",
"https://hub-mirror.c.163.com",
"https://reg-mirror.qiniu.com",
"https://registry.docker-cn.com"
],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
Start Docker:
systemctl enable --now docker
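Optionally confirm the daemon picked up the configuration; kubeadm's preflight checks expect Docker's cgroup driver to match the kubelet's (systemd):

```shell
# Should print "systemd" if daemon.json was applied
docker info --format '{{.CgroupDriver}}'
systemctl is-active docker
```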
4. Install the container runtime interface cri-dockerd (all nodes)
Download:
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.1/cri-dockerd-0.3.1-3.el7.x86_64.rpm
#If the download above fails, fetch the package directly from:
https://github.com/Mirantis/cri-dockerd/releases/tag/v0.3.1
#and choose cri-dockerd-0.3.1-3.el7.x86_64.rpm
Install:
rpm -ivh cri-dockerd-0.3.1-3.el7.x86_64.rpm
Configure cri-dockerd to use a China-local pause image:
sed -i 's#^ExecStart.*#& --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7#' /usr/lib/systemd/system/cri-docker.service
Start the service:
systemctl enable --now cri-docker
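The kubelet and kubeadm will talk to Docker through the socket cri-dockerd creates; a quick check that it is up:

```shell
systemctl is-active cri-docker
# The CRI socket referenced later by --cri-socket
ls -l /run/cri-dockerd.sock
```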
5. Install the Kubernetes components (all nodes)
Add a China-local package repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install the Kubernetes 1.25.0 components.
If you hit the GPG error shown below, add the --nogpgcheck option to skip the public key check:
yum install kubectl-1.25.0 kubelet-1.25.0 kubeadm-1.25.0 -y --nogpgcheck
Retrieving key from http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Importing GPG key 0x13EDEF05:
 Userid : "Rapture Automatic Signing Key (cloud-rapture-signing-key-2022-03-07-08_01_01.pub)"
 Fingerprint: a362 b822 f6de dc65 2817 ea46 b53d c80d 13ed ef05
 From : http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Public key for 6ef583567f0b35dc39ef9f09efb02dbb75054f9fbf7189969a118b5051fa5a71-kubelet-1.25.0-0.x86_64.rpm is not installed
Failing package is: kubelet-1.25.0-0.x86_64
GPG keys are configured as: http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Check the installed versions:
kubelet --version
kubeadm version
6. Initialize the Kubernetes cluster (control-plane node)
Run the initialization on the k8s-master control-plane node. When it completes, it prints two join commands: the first adds additional control-plane nodes, the second adds worker nodes.
kubeadm init --control-plane-endpoint=k8s-master --kubernetes-version v1.25.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.1.0.0/16 --image-repository registry.aliyuncs.com/google_containers --cri-socket unix:///run/cri-dockerd.sock --upload-certs --token-ttl=0 --v=5
Your Kubernetes control-plane has initialized successfully!
……(output omitted)
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join k8s-master:6443 --token x4p7jc.cf9dua744ixrnzrp \
--discovery-token-ca-cert-hash sha256:747b9c200e3fa77c70e93ccc4e148542281989e2838202c792cd7364cf353947 \
--control-plane --certificate-key 59d66a6fb61c4359e3fe40d6b3c9744e59c09ae064b5aeebebb74069ec1a2250
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join k8s-master:6443 --token ogbk3w.cohsia40h8n660y3 \
--discovery-token-ca-cert-hash sha256:8d19edea0951391599ab0867db134208829f6870c90ae033f428656b4dd5af44
As the output instructs, run the following so kubectl can talk to the cluster:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
7. Join the worker nodes to the cluster (worker nodes)
Copy the worker join command generated during initialization on k8s-master, append the --cri-socket option to specify the container runtime interface, and run it on k8s-node1 and k8s-node2:
kubeadm join k8s-master:6443 --token ogbk3w.cohsia40h8n660y3 \
--discovery-token-ca-cert-hash sha256:8d19edea0951391599ab0867db134208829f6870c90ae033f428656b4dd5af44 --cri-socket unix:///run/cri-dockerd.sock
8. Configure the cluster network (control-plane node)
8.1 Install a network plugin
After the nodes join, they report NotReady: nodes and containers cannot yet communicate with one another. A network plugin must be installed before the cluster can operate normally.
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady control-plane 93s v1.25.0
k8s-node1 NotReady <none> 34s v1.25.0
k8s-node2 NotReady <none> 30s v1.25.0
Kubernetes supports several network plugins, including Flannel, Calico, and Canal; here we install Flannel for the pod network.
#If the download stalls, add a resolution entry to the hosts file first, then retry
#echo "199.232.68.133 raw.githubusercontent.com" >> /etc/hosts
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
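Before checking node status you can watch the Flannel pods come up (a hedged check: recent manifests create them in the kube-flannel namespace, older ones in kube-system, so search across all namespaces):

```shell
# List the flannel pods in whichever namespace the manifest used
kubectl get pods -A | grep flannel
```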
After a short wait, check the nodes again: their status changes from NotReady to Ready, and cluster networking is now functional.
[root@localhost k8s]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane 5m19s v1.25.0
k8s-node1 Ready <none> 4m20s v1.25.0
k8s-node2 Ready <none> 4m16s v1.25.0
[root@localhost k8s]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master Ready control-plane 5m27s v1.25.0 192.168.3.104 <none> CentOS Linux 7 (Core) 3.10.0-1160.62.1.el7.x86_64 docker://20.10.17
k8s-node1 Ready <none> 4m28s v1.25.0 192.168.3.105 <none> CentOS Linux 7 (Core) 3.10.0-1160.62.1.el7.x86_64 docker://20.10.17
k8s-node2 Ready <none> 4m24s v1.25.0 192.168.3.106 <none> CentOS Linux 7 (Core) 3.10.0-1160.62.1.el7.x86_64 docker://20.10.17
8.2 Enable IPVS (optional)
By default, Kubernetes Service resources are implemented with iptables, which performs poorly at scale, so the higher-performance IPVS mode is recommended.
Install ipset and ipvsadm:
yum install ipset ipvsadm -y
Create a script that loads the kernel modules:
cat <<EOF> /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
Make it executable and run it:
chmod +x /etc/sysconfig/modules/ipvs.modules
/bin/bash /etc/sysconfig/modules/ipvs.modules
Confirm the modules are loaded:
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
Edit the kubelet configuration file and add the following parameters:
vim /etc/sysconfig/kubelet
……(existing content omitted)
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
#Alternatively, run the following command, find the mode field, change its empty value to ipvs, save and exit, then delete the kube-proxy pods; the change takes effect when they restart automatically.
kubectl edit configmap kube-proxy -n kube-system
Restart the kubelet service:
systemctl restart kubelet
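A hedged way to confirm kube-proxy is actually running in IPVS mode (the exact log wording may vary between versions); deleting the pods is the restart step mentioned above:

```shell
# Restart the kube-proxy pods so the new mode is picked up
kubectl delete pod -n kube-system -l k8s-app=kube-proxy
# Once restarted, the logs should mention the ipvs proxier
kubectl logs -n kube-system -l k8s-app=kube-proxy | grep -i ipvs
# IPVS rules for the service network should now be listed
ipvsadm -Ln
```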
9. Add the Dashboard (control-plane node)
Download the manifest, choosing the version that matches your cluster (https://github.com/kubernetes/dashboard/tags):
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
Change the Service type to NodePort and pin the NodePort (it must be in the 30000-32767 range):
vim recommended.yaml
……
spec:
  type: NodePort        # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000   # added
  selector:
    k8s-app: kubernetes-dashboard
……
Install the Dashboard:
kubectl apply -f recommended.yaml
Check the resources:
kubectl get pod,svc -n kubernetes-dashboard
Create a user, bind a role, and grant permissions
#Create the manifest
vim dashboard-admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user-cluster-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
#Apply the manifest
kubectl apply -f dashboard-admin.yaml
Get the user's login token:
kubectl -n kubernetes-dashboard create token admin-user
Log in
In a browser, open https://192.168.3.104:30000, choose Token as the sign-in method, and paste the token obtained above to log in.
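As a quick sanity check from any machine that can reach the node (using this tutorial's IP and NodePort), you can probe the Dashboard endpoint; the self-signed certificate requires -k:

```shell
# 200 means the Dashboard is serving; "connection refused" means the NodePort is not up
curl -k -s -o /dev/null -w '%{http_code}\n' https://192.168.3.104:30000/
```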