飞天使 - k8s Knowledge Point 29: Installing Kubernetes 1.28.0
Version selection
Before v1.24, Kubernetes could still talk to Docker directly, which was convenient.
From v1.24 onward, Kubernetes no longer supports Docker out of the box; to keep using it you must install cri-dockerd as the shim that lets Kubernetes talk to the Docker engine.
Initialize the servers (replace the IPs below with your own)
MD5 checksums of the kernel upgrade packages. I have verified them myself; as long as your downloads match these values, it is safe to upgrade.
1ea91ea41eedb35c5da12fe7030f4347 kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
01a6da596167ec2bc3122a5f30a8f627 kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
echo "172.17.200.40 k8s-master01" | sudo tee -a /etc/hosts
echo "172.17.200.41 k8s-master02" | sudo tee -a /etc/hosts
echo "172.17.200.42 k8s-master03" | sudo tee -a /etc/hosts
echo "172.17.200.43 k8s-node01" | sudo tee -a /etc/hosts
echo "172.17.200.44 k8s-node02" | sudo tee -a /etc/hosts
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
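Optionally confirm the new repositories are visible before installing anything:
yum clean all && yum makecache
yum repolist | grep -Ei 'kubernetes|docker-ce'   # both repo ids should appear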
yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y
systemctl disable --now firewalld
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
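Verify that SELinux is now off and will stay off after a reboot:
getenforce                           # Permissive now, Disabled after the reboot
grep ^SELINUX= /etc/selinux/config   # expect SELINUX=disabled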
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
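Confirm swap is gone and will not come back at boot:
free -m | grep -i swap   # the Swap line should show 0 total
cat /proc/swaps          # should list no devices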
yum install ntpdate -y
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
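ntpdate is installed above but never run; a one-off sync is worth doing (time1.aliyun.com is just an example NTP server, pick any one reachable from your network):
ntpdate time1.aliyun.com
hwclock -w   # optionally persist the synced time to the hardware clock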
echo -e "* soft nofile 65536\n* hard nofile 131072\n* soft nproc 65535\n* hard nproc 655350\n* soft memlock unlimited\n* hard memlock unlimited" | sudo tee -a /etc/security/limits.conf
cd /root
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
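Before installing, verify the downloads against the checksums listed earlier:
cd /root && md5sum kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm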
cd /root && yum localinstall -y kernel-ml*
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
grubby --default-kernel
reboot
After all of the commands above have run, reboot the machine.
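After the reboot, confirm the new kernel is active:
uname -r   # expect 4.19.12-1.el7.elrepo.x86_64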
For the haproxy installation, see my earlier post:
https://blog.csdn.net/startfefesfe/article/details/135102854
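The linked post has the full haproxy setup; as a minimal sketch, the API-server frontend/backend would look roughly like this, assuming 172.17.200.37 (the --control-plane-endpoint used later) is the load balancer's address and the three masters listen on 6443:
cat >> /etc/haproxy/haproxy.cfg << 'EOF'
frontend k8s-apiserver
    bind 172.17.200.37:6443
    mode tcp
    option tcplog
    default_backend k8s-masters

backend k8s-masters
    mode tcp
    balance roundrobin
    server k8s-master01 172.17.200.40:6443 check
    server k8s-master02 172.17.200.41:6443 check
    server k8s-master03 172.17.200.42:6443 check
EOF
systemctl restart haproxy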
Tune kernel parameters and install Docker
yum install ipset ipvsadm -y
mkdir /etc/sysconfig/modules -p
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
vm.overcommit_memory = 0
EOF
# The net.bridge.* keys require the br_netfilter module, so load it before applying
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
cat > /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
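With overlay and br_netfilter loaded, double-check that the bridge sysctls are really in effect:
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward   # both should print 1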
yum install docker-ce -y
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl enable --now docker
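A quick check that Docker came up with the systemd cgroup driver configured above:
docker info 2>/dev/null | grep -i 'cgroup driver'   # expect: Cgroup Driver: systemd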
Install cri-dockerd
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.4/cri-dockerd-0.3.4.amd64.tgz
tar xf cri-dockerd-0.3.4.amd64.tgz
mv cri-dockerd/cri-dockerd /usr/local/bin/
# quote EOF so that systemd's $MAINPID is written literally instead of being expanded by the shell
cat > /usr/lib/systemd/system/cri-dockerd.service << 'EOF'
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/local/bin/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl enable cri-dockerd --now
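Confirm cri-dockerd is running and exposing the socket that kubeadm will be pointed at:
systemctl is-active cri-dockerd
ls -l /var/run/cri-dockerd.sock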
Install the cluster tools
yum install -y kubelet-1.28.0 kubeadm-1.28.0 kubectl-1.28.0
# check that the installed version is correct
kubeadm version
systemctl enable kubelet && systemctl start kubelet
Pull the images and initialize the cluster
kubeadm config images pull \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.28.0 \
--cri-socket=unix:///var/run/cri-dockerd.sock
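Once the pull finishes, the images should be visible locally (names carry the aliyun mirror prefix used above):
docker images | grep registry.aliyuncs.com/google_containers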
kubeadm init \
--apiserver-advertise-address="172.17.200.40" \
--control-plane-endpoint="172.17.200.37" \
--apiserver-bind-port=6443 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.28.0 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16 \
--cri-socket=unix:///var/run/cri-dockerd.sock \
--upload-certs \
--service-dns-domain=fly.local
If initialization fails, reset and try again:
kubeadm reset -f --cri-socket=unix:///var/run/cri-dockerd.sock ; ipvsadm --clear ; rm -rf ~/.kube
or rebuild the machine and start over.
When the init succeeds, the output looks like this:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join 172.17.200.37:6443 --token 3b222o.irju12gtumfa8qee \
--discovery-token-ca-cert-hash sha256:aff0fa14842f3068d19c0883ddda4d668ba6a179d77sdfdd09258636f69bd518 \
--control-plane --certificate-key 7db7d5a7ef71c531a4f187sdfdfa81ssd912c38a3a97856f04cb594a13
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.17.200.37:6443 --token 3b222o.irju12gtumfa8qee \
--discovery-token-ca-cert-hash sha256:aff0fa14842f3068d19c0883ddda4d668ba6a179d77sdfdd09258636f69bd518
Remember to append --cri-socket=unix:///var/run/cri-dockerd.sock to the join commands yourself.
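For example, a worker join with the flag appended (substitute the token and hash from your own init output):
kubeadm join 172.17.200.37:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --cri-socket=unix:///var/run/cri-dockerd.sock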
At this point CoreDNS still does not work because no pod network (CNI) has been installed; use flannel:
wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
The pod CIDR in kube-flannel.yml is already 10.244.0.0/16, matching the --pod-network-cidr used above,
so no changes are needed; apply it directly:
kubectl apply -f kube-flannel.yml
Result
[root@gcp-hongkong-k8s-master01-test install_k8s]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-724kq 1/1 Running 0 19s
kube-flannel kube-flannel-ds-cddsz 1/1 Running 0 19s
kube-flannel kube-flannel-ds-f72jb 1/1 Running 0 19s
kube-flannel kube-flannel-ds-g8pft 1/1 Running 0 19s
kube-flannel kube-flannel-ds-pb27w 1/1 Running 0 19s
kube-system coredns-66f779496c-59khw 1/1 Running 0 10m
kube-system coredns-66f779496c-szs7l 1/1 Running 0 10m
kube-system etcd-gcp-hongkong-k8s-master01-test 1/1 Running 0 10m
kube-system etcd-gcp-hongkong-k8s-master02-test 1/1 Running 0 7m12s
kube-system etcd-gcp-hongkong-k8s-master03-test 1/1 Running 0 8m13s
kube-system kube-apiserver-gcp-hongkong-k8s-master01-test 1/1 Running 0 10m
kube-system kube-apiserver-gcp-hongkong-k8s-master02-test 1/1 Running 1 (7m28s ago) 7m15s
kube-system kube-apiserver-gcp-hongkong-k8s-master03-test 1/1 Running 0 8m12s
kube-system kube-controller-manager-gcp-hongkong-k8s-master01-test 1/1 Running 1 (8m1s ago) 10m
kube-system kube-controller-manager-gcp-hongkong-k8s-master02-test 1/1 Running 0 6m13s
kube-system kube-controller-manager-gcp-hongkong-k8s-master03-test 1/1 Running 0 7m59s
kube-system kube-proxy-4jd4m 1/1 Running 0 8m13s
kube-system kube-proxy-5csq9 1/1 Running 0 10m
kube-system kube-proxy-7ttq5 1/1 Running 0 7m23s
kube-system kube-proxy-fk6zt 1/1 Running 0 2m54s
kube-system kube-proxy-pf8vk 1/1 Running 0 2m57s
kube-system kube-scheduler-gcp-hongkong-k8s-master01-test 1/1 Running 1 (7m58s ago) 10m
kube-system kube-scheduler-gcp-hongkong-k8s-master02-test 1/1 Running 0 6m12s
kube-system kube-scheduler-gcp-hongkong-k8s-master03-test 1/1 Running 0 8m11s
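With flannel running on every node, the nodes themselves should also report Ready:
kubectl get nodes -o wide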