Deploying K8s with kubeadm
Modify the hostname
vim /etc/hostname
Modify the hosts file
Edit /etc/hosts according to the hostname plan:
cat >>/etc/hosts<<EOF
192.168.2.90 master
192.168.2.91 node1
192.168.2.92 node2
EOF
Disable SELinux
vim /etc/selinux/config
SELINUX=disabled
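For reference, the same change can be made non-interactively; setenforce only affects the running system, while the sed edit makes it persistent (a sketch assuming a CentOS 7 style /etc/selinux/config):
setenforce 0   # switch to permissive mode immediately, no reboot required
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config   # persist across reboots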
Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
Disable the swap partition
swapoff -a
vim /etc/fstab
# /dev/mapper/centos-swap swap swap defaults 0 0
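If you prefer not to edit /etc/fstab by hand, a rough equivalent is to comment the swap line out with sed and then confirm that swap is gone:
sed -ri 's/.*swap.*/#&/' /etc/fstab   # comment out any line mentioning swap
free -m                               # the Swap row should now show 0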
Add the Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Configure kernel bridge/forwarding parameters by creating the file /etc/sysctl.d/k8s.conf:
cat >>/etc/sysctl.d/k8s.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
Run the following commands to apply the kernel parameter changes:
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
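modprobe only loads br_netfilter for the current boot; to have it loaded automatically after a reboot, one option (not part of the original steps) is a modules-load.d entry:
cat > /etc/modules-load.d/br_netfilter.conf <<EOF
br_netfilter
EOF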
Install Docker with yum
yum -y install docker
systemctl start docker
systemctl enable docker
systemctl status docker
Configure domestic (China) Docker registry mirrors
vim /etc/docker/daemon.json
{
"registry-mirrors": [
"http://hub-mirror.c.163.com",
"https://docker.mirrors.ustc.edu.cn",
"https://registry.docker-cn.com"
]
}
systemctl restart docker
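To confirm Docker picked up the mirror configuration, the mirrors should show up in docker info (output wording may vary slightly by Docker version):
docker info | grep -A 3 "Registry Mirrors"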
Install the Kubernetes components
yum -y install kubelet kubeadm kubectl ipvsadm ipset
Enable the kubelet service to start on boot
systemctl enable kubelet
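Note that yum installs the newest packages in the repository by default; since kubeadm init below targets Kubernetes 1.23.6, it is safer to pin the versions explicitly, assuming the 1.23.6 packages are available in the mirror:
yum -y install kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6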
Prerequisites for enabling IPVS in kube-proxy
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
Confirm that the default policy of the FORWARD chain in the iptables filter table is ACCEPT:
iptables -nvL
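If the FORWARD chain policy turns out to be DROP (some Docker versions set it that way), it can be switched back with iptables; note this change does not survive a reboot on its own:
iptables -P FORWARD ACCEPT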
Perform all of the above steps on every node.
Deploy the master node
kubeadm init --kubernetes-version=1.23.6 \
--apiserver-advertise-address=192.168.2.90 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16
Based on the output of kubeadm init, run the printed commands on the master and on each node respectively to join the nodes to the cluster.
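For reference, the master-side commands printed by kubeadm init normally look like this and make kubectl usable for the current user; the node-side part is a kubeadm join line containing a cluster-specific token and CA hash:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config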
Deploy the Calico network plugin
curl https://projectcalico.docs.tigera.io/manifests/calico.yaml -O
Edit calico.yaml
Change the pod network CIDR to match the one set when initializing the master (see the snippet below).
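In the Calico manifest the relevant setting is the CALICO_IPV4POOL_CIDR environment variable of the calico-node container, which usually ships commented out; uncommented and matched to the kubeadm init value it looks roughly like this:
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"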
Apply the manifest
kubectl apply -f calico.yaml
Once the join command has been run on each node, the deployment is complete.
The following steps are only needed later, when adding a new master or node.
Create a token
kubeadm token create --print-join-command
List the created tokens
kubeadm token list
Add a new master node: generate the certificates
kubeadm init phase upload-certs --upload-certs
Append the following parameters to the newly generated join command, substituting the newly generated certificate key:
--control-plane --certificate-key a615c6fb67eff694085de21480c462de2bc65b545884957ae0d37b109000e281
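Put together, a control-plane join command would look roughly like the following; <token> and <hash> are placeholders for the values printed by kubeadm token create --print-join-command:
kubeadm join 192.168.2.90:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key a615c6fb67eff694085de21480c462de2bc65b545884957ae0d37b109000e281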
Add a node
Run the join command generated from the token above on the node server.
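A worker join command has the same shape without the control-plane flags; again <token> and <hash> are placeholders for the values from the --print-join-command output:
kubeadm join 192.168.2.90:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>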
Additional configuration
Change the NodePort port range
vim /etc/kubernetes/manifests/kube-apiserver.yaml
- --service-node-port-range=1-65535
The change takes effect automatically after saving; re-creating the kube-apiserver container takes roughly ten-odd seconds.
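One way to confirm the new flag is active (a sketch; the static pod normally carries the component=kube-apiserver label) is to inspect the running kube-apiserver pod:
kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep service-node-port-range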
Enable IPVS mode in kube-proxy
kubectl edit configmap kube-proxy -n kube-system
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  strictARP: false
  syncPeriod: 0s
  tcpFinTimeout: 0s
  tcpTimeout: 0s
  udpTimeout: 0s
kind: KubeProxyConfiguration
metricsBindAddress: ""
mode: "ipvs"          # change this line from "" to "ipvs"
nodePortAddresses: null
Delete all kube-proxy pods (the DaemonSet re-creates them, and the new pods pick up the updated config):
[root@k8s-master ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6d56c8448f-bl6ds 1/1 Running 0 78m
coredns-6d56c8448f-g2scb 1/1 Running 0 78m
etcd-k8s-master 1/1 Running 1 78m
kube-apiserver-k8s-master 1/1 Running 1 78m
kube-controller-manager-k8s-master 1/1 Running 1 78m
kube-flannel-ds-5wwvj 1/1 Running 0 76m
kube-flannel-ds-9hcqz 1/1 Running 0 77m
kube-flannel-ds-ct6jr 1/1 Running 1 76m
kube-proxy-5ntj4 1/1 Running 0 76m
kube-proxy-82dk4 1/1 Running 0 78m
kube-proxy-s9jrw 1/1 Running 0 76m
kube-scheduler-k8s-master 1/1 Running 1 78m
[root@k8s-master ~]# kubectl delete pod kube-proxy-5ntj4 kube-proxy-82dk4 kube-proxy-s9jrw -n kube-system
pod "kube-proxy-5ntj4" deleted
pod "kube-proxy-82dk4" deleted
pod "kube-proxy-s9jrw" deleted
Verification: if the kube-proxy log contains "Using ipvs Proxier", IPVS mode is active.
[root@k8s-master ~]# kubectl logs kube-proxy-c2mxx -n kube-system
I0907 04:23:26.102780 1 node.go:136] Successfully retrieved node IP: 10.3.104.56
I0907 04:23:26.102846 1 server_others.go:111] kube-proxy node IP is an IPv4 address (10.3.104.56), assume IPv4 operation
I0907 04:23:26.133916 1 server_others.go:259] Using ipvs Proxier.
E0907 04:23:26.134077 1 proxier.go:381] can't set sysctl net/ipv4/vs/conn_reuse_mode, kernel version must be at least 4.1
W0907 04:23:26.134167 1 proxier.go:434] IPVS scheduler not specified, use rr by default
I0907 04:23:26.134396 1 server.go:650] Version: v1.19.0
I0907 04:23:26.134922 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0907 04:23:26.135295 1 config.go:224] Starting endpoint slice config controller
I0907 04:23:26.135324 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0907 04:23:26.135368 1 config.go:315] Starting service config controller
I0907 04:23:26.135373 1 shared_informer.go:240] Waiting for caches to sync for service config
I0907 04:23:26.235476 1 shared_informer.go:247] Caches are synced for service config
I0907 04:23:26.235488 1 shared_informer.go:247] Caches are synced for endpoint slice config
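Since ipvsadm was installed earlier, the IPVS rules that kube-proxy programs can also be inspected directly on any node:
ipvsadm -Ln   # -L list services and real servers, -n numeric output (no name resolution)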