内网服务器加入k8s集群中——部署k8s集群
This guide uses kubeadm for a quick deployment. If you deploy from source packages instead, pay special attention to multi-NIC configuration.
I. Environment preparation
1 Disable SELinux on all hosts
setenforce 0
vi /etc/selinux/config    # change SELINUX=enforcing to SELINUX=disabled
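An optional sanity check, not part of the original steps:
getenforce    # should print Permissive after setenforce 0, or Disabled after a reboot with the new config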
2 Disable swap on all hosts
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
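A quick way to confirm swap is really off (optional check):
free -m    # the Swap line should show 0 total after swapoff -a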
3 Pass bridged IPv4 traffic to iptables chains on all hosts
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
EOF
sysctl --system    # apply the settings
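If the two bridge settings are reported as unknown keys, the br_netfilter kernel module is probably not loaded yet; loading it first usually resolves this (a sketch, not part of the original steps):
modprobe br_netfilter
lsmod | grep br_netfilter    # confirm the module is loaded
sysctl --system              # re-apply the settings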
II. Install Docker, kubelet, kubeadm, and kubectl on all hosts
cd /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.1.ce-3.el7
systemctl enable docker && systemctl start docker
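The kubelet and Docker should use the same cgroup driver; a quick check (the kubelet flags shown later in this article assume cgroupfs, Docker's default here):
docker info | grep -i 'cgroup driver'    # expected: Cgroup Driver: cgroupfs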
## Configure the Aliyun Kubernetes yum repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
systemctl daemon-reload
systemctl restart docker
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
systemctl enable kubelet
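An optional sanity check that the pinned versions were installed (not in the original article):
kubeadm version -o short    # should print v1.18.0
kubelet --version           # should print Kubernetes v1.18.0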
III. Initialize the master
kubeadm init \
--apiserver-advertise-address=192.168.1.50 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.18.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--v=6
Then, as prompted by kubeadm init, run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
IV. Join the worker nodes to the cluster. Before running the command, make sure each node can ping 192.168.1.50.
kubeadm join 192.168.1.50:6443 --token xxxxx --node-name node1 \
--discovery-token-ca-cert-hash sha256:xxxxxx
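If the original token has expired (kubeadm tokens are valid for 24 hours by default), a fresh join command can be generated on the master (standard kubeadm usage, not shown in the original text):
kubeadm token create --print-join-command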
Check the cluster node status on the master:
kubectl get nodes -o wide
At this point the nodes are registered in etcd with the IP of their real physical NIC; we need to change it to the virtual-LAN IP.
Edit /var/lib/kubelet/kubeadm-flags.env on node1 and node2, add the --node-ip parameter, then restart kubelet.
vi /var/lib/kubelet/kubeadm-flags.env
########
KUBELET_KUBEADM_ARGS="--cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.2 --node-ip=192.168.18.10"
########
systemctl daemon-reload && systemctl restart kubelet
Wait about 30 seconds, then verify that the change took effect:
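For example, on the master:
kubectl get nodes -o wide    # the INTERNAL-IP column should now show the virtual-LAN IP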
V. Deploy the CNI network plugin
If the hosts cannot pull images from the internet directly, upload the flannel image archive and load it on every host:
docker load < flanneld-v0.12.0-amd64.docker
On the master node:
kubectl apply -f ./kube-flannel.yaml
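On multi-homed hosts flannel may bind to the wrong NIC by default. If that happens, the flanneld container args in kube-flannel.yaml can pin the interface; a sketch of the relevant fragment for v0.12.0 (the interface name eth1 is only an assumed placeholder):
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth1    # assumption: replace with the NIC attached to the virtual LAN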
[root@master ~]# kubectl get all -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/coredns-7ff77c879f-8rvzc 1/1 Running 2 45h 10.244.0.6 master <none> <none>
pod/coredns-7ff77c879f-ffqg2 1/1 Running 2 45h 10.244.0.7 master <none> <none>
pod/etcd-master 1/1 Running 2 45h 192.168.1.50 master <none> <none>
pod/kube-apiserver-master 1/1 Running 2 39h 192.168.1.50 master <none> <none>
pod/kube-controller-manager-master 1/1 Running 3 45h 192.168.1.50 master <none> <none>
pod/kube-flannel-ds-amd64-pqf67 1/1 Running 5 17h 192.168.101.43 node2 <none> <none>
pod/kube-flannel-ds-amd64-vz7d9 1/1 Running 2 17h 192.168.18.10 node1 <none> <none>
pod/kube-flannel-ds-amd64-xr2pv 1/1 Running 2 17h 192.168.1.50 master <none> <none>
pod/kube-proxy-b7p8d 1/1 Running 8 40h 192.168.18.10 node1 <none> <none>
pod/kube-proxy-dbdg8 1/1 Running 3 17h 192.168.101.43 node2 <none> <none>
pod/kube-proxy-kqktm 1/1 Running 2 45h 192.168.1.50 master <none> <none>
pod/kube-scheduler-master 1/1 Running 3 45h 192.168.1.50 master <none> <none>
At this point, the cluster has been deployed successfully.
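As a final smoke test, a throwaway nginx deployment can confirm scheduling and service networking work across nodes (a minimal sketch; the deployment name and NodePort service are examples, not part of the original setup):
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods,svc -o wide    # then curl any node IP on the assigned NodePort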