I. Prerequisites:

Both nodes run RHEL 7.3:

server1: master node. Install and start Docker. The node needs at least 1024 MB of memory; otherwise cluster initialization will fail.

server2: worker node. Install and start Docker.
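
As a rough reference, the Docker prerequisite on both nodes can be satisfied like this (a minimal sketch, assuming the docker package is available from a configured yum repository):

yum install -y docker        # install Docker from the configured repository
systemctl start docker       # start the Docker daemon
systemctl enable docker      # make Docker start automatically at boot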

II. Building and deploying the Kubernetes cluster with Docker

1. The files and installation packages to download are listed below (the RPM packages are installed in step 2; the image tar archives are loaded in step 5).

2. Install the RPM packages on server1 and server2:

kubeadm-1.12.2-0.x86_64.rpm kubectl-1.12.2-0.x86_64.rpm 
kubelet-1.12.2-0.x86_64.rpm kubernetes-cni-0.6.0-0.x86_64.rpm 
cri-tools-1.12.0-0.x86_64.rpm

[root@server1 k8s]# yum install -y kubeadm-1.12.2-0.x86_64.rpm kubectl-1.12.2-0.x86_64.rpm kubelet-1.12.2-0.x86_64.rpm kubernetes-cni-0.6.0-0.x86_64.rpm cri-tools-1.12.0-0.x86_64.rpm
[root@server1 k8s]# systemctl start kubelet.service   # start the service
[root@server1 k8s]# systemctl enable kubelet.service 

3. Disable the swap partition

server1:

[root@server1 k8s]# swapoff -a
[root@server1 k8s]# vim /etc/fstab 

server2:

[root@server2 k8s]# swapoff -a
[root@server2 k8s]# vim /etc/fstab 
[root@server2 k8s]# systemctl enable kubelet.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
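
Both vim /etc/fstab edits above serve the same purpose: comment out the swap entry so that swap stays disabled after a reboot. A minimal sketch of the changed line, assuming the default RHEL LVM layout (the exact device name may differ):

#/dev/mapper/rhel-swap   swap    swap    defaults        0 0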

4. List the images that need to be loaded

[root@server1 k8s]# kubeadm config images list
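
For kubeadm 1.12.2 the printed list looks roughly like the following (shown as a reference; the exact tags depend on the kubeadm build):

k8s.gcr.io/kube-apiserver:v1.12.2
k8s.gcr.io/kube-controller-manager:v1.12.2
k8s.gcr.io/kube-scheduler:v1.12.2
k8s.gcr.io/kube-proxy:v1.12.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.2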

5. Load the images on server1 and server2:

[root@server1 k8s]# docker load -i kube-apiserver.tar 
[root@server1 k8s]# docker load -i kube-controller-manager.tar 
[root@server1 k8s]# docker load -i kube-proxy.tar 
[root@server1 k8s]# docker load -i pause.tar 
[root@server1 k8s]# docker load -i etcd.tar 
[root@server1 k8s]# docker load -i coredns.tar 
[root@server1 k8s]# docker load -i kube-scheduler.tar 
[root@server1 k8s]# docker load -i flannel.tar 

The operations on server2 are the same:

kubeadm config images list
docker load -i kube-apiserver.tar 
docker load -i kube-controller-manager.tar 
docker load -i kube-proxy.tar 
docker load -i pause.tar 
docker load -i etcd.tar 
docker load -i coredns.tar 
docker load -i kube-scheduler.tar 
docker load -i flannel.tar 

6. Initialize the cluster

On server1:

[root@server1 k8s]# vim kube-flannel.yml 
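
The edit to kube-flannel.yml typically verifies two things: that the flannel image name matches the image loaded from flannel.tar, and that the Network field in net-conf.json matches the --pod-network-cidr passed to kubeadm init below. A sketch of the relevant snippet:

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }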

 

[root@server1 k8s]# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=172.25.60.1

If kubeadm init reports an error at this point (typically a preflight check complaining that net.bridge.bridge-nf-call-iptables is not set to 1):

Solution:

[root@server1 k8s]# sysctl -a | grep net.*iptables
net.bridge.bridge-nf-call-iptables = 0

[root@server1 k8s]# sysctl -w net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-iptables = 1
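
sysctl -w only changes the running kernel; to keep the setting across reboots it can also be written to a sysctl configuration file (the file name k8s.conf below is just an example):

echo "net.bridge.bridge-nf-call-iptables = 1" > /etc/sysctl.d/k8s.conf   # example file name
sysctl --system      # reload all sysctl configuration files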

If information like the following is displayed, the cluster was initialized successfully:
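
Roughly, the tail of that output looks like this (the actual token and hash will differ; they are reused in steps 7 and 9):

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
...
  kubeadm join 172.25.60.1:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>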

7. Create the k8s user and set up its environment

[root@server1 k8s]# useradd k8s
[root@server1 k8s]# chmod u+w /etc/sudoers
[root@server1 k8s]# vim /etc/sudoers
k8s     ALL=(ALL)       NOPASSWD: ALL      # add this line (around line 92 of /etc/sudoers)

[root@server1 k8s]# su - k8s   # switch to the regular user for the following operations
Last login: Fri May 31 09:06:50 CST 2019 on pts/0
[k8s@server1 ~]$ mkdir -p $HOME/.kube
[k8s@server1 ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[k8s@server1 ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

To fix kubectl tab completion not working:

[k8s@server1 ~]$ echo "source <(kubectl completion bash)" >> ./.bashrc
[k8s@server1 ~]$ logout
[root@server1 k8s]# su - k8s 
Last login: Fri May 31 09:18:23 CST 2019 on pts/0
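
If completion still does not work after logging back in, the bash-completion package may also be required (kubectl's completion script depends on it):

yum install -y bash-completion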

8. On server1:

Copy the kube-flannel.yml file to /home/k8s: it was originally under /root/k8s, which the regular k8s user cannot access. Then continue with the steps suggested by the cluster initialization output.

[k8s@server1 ~]$ logout
[root@server1 k8s]# cp kube-flannel.yml /home/k8s/
[root@server1 k8s]# su - k8s 
Last login: Fri May 31 09:21:08 CST 2019 on pts/0
[k8s@server1 ~]$ kubectl apply -f kube-flannel.yml 
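
Before moving on, the flannel pod in kube-system can be checked, for example:

[k8s@server1 ~]$ kubectl get pods -n kube-system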

Check the running containers:

[k8s@server1 ~]$ sudo docker ps

9. Using the join command printed when the master node was initialized, add the server2 node to the cluster.

[root@server2 ~]# modprobe ip_vs_sh
[root@server2 ~]# modprobe ip_vs_wrr
[root@server2 ~]# kubeadm join 172.25.60.1:6443 --token wggnuo.9npa1y0i4ahe1p42 --discovery-token-ca-cert-hash sha256:fb05dc0956a3e1d897a3113742498255e8d05c1c8d20c2c5479d2d658a97b9a8
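
The modprobe commands above only load the ip_vs modules for the current boot; one way to have them loaded automatically after a reboot is a modules-load.d entry (the file name ipvs.conf is just an example):

cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs_sh
ip_vs_wrr
EOF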

Check the containers running on server2:

[root@server2 ~]# docker ps

10. On server1, check whether the node status is Ready:

[k8s@server1 ~]$ kubectl get nodes

View the pods in all namespaces:

[k8s@server1 ~]$ kubectl get pod --all-namespaces 
