0, Preface

I have been using k8s at work recently and read a bit about node scheduling, so I wanted to run a few small experiments of my own, which meant preparing a lab environment. I therefore set up a k8s cluster on my own laptop. The images the cluster needs were pulled over a router with unrestricted internet access, and to make things easier for everyone I have manually uploaded them to Docker Hub, so you can use them directly. The Kubernetes version used here is v1.18.8, which was the latest at the time (although v1.19.0 seems to be out already), and all of these images are prepared for v1.18.8. Pay close attention to the versions, otherwise you may run into problems partway through. If you follow this article step by step, you should be able to build a working cluster on CentOS 7.

1, Prepare the environment

I am using three CentOS 7 machines here. They need to be able to reach each other over the network and have normal internet access. If the CentOS 7 machines are VMs running under VMware Workstation on Windows 10, the default NAT network mode is fine.

1.1 Set the hostnames

# run the matching command on each of the three machines
hostnamectl --static set-hostname master.k8s
hostnamectl --static set-hostname node1.k8s
hostnamectl --static set-hostname node2.k8s

# every one of the three machines needs these name-resolution entries; use ifconfig to look up each machine's IP
vim /etc/hosts

192.168.30.xxx master.k8s
192.168.30.xxx node1.k8s
192.168.30.xxx node2.k8s
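
After editing /etc/hosts, it is worth checking that each machine can resolve and reach the other two by name. A quick check, assuming the hostnames above (run from any of the three machines):

ping -c 2 master.k8s
ping -c 2 node1.k8s
ping -c 2 node2.k8s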

1.2 Turn off some Linux features

1, Disable SELinux (open the file and edit it)

vim /etc/selinux/config

SELINUX=enforcing --> SELINUX=permissive
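
If you prefer not to open an editor, the same change can be made from the command line; this is just a sketch that makes the identical edit and also switches SELinux to permissive immediately, without a reboot:

# take effect right away for the current boot
setenforce 0
# make the change persistent across reboots
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# should now print Permissive
getenforce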

2, Stop the firewall

systemctl disable firewalld && systemctl stop firewalld

3, Disable swap (copy the command and run it)

swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab
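
You can confirm that swap is really off before moving on; `free -h` should show 0 for swap and `swapon -s` should print nothing:

free -h
swapon -s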

4, Add the Kubernetes yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
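
With the repo in place, you can check that the v1.18.8 packages are actually available from the mirror before installing; this is only a sanity check:

yum makecache fast
yum list kubeadm --showduplicates | grep 1.18.8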

1.3 Install docker, kubeadm, kubelet, kubectl and kubernetes-cni

Make sure you install v1.18.8 here, because the images above are all built for that version; with a mismatched version you will later run into errors such as node xxx not found.

yum install -y  docker  kubeadm-1.18.8 kubelet-1.18.8 kubectl-1.18.8 kubernetes-cni-0.8.6
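
After installation you can verify that the expected versions were installed:

kubeadm version -o short            # should print v1.18.8
kubelet --version                   # should print Kubernetes v1.18.8
kubectl version --client --short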

1.4 Start the services

systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet
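
Note that at this point kubelet will keep restarting and its status will show a failure loop; that is expected, because it has no cluster configuration yet. It settles down once kubeadm init (on the master) or kubeadm join (on the workers) has run.

systemctl status kubelet   # a restart loop here is normal before kubeadm init/join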

1.5 Prepare the images (copy the commands and run them)

docker pull naison/kube-proxy:v1.18.8
docker pull naison/kube-apiserver:v1.18.8
docker pull naison/kube-controller-manager:v1.18.8
docker pull naison/kube-scheduler:v1.18.8
docker pull naison/weave-npc:2.7.0
docker pull naison/weave-kube:2.7.0
docker pull naison/pause:3.2
docker pull naison/coredns:1.6.7
docker pull naison/etcd:3.4.3-0

docker tag  docker.io/naison/kube-proxy:v1.18.8                 k8s.gcr.io/kube-proxy:v1.18.8
docker tag  docker.io/naison/kube-apiserver:v1.18.8             k8s.gcr.io/kube-apiserver:v1.18.8
docker tag  docker.io/naison/kube-controller-manager:v1.18.8    k8s.gcr.io/kube-controller-manager:v1.18.8
docker tag  docker.io/naison/kube-scheduler:v1.18.8             k8s.gcr.io/kube-scheduler:v1.18.8
docker tag  docker.io/naison/weave-npc:2.7.0                    docker.io/weaveworks/weave-npc:2.7.0
docker tag  docker.io/naison/weave-kube:2.7.0                   docker.io/weaveworks/weave-kube:2.7.0
docker tag  docker.io/naison/pause:3.2                          k8s.gcr.io/pause:3.2
docker tag  docker.io/naison/coredns:1.6.7                      k8s.gcr.io/coredns:1.6.7
docker tag  docker.io/naison/etcd:3.4.3-0                       k8s.gcr.io/etcd:3.4.3-0

docker image rm  docker.io/naison/kube-proxy:v1.18.8
docker image rm  docker.io/naison/kube-apiserver:v1.18.8
docker image rm  docker.io/naison/kube-controller-manager:v1.18.8
docker image rm  docker.io/naison/kube-scheduler:v1.18.8
docker image rm  docker.io/naison/weave-npc:2.7.0
docker image rm  docker.io/naison/weave-kube:2.7.0
docker image rm  docker.io/naison/pause:3.2
docker image rm  docker.io/naison/coredns:1.6.7
docker image rm  docker.io/naison/etcd:3.4.3-0
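
Typing out each command works fine, but the same pull/tag/cleanup can be done with a short loop; this is only a convenience sketch over exactly the repositories and tags listed above:

for img in kube-proxy:v1.18.8 kube-apiserver:v1.18.8 kube-controller-manager:v1.18.8 \
           kube-scheduler:v1.18.8 pause:3.2 coredns:1.6.7 etcd:3.4.3-0; do
    docker pull naison/${img}
    docker tag  naison/${img} k8s.gcr.io/${img}
    docker image rm naison/${img}
done

# the two weave images live under the weaveworks/ prefix rather than k8s.gcr.io/
for img in weave-npc:2.7.0 weave-kube:2.7.0; do
    docker pull naison/${img}
    docker tag  naison/${img} weaveworks/${img}
    docker image rm naison/${img}
done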

Just copy the commands above onto CentOS 7 and run them; it is recommended to run them on every one of the three machines. Once the images are ready, you can verify them.

At this point, the output should look like the following. These images are the same ones that would normally be pulled from k8s.gcr.io, and their versions are compatible with kubernetes v1.18.8. You can run

kubeadm config images list

to check that the images it requires match the versions in the list below.

docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.18.8             0fb7201f92d0        12 days ago         117 MB
k8s.gcr.io/kube-apiserver            v1.18.8             92d040a0dca7        12 days ago         173 MB
k8s.gcr.io/kube-controller-manager   v1.18.8             6a979351fe5e        12 days ago         162 MB
k8s.gcr.io/kube-scheduler            v1.18.8             6f7135fb47e0        12 days ago         95.3 MB
docker.io/weaveworks/weave-npc       2.7.0               db66692318fc        3 weeks ago         41 MB
docker.io/weaveworks/weave-kube      2.7.0               a8ef3e215aac        3 weeks ago         113 MB
k8s.gcr.io/pause                     3.2                 80d28bedfe5d        6 months ago        683 kB
k8s.gcr.io/coredns                   1.6.7               67da37a9a360        7 months ago        43.8 MB
k8s.gcr.io/etcd                      3.4.3-0             303ce5db0e90        10 months ago       288 MB

2, Initialize the cluster on the master.k8s node

kubeadm init --kubernetes-version v1.18.8

3, If nothing goes wrong, the output should end with something like the following. This message is important:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.30.133:6443 --token 8u3dck.atl2en8jjfruhlch \
    --discovery-token-ca-cert-hash sha256:2f9e6392eed129fa99fb2198bffdd0562248e63fedc8ebe94832739e2f789cfa

The commands from that output now need to be run. The first three set up kubectl access and are run on the master (admin.conf only exists there); the kubeadm join command is run as root on node1.k8s and node2.k8s:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubeadm join 192.168.30.133:6443 --token 8u3dck.atl2en8jjfruhlch \
    --discovery-token-ca-cert-hash sha256:2f9e6392eed129fa99fb2198bffdd0562248e63fedc8ebe94832739e2f789cfa
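
The token and hash above are the values from this particular run; use the ones printed by your own kubeadm init. Tokens expire after 24 hours by default, so if you join a node later you can print a fresh join command on the master with:

kubeadm token create --print-join-command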

4, Once all three machines have finished the steps above, run

kubectl get nodes

The output should look like this:

NAME         STATUS     ROLES    AGE     VERSION
master.k8s   NotReady   master   8m42s   v1.18.8
node1.k8s    NotReady   <none>   6m59s   v1.18.8
node2.k8s    NotReady   <none>   4m55s   v1.18.8

5, There is one last step: deploying the pod network. The nodes show NotReady above precisely because no network (CNI) plugin has been installed yet. The weave images were already loaded locally in section 1.5, so this step should finish quickly.

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

At this point you can check the pods with

kubectl get pods -A
NAMESPACE     NAME                                 READY   STATUS             RESTARTS   AGE
kube-system   coredns-66bff467f8-29c4z             1/1     Running            0          32m
kube-system   coredns-66bff467f8-qb7wt             1/1     Running            0          32m
kube-system   etcd-master.k8s                      1/1     Running            0          32m
kube-system   kube-apiserver-master.k8s            1/1     Running            0          32m
kube-system   kube-controller-manager-master.k8s   0/1     CrashLoopBackOff   6          32m
kube-system   kube-proxy-2j7hz                     1/1     Running            0          29m
kube-system   kube-proxy-dnlwc                     1/1     Running            0          31m
kube-system   kube-proxy-lp5lq                     1/1     Running            0          32m
kube-system   kube-scheduler-master.k8s            0/1     CrashLoopBackOff   4          32m
kube-system   weave-net-b8fg4                      2/2     Running            0          17m
kube-system   weave-net-cdcpc                      2/2     Running            0          17m
kube-system   weave-net-vwchq                      1/2     ImagePullBackOff   0          17m

and check whether all of them are in the Running state. If some pods are not Running yet, wait a little while and check again:

NAMESPACE     NAME                                 READY   STATUS             RESTARTS   AGE
kube-system   coredns-66bff467f8-29c4z             1/1     Running            0          101m
kube-system   coredns-66bff467f8-qb7wt             1/1     Running            0          101m
kube-system   etcd-master.k8s                      1/1     Running            0          101m
kube-system   kube-apiserver-master.k8s            1/1     Running            0          101m
kube-system   kube-controller-manager-master.k8s   0/1     CrashLoopBackOff   17         101m
kube-system   kube-proxy-2j7hz                     1/1     Running            0          97m
kube-system   kube-proxy-dnlwc                     1/1     Running            0          99m
kube-system   kube-proxy-lp5lq                     1/1     Running            0          101m
kube-system   kube-scheduler-master.k8s            1/1     Running            16         101m
kube-system   weave-net-b8fg4                      2/2     Running            0          86m
kube-system   weave-net-cdcpc                      2/2     Running            0          86m
kube-system   weave-net-vwchq                      2/2     Running            0          86m
One pod (the kube-controller-manager here) never reported healthy and kept restarting during this run; I ignored it and carried on.
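
If a pod stays stuck in CrashLoopBackOff or ImagePullBackOff, the usual way to dig in is describe and logs; a quick sketch using one of the pod names from the listing above:

kubectl -n kube-system describe pod kube-controller-manager-master.k8s
kubectl -n kube-system logs kube-controller-manager-master.k8s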

6, Check the cluster status

kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
master.k8s   Ready    master   99m   v1.18.8
node1.k8s    Ready    <none>   97m   v1.18.8
node2.k8s    Ready    <none>   95m   v1.18.8
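
As an optional last check that scheduling actually works across the nodes, you can run a throwaway deployment; this is just a smoke-test sketch (the nginx image is pulled from Docker Hub, so the nodes need internet access):

kubectl create deployment nginx --image=nginx
kubectl scale deployment nginx --replicas=3
kubectl get pods -o wide         # the pods should land on node1.k8s and node2.k8s
kubectl delete deployment nginx  # clean up when done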

All done. Let's start exploring our kubernetes cluster!
