Table of Contents

  • Cleaning up the lab environment
  • Installing the k8s packages and deploying the cluster
  • Deploying the flannel network plugin
  • Cluster configuration and usage

 

Preface:

The previous two posts already covered how k8s works, how its architecture is put together, and the relationship between Docker and k8s, so that theory is not repeated here.

When creating the k8s cluster we will still use the private Docker image registry configured earlier.

Reference: the Kubernetes Chinese community: https://www.kubernetes.org.cn

 

Hosts used in this lab

Host             IP            Role
reg.westos.org   172.25.6.2    Docker (18.09.6) installed; private registry management node
server1          172.25.6.1    Docker (18.09.6) installed; k8s cluster management node; can access the private registry
server3          172.25.6.3    Docker (18.09.6) installed; k8s cluster worker node; can access the private registry
server4          172.25.6.4    Docker (18.09.6) installed; k8s cluster worker node; can access the private registry

 

Docker must already be installed on every node of the k8s cluster (server1, server3, server4).

 

 

I. Cleaning up and checking the environment

  • Dissolve the Docker Swarm cluster
  • Set the sysctl (bridge/netfilter) rules
  • Make sure the VMs can reach the internet
  • Disable the firewall and SELinux
  • Edit the registry-mirror settings in daemon.json
  • Increase the VMs' CPU and memory
  • Disable swap on every node
  • Configure the yum repository for the k8s packages on every node

Since these are the VMs used in earlier labs, the old environment has to be cleaned up first; this also serves as a review of the previous Docker experiments.

1. Dissolve the Docker Swarm cluster

A Docker Swarm cluster was built in a previous lab, so it has to be dissolved; otherwise it may cause port conflicts when setting up k8s.

 

1.1 On server3:

[root@server3 ~]# docker swarm leave    ## leave the swarm
Node left the swarm.
[root@server3 ~]#

 

 

1.2 On server4:
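The original output is missing here; presumably server4 ran the same command as server3. A minimal sketch of what that would look like (the output itself is not captured in the post):

[root@server4 ~]# docker swarm leave    ## leave the swarm, same as on server3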

 

1.3 Since server1 is the swarm manager, it leaves last

[root@server1 ~]#
[root@server1 ~]#
[root@server1 ~]# docker node ls                        ## list the swarm nodes
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
z3pmo8boiqlej99tf2o7s1n0x *   server1             Ready               Active              Leader              18.09.6
oqckndd4iy8v2jqg4q6l0x1me     server3             Down                Active                                  18.09.6
klzzlm44slqwjbbxelbfkhg3s     server4             Down                Active                                  18.09.6
[root@server1 ~]#

[root@server1 ~]# docker se
search   secret   service  
[root@server1 ~]# docker swarm leave --force            ## the manager must leave with --force
Node left the swarm.
[root@server1 ~]#
[root@server1 ~]#
[root@server1 ~]# docker node ls
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.
[root@server1 ~]#

 

 

Verify:

Listing the nodes again shows that the swarm has been dissolved.

 

2. Set the sysctl (bridge/netfilter) rules on all three nodes

[root@server1 ~]# vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

[root@server1 sysctl.d]# sysctl --system     ## load all sysctl settings

****


* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1       ## seeing these lines echoed back means the settings took effect
* Applying /etc/sysctl.conf ...

****

 

The same steps are performed on server3 and server4; a sketch follows.
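A minimal sketch of that repetition, assuming the file is copied over SSH rather than retyped; note that the bridge-nf-call-* keys only exist once the br_netfilter kernel module is loaded:

[root@server1 ~]# scp /etc/sysctl.d/k8s.conf server3:/etc/sysctl.d/   ## copy the same sysctl file (hypothetical shortcut)
[root@server3 ~]# modprobe br_netfilter                               ## load the module that provides the bridge-nf-call-* keys
[root@server3 ~]# sysctl --system                                     ## apply, then repeat on server4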

 

3. Check the registry-mirror settings in daemon.json and verify that the network is reachable

[root@server1 ~]# cd /etc/docker/             
[root@server1 docker]#
[root@server1 docker]# cat daemon.json               ## show the daemon.json settings
{
  "registry-mirrors":["https://reg.westos.org"]
}

[root@server1 ~]# ping baidu.com                      ## check internet access; deploying k8s requires pulling images from external registries through the mirror
ping: baidu.com: Name or service not known
[root@server1 ~]#
[root@server1 ~]#
[root@server1 ~]# ping baidu.com
PING baidu.com (220.181.38.148) 56(84) bytes of data.
64 bytes from 220.181.38.148 (220.181.38.148): icmp_seq=1 ttl=34 time=69.1 ms
64 bytes from 220.181.38.148 (220.181.38.148): icmp_seq=2 ttl=34 time=69.2 ms
64 bytes from 220.181.38.148 (220.181.38.148): icmp_seq=3 ttl=34 time=69.0 ms
64 bytes from 220.181.38.148 (220.181.38.148): icmp_seq=4 ttl=34 time=69.6 ms
64 bytes from 220.181.38.148 (220.181.38.148): icmp_seq=5 ttl=34 time=70.2 ms
64 bytes from 220.181.38.148 (220.181.38.148): icmp_seq=6 ttl=34 time=68.9 ms
^C
--- baidu.com ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5007ms
rtt min/avg/max/mdev = 68.977/69.390/70.290/0.466 ms
[root@server1 ~]#

 

 

3.1 On the physical host:
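The screenshot is missing here; presumably it showed that the physical host forwards traffic for the VMs. A hedged check, assuming foundation6 acts as the NAT gateway for the 172.25.6.0/24 network:

[root@foundation6 ~]# cat /proc/sys/net/ipv4/ip_forward    ## should print 1 if the host forwards the VMs' traffic
[root@foundation6 ~]# firewall-cmd --query-masquerade      ## "yes" means NAT/masquerading is enabled for the VM network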

 

 

4. Check that SELinux and the firewall are disabled

[root@server1 ~]# cat /etc/sysconfig/selinux     ## check the SELinux configuration

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three two values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted


[root@server1 ~]# systemctl stop iptables        ## stop the firewall (the iptables service is not present here)
Failed to stop iptables.service: Unit iptables.service not loaded.
[root@server1 ~]#
[root@server1 ~]#
[root@server1 ~]# systemctl stop firewalld
[root@server1 ~]#
[root@server1 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

 

 

4.1 Check the SELinux status
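The screenshot is missing; besides reading the config file, the running mode can be checked directly (a quick sketch):

[root@server1 ~]# getenforce        ## prints Disabled (or Permissive) when SELinux is not enforcing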

 

4.2 Disable the firewall

(Note: the firewall must be disabled!)

 

 

 

 

5. Edit the Docker daemon configuration (registry mirror)

5.1 On server1:

[root@server1 ~]# cd /etc/docker/
[root@server1 docker]# vim daemon.json                
 
[root@server1 docker]# cat daemon.json
{
  "registry-mirrors":["https://reg.westos.org"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },                                                   ## contents written into the configuration file
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
]
}
[root@server1 docker]# systemctl daemon-reload        ## reload the daemon configuration
[root@server1 docker]# systemctl restart docker       ## restart the docker service
[root@server1 docker]#

 

 

5.2 Copy the edited daemon.json to every cluster node:

[root@server1 docker]# scp daemon.json server3:/etc/docker/
root@server3's password:
daemon.json                                                                          100%  270   300.9KB/s   00:00    
[root@server1 docker]#
[root@server1 docker]# scp daemon.json server4:/etc/docker/
root@server4's password:
daemon.json                                                                          100%  270   322.5KB/s   00:00    
[root@server1 docker]#

 

 

On server3:

[root@server3 ~]# cd /etc/docker/
[root@server3 docker]# cat daemon.json           ## check daemon.json on server3
{
  "registry-mirrors":["https://reg.westos.org"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
]
}
[root@server3 docker]# systemctl daemon-reload         ## reload the configuration
[root@server3 docker]# systemctl restart docker        ## restarting docker fails here
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
[root@server3 docker]#


The earlier docker-machine lab left a TLS drop-in file, 10-machine.conf, under /etc/systemd/system/docker.service.d/; moving that file elsewhere fixes the restart.
[root@server3 system]# cd /etc/systemd/system/docker.service.d/
[root@server3 docker.service.d]#
[root@server3 docker.service.d]# ls
10-machine.conf
[root@server3 docker.service.d]#
[root@server3 docker.service.d]# mv 10-machine.conf /tmp/        ## move the docker-machine drop-in file out of the way
[root@server3 docker.service.d]#
[root@server3 docker.service.d]# cd
[root@server3 ~]# systemctl daemon-reload           ## reload systemd
[root@server3 ~]#
[root@server3 ~]# systemctl restart docker          ## restart docker (succeeds now)

 

The steps on server4 are identical to those on server3:

 

 

6. Increase the VMs' CPU and memory (otherwise the cluster will be sluggish)

[root@server1 docker]#
[root@server1 docker]# docker info
Containers: 21
 Running: 1
 Paused: 0
 Stopped: 20
Images: 25
Server Version: 18.09.6
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: systemd
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: bb71b10fd8f58240ca47fbb579b9d1028eea7c84
runc version: 2b18fe1d885ee5083ef9f0838fee39b62d653e30
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-862.el7.x86_64
Operating System: Red Hat Enterprise Linux Server 7.5 (Maipo)
OSType: linux
Architecture: x86_64
CPUs: 1                                 ## the official k8s requirement is 2 CPUs
Total Memory: 991.8MiB                  ## and 2 GB of memory
Name: server1
ID: LNDZ:VECZ:UWFP:7223:NGPG:YX4T:TVRM:2WLF:GKV3:ZSPH:PNTE:CTQD
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Registry Mirrors:
 https://reg.westos.org/
Live Restore Enabled: false
Product License: Community Engine

[root@server1 docker]#


 

 

On the physical host, run virt-manager to open the VM manager.
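If you prefer the command line over virt-manager, libvirt's virsh can make the same change; a sketch assuming the libvirt domain is simply named server1 (the real domain name may differ) and that the VM is shut down so the --config values take effect on the next boot:

[root@foundation6 ~]# virsh setvcpus server1 2 --config --maximum   ## raise the maximum vCPU count
[root@foundation6 ~]# virsh setvcpus server1 2 --config             ## set the vCPU count used at boot
[root@foundation6 ~]# virsh setmaxmem server1 2G --config           ## raise the memory ceiling
[root@foundation6 ~]# virsh setmem server1 2G --config              ## set the memory allocated at boot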

 

[root@foundation6 kiosk]# ssh 172.25.6.1
root@172.25.6.1's password:
Last login: Sun Feb 23 08:20:25 2020 from foundation6.ilt.example.com
[root@server1 ~]# docker info
Containers: 21
 Running: 1
 Paused: 0
 Stopped: 20
Images: 25
Server Version: 18.09.6
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: systemd
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: bb71b10fd8f58240ca47fbb579b9d1028eea7c84
runc version: 2b18fe1d885ee5083ef9f0838fee39b62d653e30
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-862.el7.x86_64
Operating System: Red Hat Enterprise Linux Server 7.5 (Maipo)
OSType: linux
Architecture: x86_64
CPUs: 2                                                            ## 2 CPUs after the change
Total Memory: 1.796GiB                                             ## memory roughly doubled
Name: server1
ID: LNDZ:VECZ:UWFP:7223:NGPG:YX4T:TVRM:2WLF:GKV3:ZSPH:PNTE:CTQD
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Registry Mirrors:
 https://reg.westos.org/
Live Restore Enabled: false
Product License: Community Engine

 

 

7. Disable swap on every node

On server1:

[root@server1 ~]# swapoff -a            ## turn off swap
[root@server1 ~]#
[root@server1 ~]# vim /etc/fstab        ## comment out the swap entry so swap stays off after a reboot
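The edit itself is not shown; on a default RHEL 7 install the swap entry usually looks like the line below, so commenting it out is enough (the exact device path is an assumption):

#/dev/mapper/rhel-swap   swap                    swap    defaults        0 0     ## commented out so swap stays off after reboot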

 

On server3:


[root@server3 ~]# swapoff -a
[root@server3 ~]#
[root@server3 ~]#
[root@server3 ~]# vim /etc/fstab

 

 

On server4:


[root@server4 ~]# swapoff -a
[root@server4 ~]# vim /etc/fstab

 

 

 

8. Configure the yum repository for the k8s packages on every node

[root@server1 ~]# cd /etc/yum.repos.d/
[root@server1 yum.repos.d]# ls
exam.repo  redhat.repo
[root@server1 yum.repos.d]# cp exam.repo k8s.repo
[root@server1 yum.repos.d]# vim k8s.repo

[root@server1 yum.repos.d]# cat k8s.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/   ## path of the Kubernetes mirror on Aliyun
gpgcheck=0

[root@server1 yum.repos.d]#
[root@server1 yum.repos.d]# scp k8s.repo server3:/etc/yum.repos.d/   ## copy the repo file to server3
root@server3's password:
k8s.repo                                                                             100%  118   120.8KB/s   00:00    
[root@server1 yum.repos.d]#
[root@server1 yum.repos.d]# scp k8s.repo server4:/etc/yum.repos.d/
root@server4's password:
k8s.repo                                                                             100%  118    76.9KB/s   00:00    
[root@server1 yum.repos.d]#


 

 

8.1 On server1:

 

Write the Aliyun mirror path into the yum repo file (the packages could also be downloaded first and installed locally, but that is more cumbersome).

 

 

8.2 Copy the yum repo file to each node:

 

 

 

 

 

 

II. Installing the k8s packages and deploying the cluster

  • Install the k8s packages
  • Pull the required images from Aliyun
  • Initialize the cluster
  • Join the nodes to the cluster
  • Create a regular user and grant it access
  • Enable kubectl command completion

1. Install the kubelet, kubeadm, and kubectl packages

Install the k8s packages on every node:


[root@server1 ~]# yum install -y kubelet kubeadm kubectl   ## install the kubelet, kubeadm, and kubectl packages
[root@server1 ~]# systemctl enable --now kubelet.service   ## enable kubelet to start on boot

 

1.1 Check the kubelet service status (it reports errors at this stage, but that does not affect the deployment; see the sketch below)
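The screenshot is missing; a hedged way to check it. Before kubeadm init has run, kubelet typically sits in an activating/auto-restart loop because its configuration does not exist yet, which is harmless at this point:

[root@server1 ~]# systemctl status kubelet     ## "activating (auto-restart)" before kubeadm init is expected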

 

1.2 On server3:

[root@server3 ~]# yum install -y kubelet kubeadm kubectl
[root@server3 ~]# systemctl enable --now kubelet.service

 

1.3 On server4:

[root@server4 ~]# yum install -y kubelet kubeadm kubectl
[root@server4 ~]# systemctl enable --now kubelet.service

 


1.4 List the images the cluster needs (they cannot be downloaded from the default location)

[root@server1 ~]# kubeadm config images list         ## list the images kubeadm needs
W0223 02:23:19.836032   13432 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
W0223 02:23:19.836150   13432 version.go:102] falling back to the local client version: v1.17.3
W0223 02:23:19.836381   13432 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0223 02:23:19.836399   13432 validation.go:28] Cannot validate kubelet config - no validator is available
k8s.gcr.io/kube-apiserver:v1.17.3                     ## default image registry; k8s.gcr.io is generally unreachable from here without a proxy
k8s.gcr.io/kube-controller-manager:v1.17.3
k8s.gcr.io/kube-scheduler:v1.17.3
k8s.gcr.io/kube-proxy:v1.17.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5
[root@server1 ~]#

 

 

2. Before initializing the cluster, pull the required images from Aliyun

[root@server1 ~]# kubeadm config images list --image-repository registry.aliyuncs.com/google_containers         ## list the image paths on the Aliyun registry

[root@server1 ~]# kubeadm config images  --help   ## help for the image subcommands

[root@server1 ~]# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers                 ## pull the images from the Aliyun registry
W0223 02:43:14.638878   16527 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0223 02:43:14.639012   16527 validation.go:28] Cannot validate kubelet config - no validator is available
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.17.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.17.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.17.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.17.3
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.3-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.6.5
[root@server1 ~]#
[root@server1 ~]#
[root@server1 ~]#
[root@server1 ~]# docker images               ## list the pulled images
REPOSITORY                                                        TAG                        IMAGE ID            CREATED             SIZE
portainer/portainer                                               latest                     10383f5b5720        5 days ago          78.6MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.17.3                    ae853e93800d        11 days ago         116MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.17.3                    90d27391b780        11 days ago         171MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.17.3                    b0f1517c1f4b        11 days ago         161MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.17.3                    d109c0821a2b        11 days ago         94.4MB
nginx                                                             latest                     2073e0bcb60e        3 weeks ago         127MB
registry.aliyuncs.com/google_containers/coredns                   1.6.5                      70f311871ae1        3 months ago        41.6MB
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0                    303ce5db0e90        4 months ago        288MB
portainer-agent                                                   latest                     

 

 

2.1 List where the required images are located

 

2.2 Pull the images from Aliyun



2.3 Verify that the images were pulled

 

 

 

3. Initialize the k8s cluster

[root@server1 ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers         ## initialize the cluster


>>>>>>>>>>

(output truncated: certificates, keys, and kubeconfig files are generated during this step)

>>>>>>>>>>>

[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!             ## this line means the cluster initialized successfully

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.25.6.1:6443 --token uiyar9.wbvzoannfdi5n9h9 \       ## port 6443 means nodes join over an encrypted, authenticated (TLS) connection
    --discovery-token-ca-cert-hash sha256:ee421c37195c50cb0ad9a7e278c6a9d6bc8db20c0927929af4178a0d65f8650b   ## the generated CA certificate hash used for verification

 

 

 

4. Join the nodes to the cluster

Join server3 to the cluster:

[root@server3 ~]# kubeadm join 172.25.6.1:6443 --token h9nrch.vl1egbpzs0cpxwft --discovery-token-ca-cert-hash sha256:bd6e9c2f05bbf3a8d1f9264e1f2fa23edf6c2fa8d78494c65a5b17866ce989d7        ## like joining a Docker Swarm, this requires the token and CA hash for authentication
W0223 05:41:11.564463    2453 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.   ## this message means the node joined the cluster successfully

[root@server3 ~]# docker images             ## list the images
REPOSITORY                                           TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-proxy   v1.17.3             ae853e93800d        11 days ago         116MB        ## these two images are pulled automatically on the worker after it joins
registry.aliyuncs.com/google_containers/pause        3.1                 da86e6ba6ca1        2 years ago         742kB
[root@server3 ~]#

 

4.2 server3 automatically ends up with these two images after joining

 

4.3 On server4:

(the same steps as on server3)

[root@server4 docker]# kubeadm join 172.25.6.1:6443 --token h9nrch.vl1egbpzs0cpxwft \
>     --discovery-token-ca-cert-hash sha256:bd6e9c2f05bbf3a8d1f9264e1f2fa23edf6c2fa8d78494c65a5b17866ce989d7
W0223 05:29:39.091363    8048 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.



[root@server4 ~]# docker images
REPOSITORY                                           TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-proxy   v1.17.3             ae853e93800d        11 days ago         116MB
registry.aliyuncs.com/google_containers/pause        3.1                 da86e6ba6ca1        2 years ago         742kB
[root@server4 ~]#

 


5. Create a regular user and grant it access


[root@server1 ~]# useradd kubeadm       ## create a regular user for administering k8s
[root@server1 ~]# visudo
[root@server1 ~]# su - kubeadm
[kubeadm@server1 ~]$
[kubeadm@server1 ~]$  mkdir -p $HOME/.kube         ## create the kubeconfig directory
[kubeadm@server1 ~]$   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[kubeadm@server1 ~]$   sudo chown $(id -u):$(id -g) $HOME/.kube/config
[kubeadm@server1 ~]$
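The visudo edit is not shown in the post. A hedged example of a sudoers rule that would let the kubeadm user run the sudo cp/chown commands above (the exact rule the author used is an assumption; this one is very broad and only suitable for a lab):

kubeadm ALL=(ALL)       NOPASSWD: ALL     ## added via visudo; lets the kubeadm user run any command with sudo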

 

 

6. kubectl command completion

6.1 Run the following commands

[kubeadm@server1 ~]$ echo "source <(kubectl completion bash)" >> ~/.bashrc    ## add kubectl completion to the shell profile
[kubeadm@server1 ~]$ logout         ## log out and back in so the new profile is loaded
[root@server1 ~]#
[root@server1 ~]# su - kubeadm
Last login: dom feb 23 05:58:08 EST 2020 on pts/0
[kubeadm@server1 ~]$
[kubeadm@server1 ~]$ kubectl          ## press Tab to verify; the subcommand list below means completion works
annotate       autoscale      cordon         drain          kustomize      port-forward   set
api-resources  certificate    cp             edit           label          proxy          taint
api-versions   cluster-info   create         exec           logs           replace        top
apply          completion     delete         explain        options        rollout        uncordon
attach         config         describe       expose         patch          run            version
auth           convert        diff           get            plugin         scale          wait
[kubeadm@server1 ~]$

 

6.2 Test:

 

 

6.3 Check the componentstatuses (cs) and namespaces (ns)


[kubeadm@server1 ~]$ kubectl  get cs             ## component statuses
NAME                 STATUS    MESSAGE             ERROR
etcd-0               Healthy   {"health":"true"}   
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
[kubeadm@server1 ~]$
[kubeadm@server1 ~]$ kubectl get ns               ## namespaces
NAME              STATUS   AGE
default           Active   45m
kube-node-lease   Active   45m
kube-public       Active   45m
kube-system       Active   45m
[kubeadm@server1 ~]$
[kubeadm@server1 ~]$

 

 

 

 

III. Installing the flannel network plugin

  • Look up the flannel manifest on GitHub
  • Create the kube-flannel.yml file
  • Change the image repository in kube-flannel.yml
  • Verify that the cluster is up

 

1. Check the cluster node status:

kubectl  get node   

(Note: the cluster only works properly once a network plugin is installed.)

 

Look up the flannel yml manifest on GitHub.

Link to the flannel network plugin manifest: https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml
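Instead of copy-pasting the manifest from the browser, it can be fetched directly; a sketch assuming the raw.githubusercontent.com URL that corresponds to the blob link above (the file moves between flannel releases, so the path may differ):

[kubeadm@server1 ~]$ wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml   ## download the manifest directly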
 

 

 

 

2. Create the kube-flannel.yml file

[root@server1 ~]# su - kubeadm
Last login: dom feb 23 06:04:04 EST 2020 on pts/0
[kubeadm@server1 ~]$ \vi kube-flannel.yml             ## paste the manifest from GitHub into this file; using \vi (the unaliased vi) keeps the pasted formatting from being mangled

[kubeadm@server1 ~]$ kubectl apply -f kube-flannel.yml   ## apply the manifest
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
[kubeadm@server1 ~]$
[kubeadm@server1 ~]$

 

(the file is 602 lines in total)

 

 

2.1 Apply the yml file

 

2.3 Check whether the flannel pods are running:

(the flannel pods fail to start)
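The screenshot is missing; presumably the check was the same command used in 3.4 below, with the flannel pods stuck in an image-pull error (assumed, not captured in the post):

[kubeadm@server1 ~]$ kubectl get pod -n kube-system     ## the kube-flannel-ds-amd64-* pods would show ImagePullBackOff here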

 

 

3. Open kube-flannel.yml and check where the images are pulled from

The image lines point at the default upstream registry, which cannot be reached from this environment.

 

 

3.2 Switch to a reachable mirror for the flannel image
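The edit itself is not shown. A hedged sketch of how the image lines could be found and rewritten; quay-mirror.qiniu.com matches the image that later appears on the workers, but the exact mirror and edit the author used are assumptions:

[kubeadm@server1 ~]$ grep 'image:' kube-flannel.yml        ## the default entries point at quay.io/coreos/flannel:v0.11.0-*
[kubeadm@server1 ~]$ sed -i 's@quay.io/coreos/flannel@quay-mirror.qiniu.com/coreos/flannel@g' kube-flannel.yml   ## switch every image line to the mirror
[kubeadm@server1 ~]$ kubectl apply -f kube-flannel.yml     ## re-apply so the DaemonSets pull from the new location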

 


 

 


3.4 The deployment is complete only when every kube-system pod is Running

[kubeadm@server1 ~]$ kubectl get pod -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-9d85f5447-6nng2           1/1     Running   0          5h20m
coredns-9d85f5447-wl297           1/1     Running   0          5h20m
etcd-server1                      1/1     Running   0          5h20m
kube-apiserver-server1            1/1     Running   1          5h20m
kube-controller-manager-server1   1/1     Running   3          5h20m
kube-flannel-ds-amd64-h6mpc       1/1     Running   0          6m38s
kube-flannel-ds-amd64-h8k92       1/1     Running   0          6m44s
kube-flannel-ds-amd64-w4ws4       1/1     Running   0          6m43s
kube-proxy-8hc7t                  1/1     Running   0          5h20m
kube-proxy-ktxlp                  1/1     Running   0          5h5m
kube-proxy-w9jxm                  1/1     Running   0          4h58m
kube-scheduler-server1            1/1     Running   3          5h20m
[kubeadm@server1 ~]$

## the deployment is only considered successful once all of the pods above are Running

 

 

3.5 Check the node status again

[kubeadm@server1 ~]$
[kubeadm@server1 ~]$ kubectl  get node          ## list the nodes
NAME      STATUS   ROLES    AGE     VERSION
server1   Ready    master   5h36m   v1.17.3
server3   Ready    <none>   5h14m   v1.17.3
server4   Ready    <none>   5h21m   v1.17.3
[kubeadm@server1 ~]$

 

3.6 Check on each node that the flannel image was pulled successfully

On server1:

[kubeadm@server1 ~]$
[kubeadm@server1 ~]$ sudo docker images
REPOSITORY                                                        TAG                        IMAGE ID            CREATED             SIZE
portainer/portainer                                               latest                     10383f5b5720        5 days ago          78.6MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.17.3                    ae853e93800d        11 days ago         116MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.17.3                    b0f1517c1f4b        11 days ago         161MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.17.3                    90d27391b780        11 days ago         171MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.17.3                    d109c0821a2b        11 days ago         94.4MB
nginx                                                             latest                     2073e0bcb60e        3 weeks ago         127MB
registry.aliyuncs.com/google_containers/coredns                   1.6.5                      70f311871ae1        3 months ago        41.6MB
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0                    303ce5db0e90        4 months ago        288MB

 

 

On server3:

[root@server3 ~]# docker images
REPOSITORY                                           TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-proxy   v1.17.3             ae853e93800d        11 days ago         116MB
registry.aliyuncs.com/google_containers/coredns      1.6.5               70f311871ae1        3 months ago        41.6MB
quay-mirror.qiniu.com/coreos/flannel                 v0.11.0-amd64       ff281650a721        13 months ago       52.6MB        ## the flannel image pulled automatically on server3
registry.aliyuncs.com/google_containers/pause        3.1                 da86e6ba6ca1        2 years ago         742kB
[root@server3 ~]#

 

On server4:

[root@server4 ~]# docker images
REPOSITORY                                           TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-proxy   v1.17.3             ae853e93800d        11 days ago         116MB
registry.aliyuncs.com/google_containers/coredns      1.6.5               70f311871ae1        3 months ago        41.6MB
quay-mirror.qiniu.com/coreos/flannel                 v0.11.0-amd64       ff281650a721        13 months ago       52.6MB
registry.aliyuncs.com/google_containers/pause        3.1                 da86e6ba6ca1        2 years ago         742kB

 

 

 

IV. Cluster configuration and usage

Link to the official documentation: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

 

  • Let the control-plane (master) node also take part in scheduling
  • Dealing with expired join tokens/hashes when new nodes join the cluster
  • Using the kubeadm reset command

1. Let the management node server1 also take part in scheduling

[kubeadm@server1 ~]$ kubectl taint nodes --all node-role.kubernetes.io/master-    ## remove the master taint so the control-plane node can also run workloads
node/server1 untainted                             ## this output means server1 now accepts scheduled pods
taint "node-role.kubernetes.io/master" not found
taint "node-role.kubernetes.io/master" not found
[kubeadm@server1 ~]$
[kubeadm@server1 ~]$

 

 

2.1 View the CA certificate hash and the bootstrap token used for verification

[kubeadm@server1 ~]$
[kubeadm@server1 ~]$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
>    openssl dgst -sha256 -hex | sed 's/^.* //'
bd6e9c2f05bbf3a8d1f9264e1f2fa23edf6c2fa8d78494c65a5b17866ce989d7
[kubeadm@server1 ~]$
[kubeadm@server1 ~]$ kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
h9nrch.vl1egbpzs0cpxwft   17h         2020-02-24T05:19:35-05:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token          ## the bootstrap token used for authentication
[kubeadm@server1 ~]$

 

 

2.2 Where the keys and certificates used for verification are stored:

[kubeadm@server1 ~]$
[kubeadm@server1 ~]$ cd /etc/kubernetes/
[kubeadm@server1 kubernetes]$ ls
admin.conf  controller-manager.conf  kubelet.conf  manifests  pki  scheduler.conf
[kubeadm@server1 kubernetes]$
[kubeadm@server1 kubernetes]$
[kubeadm@server1 kubernetes]$ cd pki/
[kubeadm@server1 pki]$
[kubeadm@server1 pki]$
[kubeadm@server1 pki]$ ls
apiserver.crt              apiserver.key                 ca.crt  front-proxy-ca.crt      front-proxy-client.key
apiserver-etcd-client.crt  apiserver-kubelet-client.crt  ca.key  front-proxy-ca.key      sa.key
apiserver-etcd-client.key  apiserver-kubelet-client.key  etcd    front-proxy-client.crt  sa.pub
[kubeadm@server1 pki]$

 

 

 


2.3 The verification info expires automatically after 24 hours; a new token must be created before new nodes can join

[kubeadm@server1 ~]$ kubeadm token list      ## 24 hours have passed and the token has expired, so the list is empty
[kubeadm@server1 ~]$ 
[kubeadm@server1 ~]$ kubeadm token create    ## create a new token so that new nodes can join the cluster
W0224 06:14:27.228087   11796 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0224 06:14:27.228169   11796 validation.go:28] Cannot validate kubelet config - no validator is available
szv7ow.uohpbjz3imskogx9
[kubeadm@server1 ~]$
[kubeadm@server1 ~]$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
>    openssl dgst -sha256 -hex | sed 's/^.* //'
bd6e9c2f05bbf3a8d1f9264e1f2fa23edf6c2fa8d78494c65a5b17866ce989d7
[kubeadm@server1 ~]$
[kubeadm@server1 ~]$
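With the new token and the unchanged CA hash, a new node would join with a command like the sketch below; server5 is hypothetical, and kubeadm can also print a ready-made join command:

[root@server5 ~]# kubeadm join 172.25.6.1:6443 --token szv7ow.uohpbjz3imskogx9 \
    --discovery-token-ca-cert-hash sha256:bd6e9c2f05bbf3a8d1f9264e1f2fa23edf6c2fa8d78494c65a5b17866ce989d7   ## hypothetical new node joining with the fresh token
[kubeadm@server1 ~]$ sudo kubeadm token create --print-join-command    ## prints the full join command in one step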

 

 

 

3. Using the kubeadm reset command

If kubeadm init was run with the wrong options, kubeadm reset clears the previous state so the cluster can be initialized again.


[root@server1 ~]# kubeadm reset            ## wipe the previous cluster state after a mistaken kubeadm init

[root@server1 ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers



[root@server1 ~]# kubeadm token list       ## list the bootstrap tokens
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
h9nrch.vl1egbpzs0cpxwft   23h         2020-02-24T05:19:35-05:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
[root@server1 ~]#

 

 

 

 

 

 

 
