Install NFS

yum -y install  nfs-utils rpcbind
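
nfs-utils is installed here, presumably for NFS-backed storage later on. If you want to verify it works right away, a minimal, purely illustrative export could be set up as follows (the path /data/nfs and the 192.168.8.0/24 subnet are example values, not part of the original setup):

## start the NFS services and export an example directory
[root@k8s01 ~]# systemctl enable rpcbind nfs-server
[root@k8s01 ~]# systemctl start rpcbind nfs-server
[root@k8s01 ~]# mkdir -p /data/nfs
[root@k8s01 ~]# echo "/data/nfs 192.168.8.0/24(rw,sync,no_root_squash)" >> /etc/exports
[root@k8s01 ~]# exportfs -arv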

Install kubectl, kubelet, and kubeadm


Add the Aliyun Kubernetes yum repository

[root@k8s01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install the packages

[root@k8s01 ~]# yum -y install kubectl kubelet kubeadm

Verify the installation succeeded

[root@k8s01 ~]# rpm -qa | grep kube
kubeadm-1.19.4-0.x86_64
kubelet-1.19.4-0.x86_64
kubectl-1.19.4-0.x86_64
kubernetes-cni-0.8.7-0.x86_64
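
It is worth enabling kubelet (and Docker) to start on boot right away; the Other Notes section at the end shows the error you hit if this step is skipped:

## make sure both services come up after a reboot
[root@k8s01 ~]# systemctl enable kubelet
[root@k8s01 ~]# systemctl enable docker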

Clone the virtual machine

I am using VMware virtual machines here. If you are not, repeat the environment preparation above on every machine that will join the cluster, change the master's hostname and IP address accordingly, and then jump straight to the master node installation.

Shut the VM down first

[root@k8s01 ~]# shutdown now

Use this VM as the base image; the worker nodes only need to be cloned from it.

Install the Kubernetes master node


Generate the installation YAML file

[root@k8s01 ~]# kubeadm config print init-defaults --kubeconfig ClusterConfiguration > kubeadm.yml

Edit the configuration file

[root@k8s01 ~]# vi kubeadm.yml

Change the values called out in the comments below

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # change to the master node's IP
  advertiseAddress: 192.168.8.81
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s01.ajake.com
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
# Google's registry is not reachable from mainland China, so use the Aliyun mirror
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  # pod subnet: a range that does not overlap with the VM network
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
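
A quick, optional way to confirm the edits took effect is to grep the fields we changed:

## sanity-check the edited fields
[root@k8s01 ~]# grep -E 'advertiseAddress|imageRepository|kubernetesVersion|podSubnet' kubeadm.yml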

You can list the required images:

[root@k8s01 ~]# kubeadm config images list --config kubeadm.yml

The output is as follows:

registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.18.0
registry.aliyuncs.com/google_containers/pause:3.2
registry.aliyuncs.com/google_containers/etcd:3.4.3-0
registry.aliyuncs.com/google_containers/coredns:1.6.7

Pull the images. Expect roughly ten minutes, depending on network speed:

[root@k8s01 ~]# kubeadm config images pull --config kubeadm.yml
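
Once the pull finishes, you can confirm the images are present locally (assuming Docker is the runtime, as the criSocket in kubeadm.yml suggests):

## list the images pulled from the Aliyun mirror
[root@k8s01 ~]# docker images | grep registry.aliyuncs.com/google_containers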

Initialize the master node

[root@k8s01 ~]# kubeadm init --config=kubeadm.yml --upload-certs | tee kubeadm-init.log

Notes:

  • init initializes the control plane
  • --upload-certs distributes the certificate files automatically when additional nodes join later
  • tee kubeadm-init.log saves the output to a log file

Note: if the installed Kubernetes version and the downloaded image versions are inconsistent, you will
hit a "timed out waiting for the condition" error. To change the configuration, run kubeadm reset to
reset it, then re-run the initialization.
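
If init does fail and you need to start over, a rough sketch of that reset-and-retry flow is (-f simply skips the confirmation prompt):

## wipe the half-initialized control plane and initialize again
[root@k8s01 ~]# kubeadm reset -f
[root@k8s01 ~]# kubeadm init --config=kubeadm.yml --upload-certs | tee kubeadm-init.log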

The installation succeeds.
Configure kubectl

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.8.81:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:28d563522e8383960747afecbbc89265c78219f71818586a032b61a185d2bc39
[root@k8s01 ~]# mkdir -p $HOME/.kube
[root@k8s01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
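
Since we are working as root anyway, pointing KUBECONFIG at admin.conf is an equivalent alternative to copying the file (it only lasts for the current shell):

## root-only alternative to the copy above
[root@k8s01 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf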

Verify it works:

[root@k8s01 ~]# kubectl get node

The output:

NAME    STATUS     ROLES    AGE    VERSION
k8s01   NotReady   master   112s   v1.19.4

Stop the firewall so that worker nodes can join; alternatively, open port 6443 (see the sketch after the commands below):

[root@k8s01 ~]# systemctl stop firewalld
[root@k8s01 ~]# systemctl disable firewalld
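
If you would rather keep firewalld running, opening port 6443 instead is enough for the join itself; a minimal sketch:

## open the API server port instead of disabling the firewall
[root@k8s01 ~]# firewall-cmd --permanent --add-port=6443/tcp
[root@k8s01 ~]# firewall-cmd --reload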

Install a worker node


Clone the base VM prepared earlier (do not clone the already-initialized master node).
Update the hostname mappings in the hosts file:

[root@k8s01 ~]#  vi /etc/hosts

Add the following entries:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.23.129 k8s01.ajake.com master-01
192.168.23.130 node01.ajake.com node-01
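
The hosts file only handles name resolution; the clone's own hostname should be changed to match as well. Assuming this clone is to become node01.ajake.com from the table above:

## rename the cloned worker to match its hosts entry
[root@k8s01 ~]# hostnamectl set-hostname node01.ajake.com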

Set a static IP on the network interface:

[root@k8s01 ~]#  vi /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=192.168.8.82
NETMASK=255.255.255.0
GATEWAY=192.168.8.1
DNS1=192.168.8.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=7f8adf87-e66a-49d7-8fde-44ceb342fc7a
DEVICE=ens33
ONBOOT=yes
[root@k8s01 ~]#  reboot -f
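
On CentOS 7 a full reboot is not strictly required; restarting the legacy network service also re-reads the ifcfg file (just an alternative to the reboot above):

## apply the new static IP without rebooting
[root@k8s01 ~]# systemctl restart network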

Join the cluster

The master's init log shows the exact command for adding a node to the cluster.
Port 6443 needs to be reachable from this side as well; I simply stop the firewall:

[root@k8s02 ~]# systemctl stop firewalld

In my case it is enough to run the join command directly on the worker node:

[root@node02 ~]# kubeadm join 192.168.8.81:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:28d563522e8383960747afecbbc89265c78219f71818586a032b61a185d2bc39
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
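
If you run the join later and the token from the init log has expired (the ttl in kubeadm.yml is 24h), a fresh join command can be printed on the master:

## run on the master to get a new token and the matching join command
[root@k8s01 ~]# kubeadm token create --print-join-command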

Verify the node
Go back to the master node and check:

[root@k8s01 ~]# kubectl get nodes

The output:

NAME    STATUS     ROLES    AGE     VERSION
k8s01   NotReady   master   7m26s   v1.19.4
k8s02   NotReady   <none>   39s     v1.19.4

The STATUS here is NotReady because no network plugin is installed yet, so CoreDNS cannot start.
Watch the Pod status on the master node:

[root@k8s01 ~]# watch kubectl get pods -n kube-system -o wide

The output:

NAME                            READY   STATUS              RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
coredns-6d56c8448f-hrnkp        0/1     Pending             0          8m6s    <none>         <none>   <none>           <none>
coredns-6d56c8448f-nwbhx        0/1     Pending             0          8m6s    <none>         <none>   <none>           <none>
etcd-k8s01                      1/1     Running             0          8m16s   192.168.8.81   k8s01    <none>           <none>
kube-apiserver-k8s01            1/1     Running             0          8m16s   192.168.8.81   k8s01    <none>           <none>
kube-controller-manager-k8s01   1/1     Running             0          8m16s   192.168.8.81   k8s01    <none>           <none>
kube-proxy-fb4n2                1/1     Running             0          8m6s    192.168.8.81   k8s01    <none>           <none>
kube-proxy-fgrn8                0/1     ContainerCreating   0          98s     192.168.8.82   k8s02    <none>           <none>
kube-scheduler-k8s01            1/1     Running             0          8m16s   192.168.8.81   k8s01    <none>           <none>

Install a network plugin


Kubernetes itself only defines CNI (Container Network Interface), a generic standard interface for container networking. Network solutions such as Flannel, Calico, Canal and Weave implement this interface and can provide networking for any container platform that supports it.

Calico: https://docs.projectcalico.org/introduction/

Flannel: https://github.com/coreos/flannel/

Weave: https://www.weave.works/oss/net/

Canal: https://github.com/projectcalico/canal

I use Calico here because it supports network policies and integrates with the Istio service mesh.
Official installation docs: https://docs.projectcalico.org/getting-started/kubernetes/quickstart
Download the manifest:

[root@k8s01 ~]# wget https://docs.projectcalico.org/manifests/calico.yaml

Edit the manifest:

[root@k8s01 ~]# vim calico.yaml
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"

In vim, enter :set number to show line numbers, then search with /CALICO_IPV4POOL_CIDR and change the value to the pod subnet defined earlier (10.244.0.0/16).
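
A quick way to double-check the value after editing (note that in some versions of calico.yaml this variable ships commented out and has to be uncommented first):

## confirm the pod CIDR configured for Calico
[root@k8s01 ~]# grep -n -A1 CALICO_IPV4POOL_CIDR calico.yaml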

Apply calico.yaml:

[root@k8s01 ~]# kubectl apply -f calico.yaml

Verify the installation:

[root@k8s01 ~]# watch kubectl get pods --all-namespaces

The output:

Every 2.0s: kubectl get pods --all-namespaces                                                             Tue Nov 24 16:18:12 2020

NAMESPACE     NAME                                       READY   STATUS     RESTARTS   AGE
kube-system   calico-kube-controllers-5dc87d545c-v8j89   0/1     Pending    0          29s
kube-system   calico-node-jhhtq                          0/1     Init:1/3   0          29s
kube-system   calico-node-r7xht                          0/1     Init:0/3   0          29s
kube-system   coredns-6d56c8448f-hrnkp                   0/1     Pending    0          26m
kube-system   coredns-6d56c8448f-nwbhx                   0/1     Pending    0          26m
kube-system   etcd-k8s01                                 1/1     Running    0          26m
kube-system   kube-apiserver-k8s01                       1/1     Running    0          26m
kube-system   kube-controller-manager-k8s01              1/1     Running    0          26m
kube-system   kube-proxy-fb4n2                           1/1     Running    0          26m
kube-system   kube-proxy-fgrn8                           1/1     Running    0          20m
kube-system   kube-scheduler-k8s01                       1/1     Running    0          26m

Check the node status:

[root@k8s01 ~]# kubectl get nodes

A STATUS of Ready means the cluster network is up:

NAME    STATUS   ROLES    AGE   VERSION
k8s01   Ready    master   27m   v1.19.4
k8s02   Ready    <none>   20m   v1.19.4

Other notes

Issue 1: the following error after a reboot means the services were not enabled to start on boot.

[root@k8s01 ~]# kubectl get node
The connection to the server 192.168.23.129:6443 was refused - did you specify the right host or port?

Fix (run this on the worker nodes as well; after a reboot they will then rejoin the master automatically):

[root@k8s01 ~]# systemctl start kubelet
## enable start on boot
[root@k8s01 ~]# systemctl enable kubelet
[root@k8s01 ~]# systemctl enable docker

Issue 2: how does the master remove a worker node?

[root@k8s01 ~]# kubectl delete node <NAME>
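
Before deleting the node object it is usually drained first, and afterwards the node itself is wiped with kubeadm reset; a hedged sketch (flag names as of v1.19) looks like this:

## on the master: evict the workloads first (DaemonSet pods are ignored)
[root@k8s01 ~]# kubectl drain <NAME> --ignore-daemonsets --delete-local-data
## then delete the node object as shown above
## on the removed worker: wipe the kubeadm-generated state
[root@k8s02 ~]# kubeadm reset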

Issue 3: a node fails to join the master because its configuration is wrong.
Fix: on the node, run kubeadm reset to reset the configuration, then run kubeadm join again.
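
Concretely, reusing the join command from the init log above, the recovery on the worker might look like this (purely a sketch of the steps just described):

## clear the previous failed attempt (-f skips the confirmation prompt)
[root@k8s02 ~]# kubeadm reset -f
## join again with the command from the master's init log
[root@k8s02 ~]# kubeadm join 192.168.8.81:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:28d563522e8383960747afecbbc89265c78219f71818586a032b61a185d2bc39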
