Part 1: Deploy Kubernetes with kubeadm; the control-plane and node components run as pods.
Prepare two hosts:

k8s-master    10.5.100.102
k8s-node1     10.5.100.208

Edit /etc/hosts on both hosts so that they can resolve each other by name.

[root@k8s-master ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.5.100.208 k8s-node1
10.5.100.102 k8s-master
[root@k8s-master ~]# scp /etc/hosts 10.5.100.208:/etc/hosts
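A quick optional check that name resolution now works in both directions (not part of the original session):

ping -c 1 k8s-node1     # run on the master
ping -c 1 k8s-master    # run on the node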

Part 2: The following steps must be performed on both the master and the node; only the master session is recorded here.
(1) Configure the required yum repositories.

[root@k8s-master ~]# cd /etc/yum.repos.d/
[root@k8s-master yum.repos.d]# ls
CentOS-Base.repo  docker-ce.repo  epel.repo  kubernetes.repo  zabbix.repo
[root@k8s-master yum.repos.d]# vim kubernetes.repo    # Google's repo is unreachable from China, so use the Aliyun mirror
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[root@k8s-master yum.repos.d]# cat docker-ce.repo 
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/source/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-edge]
name=Docker CE Edge - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-edge-debuginfo]
name=Docker CE Edge - Debuginfo $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/debug-$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-edge-source]
name=Docker CE Edge - Sources
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/source/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/debug-$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/source/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-nightly]
name=Docker CE Nightly - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/debug-$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg
 
[root@k8s-master yum.repos.d]# scp /etc/yum.repos.d/*.repo 10.5.100.208:/etc/yum.repos.d/
[root@k8s-master yum.repos.d]# yum repolist
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
repo id                                                   repo name                                                                        status
base/7/x86_64                                             CentOS-7 - Base - mirrors.aliyun.com                                             10,070
docker-ce-stable/x86_64                                   Docker CE Stable - x86_64                                                            77
epel/x86_64                                               Extra Packages for Enterprise Linux 7 - x86_64                                   13,327
extras/x86_64                                             CentOS-7 - Extras - mirrors.aliyun.com                                              397
kubernetes                                                Kubernetes                                                                          505
updates/7/x86_64                                          CentOS-7 - Updates - mirrors.aliyun.com                                             759
zabbix/x86_64                                             Zabbix Official Repository - x86_64                                                 132
zabbix-non-supported/x86_64                               Zabbix Official Repository non-supported - x86_64                                     4
repolist: 25,271
[root@k8s-master yum.repos.d]# 

(2) Install ntpdate and synchronize the system clocks.

[root@k8s-master ~]# yum install ntpdate -y
[root@k8s-master ~]# ntpdate -u ntp.api.bz    # -u queries from an unprivileged port so it can get through firewalls; ntp.api.bz is a public NTP pool in China
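A one-shot ntpdate drifts again over time. One way to keep the clocks aligned, sketched here as a cron entry on both machines (assuming ntpdate remains installed):

echo '*/30 * * * * root /usr/sbin/ntpdate -u ntp.api.bz >/dev/null 2>&1' >> /etc/crontab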

(3) Disable swap, and turn off the firewall and SELinux.

[root@k8s-master ~]# systemctl stop firewalld
[root@k8s-master ~]# setenforce 0
[root@k8s-master ~]# vim /etc/selinux/config    # set SELINUX=disabled so it persists across reboots
[root@k8s-master ~]# swapoff -a
[root@k8s-master ~]# vi /etc/fstab    # comment out the swap entry so it stays off after reboot
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
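The manual edits above can also be scripted. A minimal non-interactive sketch with the same effect (the sed patterns are illustrative; check the files afterwards):

systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    # persist across reboots
swapoff -a
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /etc/fstab     # comment out every swap entry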

(4) Install the required packages. Because we deploy with kubeadm, every Kubernetes component will run as a pod.

[root@k8s-master ~]# yum install docker-ce -y
[root@k8s-master ~]# docker --version
Docker version 19.03.11, build 42e35e61f3
[root@k8s-master ~]# 
[root@k8s-master ~]# systemctl enable docker && systemctl start docker
[root@k8s-master ~]# yum install kubelet kubeadm kubectl -y
[root@k8s-master ~]# systemctl enable kubelet
[root@k8s-master ~]# kubelet --version
Kubernetes v1.18.3
[root@k8s-master ~]# vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"     # let the kubelet start even when swap is detected
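The kubeadm preflight in Part 3 will warn that Docker uses the cgroupfs cgroup driver while systemd is recommended. The warning is harmless here, but if you want to switch the driver, a minimal /etc/docker/daemon.json (not applied in this walkthrough) looks like this:

cat > /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker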

Part 3: Configure the master node.
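The control-plane images can optionally be pre-pulled so the init step itself goes faster; the preflight output below mentions the same option. A sketch using the same version and mirror as the init command:

kubeadm config images pull --kubernetes-version=v1.18.3 --image-repository=registry.aliyuncs.com/google_containers

In the init command that follows, --pod-network-cidr=10.244.0.0/16 matches flannel's default network (flannel is deployed later), and --ignore-preflight-errors=swap pairs with the --fail-swap-on=false kubelet flag set in Part 2.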

[root@k8s-master ~]# kubeadm init --kubernetes-version=v1.18.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --apiserver-advertise-address=0.0.0.0 --image-repository=registry.aliyuncs.com/google_containers --ignore-preflight-errors=swap
W0618 15:52:29.701599   13676 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.5.100.102]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [10.5.100.102 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [10.5.100.102 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0618 15:57:21.179990   13676 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0618 15:57:21.180950   13676 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.502288 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: kazol9.6m7thla2dw3rq0af
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.5.100.102:6443 --token kazol9.6m7thla2dw3rq0af \
    --discovery-token-ca-cert-hash sha256:fda76801417b77891e846c9edf3d1614aa2aa671ffb3a3aafba910de244d23bb  
The token and the --discovery-token-ca-cert-hash above are exactly what a node needs to join the cluster; be sure to save them.

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
The output above says to run these as a regular user, but running them as root, as I did here, works too.
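When working as root, a common alternative to copying the file is to point kubectl at the admin kubeconfig directly (effective for the current shell only):

export KUBECONFIG=/etc/kubernetes/admin.conf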
[root@k8s-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"} 

[root@k8s-master kubernetes]# kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   3h21m   v1.18.3    # NotReady: the pod network is not up yet
k8s-node1    NotReady   <none>   137m    v1.18.3

Deploy the flannel network plugin with kubectl; the --pod-network-cidr we passed at init time matches flannel's default network.
[root@k8s-master ~]# vi /etc/hosts    # pin raw.githubusercontent.com, which often fails to resolve from China
151.101.76.133 raw.githubusercontent.com
[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel configured
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg configured
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

[root@k8s-master ~]# kubectl get pods -n kube-system    # list the pods running in the kube-system namespace
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-4bcgq             1/1     Running   8          4d2h
coredns-7ff77c879f-mrpbj             1/1     Running   0          4d2h
etcd-k8s-master                      1/1     Running   8          4d2h
kube-apiserver-k8s-master            1/1     Running   34         4d2h
kube-controller-manager-k8s-master   1/1     Running   300        4d2h
kube-flannel-ds-amd64-bfv6w          1/1     Running   0          3d23h
kube-flannel-ds-amd64-fbsjt          1/1     Running   24         3d23h
kube-proxy-k599w                     1/1     Running   3          4d2h
kube-proxy-vjjm4                     1/1     Running   0          4d1h
kube-scheduler-k8s-master            1/1     Running   180        4d2h


[root@k8s-master ~]# kubectl get nodes    # both nodes now report Ready
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   4h13m   v1.18.3
k8s-node1    Ready    <none>   3h9m    v1.18.3

Joining node1 to the cluster:

[root@k8s-node1 ~]# kubeadm join 10.5.100.102:6443 --token 3lqdd8.sykm00d9g4cgq1cv \
>  --discovery-token-ca-cert-hash sha256:204de6be65c9b9e763e7d0d6befa4da2d8802f66cff7da34760e0695119f037d
W0624 20:56:31.201251   47997 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-node1 ~]# docker image ls -a    # the images node1 pulled when it joined the cluster
REPOSITORY                                           TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-proxy   v1.18.3             3439b7546f29        5 weeks ago         117MB
quay.io/coreos/flannel                               v0.12.0-amd64       4e9f801d2217        3 months ago        52.8MB
registry.aliyuncs.com/google_containers/pause        3.2                 80d28bedfe5d        4 months ago        683kB
registry.aliyuncs.com/google_containers/coredns      1.6.7               67da37a9a360        4 months ago        43.8MB
[root@k8s-node1 ~]# 
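With both nodes Ready, a quick smoke test confirms the cluster actually schedules workloads. This nginx deployment is hypothetical, not part of the original session:

kubectl create deployment nginx --image=nginx
kubectl get pods -o wide             # the pod should land on k8s-node1 and reach Running
kubectl delete deployment nginx      # clean up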

Problems encountered while joining the node to the cluster:
(1) The token expired.

Fix:
kubeadm token create    # prints a fresh token, here 3lqdd8.sykm00d9g4cgq1cv
Recompute the discovery-token-ca-cert-hash from the CA certificate:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
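A shortcut worth knowing: kubeadm can mint a fresh token and print the complete, ready-to-paste join command in one step:

kubeadm token create --print-join-command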

Errors encountered during the deployment:

[root@k8s-master ~]# kubectl get pods
Unable to connect to the server: net/http: TLS handshake timeout    # this means the machine is short on memory; on a VM, raising it to 3 GB fixed it here

kubectl get cs reports components as Unhealthy:

[root@k8s-master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused   
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
etcd-0               Healthy     {"health":"true"}    

Fix: in the static pod manifest of each unhealthy component, comment out the --port=0 line; the kubelet picks up the change and restarts the static pod.
[root@k8s-master manifests]# pwd
/etc/kubernetes/manifests
[root@k8s-master manifests]# vim kube-scheduler.yaml 
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    #- --port=0                 # commenting out this line is the whole fix
 
[root@k8s-master manifests]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused   
scheduler            Healthy     ok                                                                                            
etcd-0               Healthy     {"health":"true"}                                                                                                                  
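The remaining Unhealthy entry clears once the same edit is applied to the controller manager; a one-liner sketch (editing kube-controller-manager.yaml in vim works just as well):

sed -i 's/- --port=0/#- --port=0/' /etc/kubernetes/manifests/kube-controller-manager.yaml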