Pre-installation Preparation

Ubuntu Environment Setup

(1) Operating system version

ubuntu@node1:~$ cat /proc/version
Linux version 5.8.0-45-generic (buildd@lcy01-amd64-024) (gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0, GNU ld (GNU Binutils for Ubuntu) 2.34) #51~20.04.1-Ubuntu SMP Tue Feb 23 13:46:31 UTC 2021

(2) Kernel version

ubuntu@node1:~$ uname -a
Linux node1 5.8.0-45-generic #51~20.04.1-Ubuntu SMP Tue Feb 23 13:46:31 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

Node IP Configuration

The Ubuntu cluster consists of three nodes: node1 serves as the master node, while node2 and node3 serve as worker nodes. The IP addresses are planned as follows:

Role     Hostname   IP address
Master   node1      192.168.23.126
Node     node2      192.168.23.127
Node     node3      192.168.23.128

Hostname Configuration

(1) Edit /etc/hostname

ubuntu@node1:~$ sudo vim /etc/hostname
node1
ubuntu@node2:~$ sudo vim /etc/hostname
node2
ubuntu@node3:~$ sudo vim /etc/hostname
node3

(2) Edit /etc/hosts

ubuntu@node1:~$ sudo vim /etc/hosts
192.168.23.126 node1
192.168.23.127 node2
192.168.23.128 node3
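
To confirm that hostname resolution works, a quick connectivity check can be run from node1 (assuming the hostnames above):

ubuntu@node1:~$ ping -c 2 node2
ubuntu@node1:~$ ping -c 2 node3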

Configure Passwordless SSH Within the Cluster

(1) Install SSH and generate an RSA key pair (on every node)

$ sudo apt install ssh
$ ssh-keygen -t rsa

(2) Append the contents of id_rsa.pub to authorized_keys (on node1)

$ cd /home/ubuntu/.ssh/
$ cat id_rsa.pub >> authorized_keys

Also append the id_rsa.pub generated on node2 and node3 to node1's authorized_keys, for example as shown below.
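
A minimal sketch, assuming the default key paths and the ubuntu user: pull each public key over SSH and append it on node1 (you will be prompted for each node's password once, since passwordless login is not set up yet):

$ ssh ubuntu@node2 cat /home/ubuntu/.ssh/id_rsa.pub >> authorized_keys
$ ssh ubuntu@node3 cat /home/ubuntu/.ssh/id_rsa.pub >> authorized_keys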
(3) Copy authorized_keys to node2 and node3

$ scp authorized_keys ubuntu@node2:/home/ubuntu/.ssh/
$ scp authorized_keys ubuntu@node3:/home/ubuntu/.ssh/

Configure Cluster Time Synchronization

(1) Install chrony

$ sudo apt install chrony

(2) Configure the NTP time server (node1)

$ sudo systemctl status chronyd.service
$ sudo vim /etc/chrony/chrony.conf
#pool ntp.ubuntu.com        iburst maxsources 4
#pool 0.ubuntu.pool.ntp.org iburst maxsources 1
#pool 1.ubuntu.pool.ntp.org iburst maxsources 1
#pool 2.ubuntu.pool.ntp.org iburst maxsources 2
# Newly added servers
server 210.72.145.44 iburst #National Time Service Center (China)
server ntp.aliyun.com iburst #Aliyun NTP server
# Add (or uncomment) the following at the end if not already present
allow 192.168.23.0/24	#allow nodes in the 192.168.23.0/24 subnet to query this server
local stratum 10		#keep serving time at stratum 10 even when unsynchronized

$ sudo systemctl restart chronyd.service
$ sudo systemctl enable chronyd.service
$ sudo chronyc sourcestats -v
210 Number of sources = 2
                             .- Number of sample points in measurement set.
                            /    .- Number of residual runs with same sign.
                           |    /    .- Length of measurement set (time).
                           |   |    /      .- Est. clock freq error (ppm).
                           |   |   |      /           .- Est. error in freq.
                           |   |   |     |           /         .- Est. offset.
                           |   |   |     |          |          |   On the -.
                           |   |   |     |          |          |   samples. \
                           |   |   |     |          |          |             |
Name/IP Address            NP  NR  Span  Frequency  Freq Skew  Offset  Std Dev
==============================================================================
210.72.145.44               0   0     0     +0.000   2000.000     +0ns  4000ms
203.107.6.88                0   0     0     +0.000   2000.000     +0ns  4000ms

(3) Configure the clients (node2 and node3)

$ sudo vim /etc/chrony/chrony.conf
#pool ntp.ubuntu.com        iburst maxsources 4
#pool 0.ubuntu.pool.ntp.org iburst maxsources 1
#pool 1.ubuntu.pool.ntp.org iburst maxsources 1
#pool 2.ubuntu.pool.ntp.org iburst maxsources 2
server 192.168.23.126 iburst
$ sudo systemctl restart chronyd.service
$ sudo systemctl enable chronyd.service
$ sudo chronyc sourcestats -v
$ timedatectl
               Local time: 三 2021-03-03 22:52:35 CST
           Universal time: 三 2021-03-03 14:52:35 UTC
                 RTC time: 三 2021-03-03 14:52:36
                Time zone: Asia/Shanghai (CST, +0800)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no
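
On the clients, a quick check that time is actually being synchronized from node1 (the exact output will vary):

$ chronyc sources -v     # the 192.168.23.126 entry should be marked '^*' once selected
$ chronyc tracking       # Reference ID should point at the node1 server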

Disable the Firewall

$ sudo ufw disable
$ sudo ufw reload
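
A quick check that the firewall is now inactive:

$ sudo ufw status    # should report "Status: inactive"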

Disable SELinux

For reference, this is how SELinux is enabled in the first place:
(1) The first step is to install SELinux. Use the apt command to install the following packages:

$ sudo apt install policycoreutils selinux-utils selinux-basics

(2) Activate SELinux:

$ sudo selinux-activate

(3) Next, set SELinux to enforcing mode:

$ sudo selinux-config-enforcing

(4) Reboot your system. The relabelling will be triggered after the reboot; when it finishes, the system will automatically reboot one more time.
(5) Check SELinux status:

$ sestatus
SELinux status:                 disabled

To disable SELinux:

(1) To disable SELinux, open the /etc/selinux/config configuration file and change the following line:

$ sudo vim /etc/selinux/config
SELINUX=disabled

(2) Reboot your system.

Switch to the Aliyun APT Mirror

(1) Back up the original sources file

$ sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak

(2) Replace the sources file
Open the sources file for editing:

$ sudo vim /etc/apt/sources.list

Replace the entire contents of the file with the following.
Note: these are the Aliyun mirror entries for Ubuntu 20.04 (focal).

deb http://mirrors.aliyun.com/ubuntu/ focal main restricted
deb http://mirrors.aliyun.com/ubuntu/ focal-updates main restricted
deb http://mirrors.aliyun.com/ubuntu/ focal universe
deb http://mirrors.aliyun.com/ubuntu/ focal-updates universe
deb http://mirrors.aliyun.com/ubuntu/ focal multiverse
deb http://mirrors.aliyun.com/ubuntu/ focal-updates multiverse
deb http://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ focal-security main restricted
deb http://mirrors.aliyun.com/ubuntu/ focal-security universe
deb http://mirrors.aliyun.com/ubuntu/ focal-security multiverse

(3) Apply the new sources

$ sudo apt update
$ sudo apt upgrade

Disable Swap

Running Kubernetes requires that you disable swap.
(1) Check whether swap is enabled:

ubuntu@node1:~$ swapon --show
NAME      TYPE      SIZE USED PRIO
/dev/dm-1 partition 976M   0B   -2

If there is no output, then swap is not enabled. If it is enabled as shown in the output above, run the command below to disable it.
(2) Disable swap
Switch to the root user and run:

root@node1:~# swapoff -a && sysctl -w vm.swappiness=0
vm.swappiness = 0

To permanently disable swap, comment out or remove the swap line in the /etc/fstab file.

root@node1:~# sudo sed -i 's/.*swap.*/#&/' /etc/fstab
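
To confirm swap is now fully off and will stay off after a reboot, a quick check:

root@node1:~# swapon --show              # no output means swap is disabled
root@node1:~# grep swap /etc/fstab       # the swap entry should now be commented out
root@node1:~# free -h | grep -i swap     # total should read 0B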

Kernel Tuning

Append the following to /etc/sysctl.conf:

root@node1:~# cat >> /etc/sysctl.conf <<EOF
vm.max_map_count=262144
net.ipv4.ip_forward = 1
EOF

Apply the configuration:

root@node1:~# sysctl -p 

Edit the Linux resource limit configuration to raise the maximum number of open files (ulimit) and the limits applied to services managed by systemd:

root@node1:~# echo "* soft nofile 655360" >> /etc/security/limits.conf
root@node1:~# echo "* hard nofile 655360" >> /etc/security/limits.conf
root@node1:~# echo "* soft nproc 655360"  >> /etc/security/limits.conf
root@node1:~# echo "* hard nproc 655360"  >> /etc/security/limits.conf
root@node1:~# echo "* soft  memlock  unlimited"  >> /etc/security/limits.conf
root@node1:~# echo "* hard memlock  unlimited"  >> /etc/security/limits.conf
root@node1:~# echo "DefaultLimitNOFILE=1024000"  >> /etc/systemd/system.conf
root@node1:~# echo "DefaultLimitNPROC=1024000"  >> /etc/systemd/system.conf

Install Docker

All of the nodes need Docker installed, as Kubernetes relies on it. Open a terminal and run the following commands on the master and on every worker node to install Docker:

$ sudo apt update
$ sudo apt install docker.io

Once Docker has finished installing, use the following commands to start the service and make sure it starts automatically after each reboot:

$ sudo systemctl start docker
$ sudo systemctl enable docker
$ sudo systemctl status docker
root@node1:~# docker --version
Docker version 19.03.8, build afacb8b7f0

Install Other Common Packages

root@node1:~# apt install build-essential

Install Kubernetes

Configure Docker

Change the cgroup driver from cgroupfs to systemd (Docker defaults to cgroupfs; switching to systemd keeps it consistent with Kubernetes and avoids conflicts):

sudo vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
sudo systemctl restart docker.service

Configure a Docker registry mirror. For Docker 1.10 and later, it is recommended to configure this directly through the daemon config file /etc/docker/daemon.json (create the file if it does not exist):

{
    "registry-mirrors": ["<your accelerate address>"]
}   

To find your accelerator address, log in to the Aliyun Container Registry console, choose Image Tools > Image Accelerator (镜像工具 > 镜像加速器) in the left navigation bar, and copy the address shown in the instructions on that page.
[Figure: Aliyun registry mirror accelerator page]
The final configuration looks like this:

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://1ehd86nk.mirror.aliyuncs.com"]
}

Verify the cgroup driver:

root@node1:/etc/docker# docker info | grep Driver
 Storage Driver: overlay2
 Logging Driver: json-file
 Cgroup Driver: systemd
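
The registry mirror can be checked the same way; the address listed should match the one configured above:

root@node1:/etc/docker# docker info | grep -A 1 "Registry Mirrors"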

Add the Kubernetes APT Repository

The following steps replace the deprecated Google sources and repo with the Aliyun mirror, which is suitable for users in mainland China. Although the example prompts show node1, the repository must be added on every node, since kubeadm, kubelet, and kubectl are installed on all of them.
(1) Install prerequisite packages

$ sudo apt-get update && sudo apt-get install -y ca-certificates curl software-properties-common apt-transport-https

(2) Install the Kubernetes repository GPG signing key
Run the command below to install the Kubernetes repo GPG key:

root@node1:/etc/docker# curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -

(3) Add the Kubernetes repository on Ubuntu 20.04
Next, add the Kubernetes repository:

root@node1:~# sudo tee /etc/apt/sources.list.d/kubernetes.list <<EOF
> deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
> EOF

(4) Update the package index

root@node1:~# sudo apt update

Install the Kubernetes Tools

Install kubectl, kubelet, and kubeadm on all nodes, configure kubelet to start at boot, and start kubelet.

  • kubeadm: the command to bootstrap the cluster.
  • kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
  • kubectl: the command line util to talk to your cluster.

Run the following commands on all three nodes to install the kubectl, kubelet, and kubeadm utilities:

$ sudo apt install kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl

Check the Kubernetes version:

$ kubeadm version
$ kubectl version --client
$ kubelet --version

[Figure: Kubernetes version output]
The installed kubelet version here is 1.20.4.

Configure Bridge Traffic Forwarding

root@node1:~# sudo vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
root@node1:~# sysctl --system # apply the settings

Note that the following error may appear:

sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
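
If sysctl additionally reports that the net.bridge.* keys are unknown, the br_netfilter kernel module is probably not loaded; a sketch for loading it now and on every boot:

root@node1:~# modprobe br_netfilter
root@node1:~# echo "br_netfilter" > /etc/modules-load.d/k8s.conf
root@node1:~# sysctl --system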

Note: all of the steps above must be performed on all three nodes!

Deploy Kubernetes

Deploy the Master Node

Run the following on the master node:

root@node1:~# kubeadm init --kubernetes-version=1.20.4 --apiserver-advertise-address=192.168.23.126 --pod-network-cidr=172.16.0.0/16 --image-repository registry.aliyuncs.com/google_containers --service-cidr=172.168.0.0/16
[init] Using Kubernetes version: v1.20.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node1] and IPs [172.168.0.1 192.168.23.126]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node1] and IPs [192.168.23.126 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node1] and IPs [192.168.23.126 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 64.002793 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node1 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node node1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 2ohis8.70iyemgmuh39e7rk
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.23.126:6443 --token 2ohis8.70iyemgmuh39e7rk \
    --discovery-token-ca-cert-hash sha256:459bf2781254b705232fbc898c99712a89a10c76088d2142640ae409be1ff460

The kubeadm init parameters are documented in the official Kubernetes reference for kubeadm init.
In brief:

--apiserver-advertise-address=192.168.23.126          			#address the API server advertises; must be reachable from the other nodes
--image-repository registry.aliyuncs.com/google_containers    	#use the Aliyun image repository
--kubernetes-version 1.20.4   						#Kubernetes version
--service-cidr=172.168.0.0/16  						#network range for Services
--pod-network-cidr=172.16.0.0/16         			#network range for Pods

Note: plan the networks before deploying; an incorrect network layout will cause problems later (for example with kube-proxy).
Record the last part of the kubeadm init output, as it is the command the other nodes must run to join the Kubernetes cluster. Following the init instructions, create the kubectl configuration file:

root@node1:~# mkdir -p $HOME/.kube
root@node1:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@node1:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Deploy the Worker Nodes

Join the worker nodes to the cluster. Using the join command printed at the end of kubeadm init on the master, run the following on each worker node:

root@node2:~# kubeadm join 192.168.23.126:6443 --token 2ohis8.70iyemgmuh39e7rk \
>     --discovery-token-ca-cert-hash sha256:459bf2781254b705232fbc898c99712a89a10c76088d2142640ae409be1ff460
root@node3:~# kubeadm join 192.168.23.126:6443 --token 2ohis8.70iyemgmuh39e7rk \
>     --discovery-token-ca-cert-hash sha256:459bf2781254b705232fbc898c99712a89a10c76088d2142640ae409be1ff460

Verify Node Status

Run kubectl get nodes:

root@node1:~# kubectl get nodes
NAME    STATUS     ROLES                  AGE     VERSION
node1   NotReady   control-plane,master   16m     v1.20.4
node2   NotReady   <none>                 8m8s    v1.20.4
node3   NotReady   <none>                 8m43s   v1.20.4

As we can see, both the worker nodes and the master node have joined the cluster, but every node is in the NotReady state. To bring them to Ready, we must deploy a Container Network Interface (CNI) based pod network add-on such as Calico, kube-router, or Weave Net. As the name suggests, a pod network plugin allows pods to communicate with each other.

Deploy the Pod Network

Run the following command on the master node to install the Calico pod network add-on:

root@node1:~# kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml
configmap/calico-config created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

After waiting a while (about five minutes), check the node status again:

root@node1:~# kubectl get nodes
NAME    STATUS   ROLES                  AGE   VERSION
node1   Ready    control-plane,master   20m   v1.20.4
node2   Ready    <none>                 12m   v1.20.4
node3   Ready    <none>                 12m   v1.20.4

While waiting, you can watch the status of the pods:

root@node1:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS              RESTARTS   AGE
kube-system   calico-kube-controllers-6dfcd885bf-dxm9w   0/1     ContainerCreating   0          87s
kube-system   calico-node-8ddc6                          0/1     PodInitializing     0          87s
kube-system   calico-node-f4gcr                          0/1     PodInitializing     0          87s
kube-system   calico-node-vcxzc                          0/1     Init:0/3            0          87s
kube-system   coredns-7f89b7bc75-c8kdd                   0/1     ContainerCreating   0          19m
kube-system   coredns-7f89b7bc75-k7n8v                   0/1     ContainerCreating   0          19m
kube-system   etcd-node1                                 1/1     Running             0          19m
kube-system   kube-apiserver-node1                       1/1     Running             0          19m
kube-system   kube-controller-manager-node1              1/1     Running             0          19m
kube-system   kube-proxy-49dqt                           1/1     Running             0          12m
kube-system   kube-proxy-kj454                           1/1     Running             0          19m
kube-system   kube-proxy-x6vmx                           1/1     Running             0          12m
kube-system   kube-scheduler-node1                       1/1     Running             0          19m

If nothing goes wrong, all nodes will eventually report Ready; otherwise, troubleshoot based on the error messages.
Finally, run the following command to verify the status of pods in all namespaces:

root@node1:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-6dfcd885bf-dxm9w   1/1     Running   0          26m
kube-system   calico-node-8ddc6                          1/1     Running   0          26m
kube-system   calico-node-f4gcr                          1/1     Running   0          26m
kube-system   calico-node-vcxzc                          1/1     Running   0          26m
kube-system   coredns-7f89b7bc75-c8kdd                   1/1     Running   0          45m
kube-system   coredns-7f89b7bc75-k7n8v                   1/1     Running   0          45m
kube-system   etcd-node1                                 1/1     Running   0          45m
kube-system   kube-apiserver-node1                       1/1     Running   0          45m
kube-system   kube-controller-manager-node1              1/1     Running   0          45m
kube-system   kube-proxy-49dqt                           1/1     Running   0          37m
kube-system   kube-proxy-kj454                           1/1     Running   0          45m
kube-system   kube-proxy-x6vmx                           1/1     Running   0          38m
kube-system   kube-scheduler-node1                       1/1     Running   0          45m

At this point, the Kubernetes cluster has been installed successfully.

Check the Installation

(1) List the running Docker containers:

root@node1:~# sudo docker ps
CONTAINER ID        IMAGE                                               COMMAND                  CREATED             STATUS              PORTS               NAMES
e1f68a068785        calico/kube-controllers                             "/usr/bin/kube-contr…"   27 minutes ago      Up 27 minutes                           k8s_calico-kube-controllers_calico-kube-controllers-6dfcd885bf-dxm9w_kube-system_cb57666a-0b74-44a7-bbe4-25ff3f24ed25_0
bb80ee9e4a81        bfe3a36ebd25                                        "/coredns -conf /etc…"   27 minutes ago      Up 27 minutes                           k8s_coredns_coredns-7f89b7bc75-c8kdd_kube-system_fce8c4f8-f82e-4dea-9d90-ef00182729d8_0
3beb28c2ccf8        bfe3a36ebd25                                        "/coredns -conf /etc…"   27 minutes ago      Up 27 minutes                           k8s_coredns_coredns-7f89b7bc75-k7n8v_kube-system_0f35ef72-23b7-4159-8e48-4f168090e729_0
e4ca2731cd0a        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 27 minutes ago      Up 27 minutes                           k8s_POD_coredns-7f89b7bc75-c8kdd_kube-system_fce8c4f8-f82e-4dea-9d90-ef00182729d8_26
68cdea6ea896        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 27 minutes ago      Up 27 minutes                           k8s_POD_calico-kube-controllers-6dfcd885bf-dxm9w_kube-system_cb57666a-0b74-44a7-bbe4-25ff3f24ed25_30
2a13cbffce26        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 27 minutes ago      Up 27 minutes                           k8s_POD_coredns-7f89b7bc75-k7n8v_kube-system_0f35ef72-23b7-4159-8e48-4f168090e729_23
030826dfc78d        calico/node                                         "start_runit"            27 minutes ago      Up 27 minutes                           k8s_calico-node_calico-node-f4gcr_kube-system_f9a788ad-24b0-43ba-bcaa-b987f5bf5830_0
30a354681c8d        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 29 minutes ago      Up 29 minutes                           k8s_POD_calico-node-f4gcr_kube-system_f9a788ad-24b0-43ba-bcaa-b987f5bf5830_0
13002f3de0f6        c29e6c583067                                        "/usr/local/bin/kube…"   48 minutes ago      Up 48 minutes                           k8s_kube-proxy_kube-proxy-kj454_kube-system_87260453-f90c-4f97-8e97-aebad930638c_0
f978fa238fa7        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 48 minutes ago      Up 48 minutes                           k8s_POD_kube-proxy-kj454_kube-system_87260453-f90c-4f97-8e97-aebad930638c_0
a6c5b0ed56b4        0a41a1414c53                                        "kube-controller-man…"   48 minutes ago      Up 48 minutes                           k8s_kube-controller-manager_kube-controller-manager-node1_kube-system_df138eb45f41ad9c7f610ad32e36f93b_0
58439e8ac745        5f8cb769bd73                                        "kube-scheduler --au…"   48 minutes ago      Up 48 minutes                           k8s_kube-scheduler_kube-scheduler-node1_kube-system_508678702a50a123a7ea654f623a0cfe_0
b93802b95a42        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 48 minutes ago      Up 48 minutes                           k8s_POD_kube-controller-manager-node1_kube-system_df138eb45f41ad9c7f610ad32e36f93b_0
ae423c9a0fb4        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 48 minutes ago      Up 48 minutes                           k8s_POD_kube-scheduler-node1_kube-system_508678702a50a123a7ea654f623a0cfe_0
ed2623c3ce72        ae5eb22e4a9d                                        "kube-apiserver --ad…"   48 minutes ago      Up 48 minutes                           k8s_kube-apiserver_kube-apiserver-node1_kube-system_a05f5db8cefc7b66e205253597fb6d2b_0
c91278f5360a        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 48 minutes ago      Up 48 minutes                           k8s_POD_kube-apiserver-node1_kube-system_a05f5db8cefc7b66e205253597fb6d2b_0
76c5c758432d        0369cf4303ff                                        "etcd --advertise-cl…"   48 minutes ago      Up 48 minutes                           k8s_etcd_etcd-node1_kube-system_530503fbb3a7a96b29518d1019ed1cdd_0
429557e62fd7        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 48 minutes ago      Up 48 minutes                           k8s_POD_etcd-node1_kube-system_530503fbb3a7a96b29518d1019ed1cdd_0
root@node1:~#

(2) List available services
The following command, run from the Kubernetes master node, lists all services running in the cluster:

root@node1:~# kubectl get svc
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   172.168.0.1   <none>        443/TCP   49m

(3) Get cluster information:

root@node1:~# kubectl cluster-info
Kubernetes control plane is running at https://192.168.23.126:6443
KubeDNS is running at https://192.168.23.126:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

(4) Deploy an application to the cluster
We can validate that the cluster is working by deploying an application.

root@node1:~# kubectl apply -f https://k8s.io/examples/pods/commands.yaml
pod/command-demo created

Check whether the pod has started:

root@node1:~# kubectl get pods
NAME           READY   STATUS              RESTARTS   AGE
command-demo   0/1     ContainerCreating   0          26s
root@node1:~# kubectl get pods
NAME           READY   STATUS      RESTARTS   AGE
command-demo   0/1     Completed   0          68s
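
This example pod runs a single command and then exits (hence the Completed status); its output can be inspected with kubectl logs:

root@node1:~# kubectl logs command-demo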

Note: to enable bash completion for kubectl on the master node, run the following:

root@node1:~# echo 'source <(kubectl completion bash)' >>~/.bashrc
root@node1:~# source ~/.bashrc

References

  1. https://linuxconfig.org/how-to-install-kubernetes-on-ubuntu-20-04-focal-fossa-linux
  2. https://www.linuxtechi.com/install-kubernetes-k8s-on-ubuntu-20-04/
  3. https://computingforgeeks.com/how-to-install-kubernetes-dashboard-with-nodeport/