Table of Contents

Create the VMs in VirtualBox

Configure hostname resolution (all nodes)

Disable the firewall (all nodes)

Disable SELinux (all nodes)

Disable swap, which is on by default (all nodes)

Configure the NTP server

Install Docker (all nodes)

Initialize the k8s cluster (on the master node)

Initialization

View cluster information

Deploy the calico network plugin (master node)

Join the worker nodes to the cluster

Install the dashboard

Generate a login token

namespace

POD

Label

Deployment

Service

Scale pods up/down

View Pod logs


Create the VMs in VirtualBox

VM IPs:
k8s-master  10.242.0.180
k8s-node1   10.242.0.181
k8s-node2   10.242.0.182

Each node needs at least 2 CPU cores; with fewer, k8s will not start.

Network: each VM gets two network adapters. With only a bridged adapter the VMs could not reach the internet, and with only NAT they could not reach the internal network, so both are attached: NAT + bridged.
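
To sanity-check the dual-NIC setup, something like the following can be run on each VM (the interface names enp0s3/enp0s8 are taken from the ntpd log further below; adjust if yours differ):

# external reachability through the NAT adapter
ping -c 2 mirrors.aliyun.com
# addresses on both adapters (enp0s3 = NAT, enp0s8 = bridged; assumed mapping)
ip addr show enp0s3
ip addr show enp0s8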

Configure hostname resolution (all nodes)

a. master node

[root@localhost ~]# vi /etc/hosts
10.242.0.180 k8s-master
10.242.0.181 k8s-node1
10.242.0.182 k8s-node2
hostnamectl --static set-hostname k8s-master

b. node1

[root@localhost ~]# vi /etc/hosts
10.242.0.180 k8s-master
10.242.0.181 k8s-node1
10.242.0.182 k8s-node2
hostnamectl --static set-hostname k8s-node1

c. node2

[root@localhost ~]# vi /etc/hosts
10.242.0.180 k8s-master
10.242.0.181 k8s-node1
10.242.0.182 k8s-node2
hostnamectl --static set-hostname k8s-node2
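
A quick, optional check that the /etc/hosts entries work on every node:

# each short name should resolve to the address configured above
for h in k8s-master k8s-node1 k8s-node2; do ping -c 1 "$h"; done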

Disable the firewall (all nodes)

[root@k8s-master ~]# systemctl stop firewalld.service
[root@k8s-master ~]# systemctl disable firewalld.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

Disable SELinux (all nodes)

[root@k8s-master ~]# sed -i.bak 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
[root@k8s-master ~]# setenforce 0

Disable swap, which is on by default (all nodes)

[root@k8s-master ~]# swapoff -a
[root@k8s-master ~]# echo 'swapoff -a' >>/etc/rc.local
[root@k8s-master ~]#
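Appending swapoff -a to /etc/rc.local works; a common alternative (sketched here, not what was done above) is to comment the swap entry out of /etc/fstab so it never returns after a reboot:

# comment out every swap line in /etc/fstab (keeps a .bak backup), then verify
sed -ri.bak 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab
free -m    # the Swap row should show 0 total / 0 used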

Configure the NTP server

a. master node

[root@k8s-master ~]# rpm -qa|grep ntp
fontpackages-filesystem-1.44-8.el7.noarch
ntp-4.2.6p5-29.el7.centos.2.x86_64
ntpdate-4.2.6p5-29.el7.centos.2.x86_64
python-ntplib-0.3.2-1.el7.noarch
[root@k8s-master ~]# systemctl status ntpd
● ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
[root@k8s-master ~]# systemctl stop chronyd
[root@k8s-master ~]# systemctl disable chronyd
Removed symlink /etc/systemd/system/multi-user.target.wants/chronyd.service.
[root@k8s-master ~]# systemctl enable ntpd
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
[root@k8s-master ~]# systemctl start ntpd
[root@k8s-master ~]# vi /etc/ntp.conf
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 127.127.1.0 iburst    # use the local clock as reference so the master can serve time even without internet access
​
[root@k8s-master ~]# systemctl restart ntpd
[root@k8s-master ~]# systemctl status ntpd
● ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2022-08-04 16:16:51 CST; 13s ago
  Process: 6228 ExecStart=/usr/sbin/ntpd -u ntp:ntp $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 6229 (ntpd)
    Tasks: 1
   CGroup: /system.slice/ntpd.service
           └─6229 /usr/sbin/ntpd -u ntp:ntp -g
​
Aug 04 16:16:51 k8s-master ntpd[6229]: Listen normally on 5 virbr0 192.168.122.1 UDP 123
Aug 04 16:16:51 k8s-master ntpd[6229]: Listen normally on 6 lo ::1 UDP 123
Aug 04 16:16:51 k8s-master ntpd[6229]: Listen normally on 7 enp0s3 fe80::5be8:e38:b68e:a30d UDP 123
Aug 04 16:16:51 k8s-master ntpd[6229]: Listen normally on 8 enp0s8 fe80::2dde:dad2:57cc:433b UDP 123
Aug 04 16:16:51 k8s-master ntpd[6229]: Listening on routing socket on fd #25 for interface updates
Aug 04 16:16:51 k8s-master ntpd[6229]: 0.0.0.0 c016 06 restart
Aug 04 16:16:51 k8s-master ntpd[6229]: 0.0.0.0 c012 02 freq_set kernel 0.000 PPM
Aug 04 16:16:51 k8s-master ntpd[6229]: 0.0.0.0 c011 01 freq_not_set
Aug 04 16:16:51 k8s-master systemd[1]: Started Network Time Service.
Aug 04 16:16:52 k8s-master ntpd[6229]: 0.0.0.0 c514 04 freq_mode
[root@k8s-master ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*LOCAL(0)        .LOCL.           5 l   53   64    1    0.000    0.000   0.000
​

b. node nodes (shown on node1; repeat on node2)

[root@k8s-node1 ~]# vi /etc/ntp.conf
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 10.242.0.180 iburst
restrict 10.242.0.180 nomodify notrap noquery
​
[root@k8s-node1 ~]# ntpdate -u 10.242.0.180
 4 Aug 16:49:48 ntpdate[5850]: adjust time server 10.242.0.180 offset 0.009751 sec
[root@k8s-node1 ~]# systemctl enable ntpd
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
[root@k8s-node1 ~]# systemctl start ntpd
[root@k8s-node1 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*k8s-master      LOCAL(0)         6 u    -   64    1    0.511   -5.559   0.027
[root@k8s-node1 ~]#
​

Install Docker (all nodes)

​
[root@k8s-node2 ~]# wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-node2 ~]# yum -y install docker-ce
[root@k8s-node1 ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@k8s-node1 ~]# systemctl start docker
​
# Configure a registry mirror. If you switch mirrors later, run systemctl daemon-reload first and then systemctl restart docker.
[root@k8s-master ~]# vi /etc/docker/daemon.json
​
{
  "registry-mirrors": ["https://p4y8tfz4.mirror.aliyuncs.com"]
}
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart docker
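
The node-join output later in this article warns that Docker is using the cgroupfs cgroup driver while systemd is recommended. To silence that warning, a daemon.json along these lines (a sketch; the mirror URL is the one above) also sets the driver — note that kubelet must then use the matching cgroup driver:

{
  "registry-mirrors": ["https://p4y8tfz4.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}

After editing, reload and restart Docker exactly as above.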
​
Kernel parameters file: /etc/sysctl.d/kubernetes.conf
[root@k8s-node2 ~]# vi /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@k8s-node2 ~]# sysctl -p /etc/sysctl.d/kubernetes.conf
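
If sysctl -p complains that these keys do not exist, the br_netfilter kernel module is probably not loaded; loading it (and persisting it across reboots) usually fixes that:

[root@k8s-node2 ~]# modprobe br_netfilter
[root@k8s-node2 ~]# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
[root@k8s-node2 ~]# sysctl -p /etc/sysctl.d/kubernetes.conf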
​

Configure the k8s yum repository and install the kubeadm, kubelet, and kubectl components

[root@k8s-master ~]# vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
# The component versions must match the version given to kubernetes initialization below.
[root@k8s-node2 ~]# yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
[root@k8s-node2 ~]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
# PS: there is no need to start kubelet here; it is brought up automatically when the node joins the cluster.

Initialize the k8s cluster (on the master node)

[root@k8s-master ~]# kubeadm --help
Usage:
  kubeadm [command]
Available Commands:
  alpha             # experimental commands, still being refined
  config            # show the current configuration
  init              # initialize a cluster
  join              # used by each Node to join the cluster
  reset             # usable on every node; reverts the configuration to its pristine initial state
  upgrade           # upgrade the cluster version
# print also has subcommands; the command below shows the preset configuration that cluster
# initialization would use. Parts of it do not match our real environment and can be overridden at init time.
[root@k8s-master ~]# kubeadm config print init-defaults
imageRepository: k8s.gcr.io         # default image registry; not reachable without a proxy. If a domestic registry carries the required images, specify it manually at init time.
kind: ClusterConfiguration
kubernetesVersion: v1.18.0          # the k8s version this init would use; change it if it is not what you expect.
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12       # default Service network CIDR.
scheduler: {}
# As for pod-to-pod traffic, k8s itself only defines the CNI; actual connectivity needs a third-party plugin such as flannel or calico, each with its own trade-offs. flannel defaults to 10.244.0.0/16, calico to 192.168.0.0/16. Non-default ranges work too, as long as the CIDR in the plugin's yaml matches the pod network CIDR given at cluster init.
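
Instead of passing everything as flags, the defaults can also be dumped to a file, edited, and fed back to init (an optional approach, not the one used below):

[root@k8s-master ~]# kubeadm config print init-defaults > kubeadm-init.yaml
# edit imageRepository / kubernetesVersion / networking in kubeadm-init.yaml, then:
[root@k8s-master ~]# kubeadm init --config kubeadm-init.yaml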

Initialization

kubeadm init \
--apiserver-advertise-address=10.242.0.180 \
--image-repository=registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.18.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--ignore-preflight-errors=all
...(init output trimmed)...
Your Kubernetes control-plane has initialized successfully!
​
To start using your cluster, you need to run the following as a regular user:
​
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
​
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
​
Then you can join any number of worker nodes by running the following on each as root:
​
kubeadm join 10.242.0.180:6443 --token cyjwpu.xip6rpk1begx12lm \
    --discovery-token-ca-cert-hash sha256:12e2fdea8a88cf1e67f25e438fb3da4d871a43e5db0d869aa43908df0387488c
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config

View cluster information

a. Check cluster health

[root@k8s-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
scheduler            Healthy   ok

b. Check the cluster version

[root@k8s-master ~]# kubectl version --short
Client Version: v1.18.0
Server Version: v1.18.0

c. View cluster info

[root@k8s-master ~]# kubectl cluster-info
Kubernetes master is running at https://10.242.0.180:6443
KubeDNS is running at https://10.242.0.180:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
​
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

d. View cluster nodes

[root@k8s-master ~]#  kubectl get node
NAME         STATUS     ROLES    AGE    VERSION
k8s-master   NotReady   master   6m1s   v1.18.0

The output above shows the master as NotReady. Tailing the system log (tail -f /var/log/messages) reveals why: the CNI network plugin has not been installed yet; it is deployed in the next section.

[root@k8s-master ~]# tail -f /var/log/messages
Aug  4 18:51:32 k8s-master kubelet: E0804 18:51:32.195996   13422 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

PS: we can query the cluster from the master because of the .kube/config file in the home directory (a copy of admin.conf). Copying that file to another host lets that host query the cluster as well, but this is not recommended: the file carries full admin credentials and is a security risk.
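
For illustration only, copying the kubeconfig to another host would look like this (again: the file grants cluster-admin access, so do this sparingly):

[root@k8s-master ~]# ssh root@k8s-node1 mkdir -p /root/.kube
[root@k8s-master ~]# scp /etc/kubernetes/admin.conf root@k8s-node1:/root/.kube/config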

Deploy the calico network plugin (master node)

  1. Cluster initialization above set the Pod network CIDR to 10.244.0.0/16, while calico defaults to 192.168.0.0/16, so the manifest needs to be edited first

[root@k8s-master ~]# mkdir -p /server/k8s
[root@k8s-master ~]# cd /server/k8s/
[root@k8s-master k8s]# wget https://docs.projectcalico.org/manifests/calico.yaml --no-check-certificate
[root@k8s-master k8s]# vi calico.yaml
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"

  2. Install the calico network plugin.

[root@k8s-master k8s]# kubectl apply -f calico.yaml
error: unable to recognize "calico.yaml": no matches for kind "PodDisruptionBudget" in version "policy/v1"

The error above means this k8s version does not support that calico release; the version compatibility matrix on the official site shows which combinations work. Falling back to calico v3.18:

[root@k8s-master k8s]# curl https://docs.projectcalico.org/v3.18/manifests/calico.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  184k  100  184k    0     0   133k      0  0:00:01  0:00:01 --:--:--  133k
[root@k8s-master k8s]# ll
total 188
-rw-r--r--. 1 root root 189190 Aug  4 19:03 calico.yaml
[root@k8s-master k8s]# vi calico.yaml
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
[root@k8s-master k8s]# kubectl apply -f calico.yaml
configmap/calico-config created
​

Check the plugin installation status (this takes a few minutes)

[root@k8s-master k8s]# kubectl get pod -n kube-system -w
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7b5bcff94c-lkchg   1/1     Running   0          3m54s
calico-node-jngnv                          1/1     Running   0          3m54s
coredns-7ff77c879f-72zdb                   1/1     Running   0          25m
coredns-7ff77c879f-f224k                   1/1     Running   0          25m
etcd-k8s-master                            1/1     Running   0          26m
kube-apiserver-k8s-master                  1/1     Running   0          26m
kube-controller-manager-k8s-master         1/1     Running   1          26m
kube-proxy-xvzl6                           1/1     Running   0          25m
kube-scheduler-k8s-master                  1/1     Running   1          26m

# -n: select the namespace
# -w: watch pod status in real time
# how quickly pods reach READY depends on download speed

[root@k8s-master k8s]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   58m   v1.18.0

Join the worker nodes to the cluster

kubeadm join 10.242.0.180:6443 --token cyjwpu.xip6rpk1begx12lm \
    --discovery-token-ca-cert-hash sha256:12e2fdea8a88cf1e67f25e438fb3da4d871a43e5db0d869aa43908df0387488c
​
[root@k8s-node1 ~]# kubeadm join 10.242.0.180:6443 --token cyjwpu.xip6rpk1begx12lm \
>     --discovery-token-ca-cert-hash sha256:12e2fdea8a88cf1e67f25e438fb3da4d871a43e5db0d869aa43908df0387488c
W0804 19:44:16.219681    2990 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
​
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
​
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
​
[root@k8s-master k8s]# kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   Ready      master   61m   v1.18.0
k8s-node1    NotReady   <none>   93s   v1.18.0

TODO: the cause of the node's NotReady state has not been tracked down yet (most likely it clears once the calico and kube-proxy pods finish starting on the new node).
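
If more nodes need to join after the bootstrap token has expired (the default TTL is 24 hours), a fresh join command can be printed on the master:

[root@k8s-master ~]# kubeadm token create --print-join-command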

Install the dashboard

[root@k8s-master ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml
[root@k8s-master ~]# vi recommended.yaml
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001           # add this line; the usable port range here is 30000-32767.
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort        # add this line
[root@k8s-master ~]# kubectl apply -f recommended.yaml
[root@k8s-master ~]# kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-6b4884c9d5-xk45h   1/1     Running   0          4m37s
kubernetes-dashboard-7f99b75bf4-wwkxb        1/1     Running   0          4m37s

Generate a login token

[root@k8s-master ~]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@k8s-master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
[root@k8s-master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name:         dashboard-admin-token-66h75
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: c5e02604-5fe7-406d-b72a-25d1b48f1372
​
Type:  kubernetes.io/service-account-token
​
Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImdGbVFEQVd5eVZMR2pNN1M4TnZ5aTdFOU1rZzU3NmhUYm92enVoc054Y2cifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNjZoNzUiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYzVlMDI2MDQtNWZlNy00MDZkLWI3MmEtMjVkMWI0OGYxMzcyIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.s-ubWGu9-YR03eN5PEvuoss8w4GdPkA3Ud8bt313SATdTaW6Q1rY4NMC3nfA2FQ4PhyNOL0vmTsOWlsLqNPwEHltDaEVtw2M6RLIYMD2hofZm6pmu1KASnqmYKFaR0Ch3aTh4fA1jrnNQgnUS5lgJoDAtQPhwCNjq4CPllgXVkXROaXdiga0qmyF7TlzOFj57AYiwP2OkD9-xL081oXjvp9kjYQphWhMsaPSg48S3r24qBeybDzXqv_vwnltIYS0ieHD31Zp60DD4YFKY3jnBJghqgH8wvy5ABkVBIwVHf4_FSWDvWncVbZHId0vC_TdDG2krsYu80wyDFWeOUUtGw
[root@k8s-master ~]#
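
With the NodePort set to 30001 above, the dashboard should be reachable at https://<any-node-ip>:30001 (self-signed certificate, so the browser will warn); log in by pasting the token printed above. A quick reachability check:

[root@k8s-master ~]# curl -k https://10.242.0.180:30001/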
​

namespace

Create a namespace

[root@master ~]# kubectl create ns dev
namespace/dev created

Delete a namespace

[root@master ~]# kubectl delete ns dev
namespace "dev" deleted
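
The same can be done declaratively; a minimal sketch using kubectl's client-side dry run to generate the manifest first:

[root@master ~]# kubectl create ns dev --dry-run=client -o yaml > ns-dev.yaml
[root@master ~]# kubectl apply -f ns-dev.yaml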

POD
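
The examples below assume an nginx Pod already exists in the dev namespace; it is assumed here to have been created through a Deployment (which matches the controller behaviour shown further down), e.g.:

[root@master ~]# kubectl create deploy nginx --image=nginx -n dev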

View basic Pod information

[root@master ~]# kubectl get pods -n dev

View detailed Pod information

[root@master ~]# kubectl describe pod nginx -n dev

Delete a specific Pod

# delete the specified Pod
[root@master ~]# kubectl delete pod nginx -n dev
pod "nginx" deleted
​
# the delete is reported as successful, but querying again shows that a new Pod has appeared
[root@master ~]# kubectl get pods -n dev
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          21s
​
# that is because this Pod was created by a Pod controller: the controller watches its Pods and immediately rebuilds any that die
# to really delete the Pod, the Pod controller itself must be deleted
​
# first, list the Pod controllers in the current namespace
[root@master ~]# kubectl get deploy -n  dev
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           9m7s
​
# next, delete the Pod's controller
[root@master ~]# kubectl delete deploy nginx -n dev
deployment.apps "nginx" deleted
​
# a moment later, the Pod is gone
[root@master ~]# kubectl get pods -n dev
No resources found in dev namespace.

Create pod-nginx.yaml

with the following content

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: dev
spec:
  containers:
  - image: registry.cn-hangzhou.aliyuncs.com/zoeyqq/nginx
    name: pod
    ports:
    - name: nginx-port
      containerPort: 82
      protocol: TCP
​

Create: kubectl create -f pod-nginx.yaml

Delete: kubectl delete -f pod-nginx.yaml

Deploy nginx

​
[root@k8s-master ~]# kubectl describe pod nginx -n dev
Name:         nginx
Namespace:    dev
Priority:     0
Node:         <none>
Labels:       <none>
Annotations:  <none>
Status:       Pending
IP:
IPs:          <none>
Containers:
  pod:
    Image:        registry.cn-hangzhou.aliyuncs.com/zoeyqq/nginx
    Port:         82/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-7tkps (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  default-token-7tkps:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-7tkps
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
[root@k8s-master ~]# kubectl taint nodes --all node-role.kubernetes.io/master-
node/k8s-master untainted
[root@k8s-master ~]# kubectl get pods -n dev
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          5m26s
  • Allow pods to be scheduled on the master node:

kubectl taint nodes --all node-role.kubernetes.io/master-

  • Disallow scheduling on the master again:

kubectl taint nodes k8s-master node-role.kubernetes.io/master=:NoSchedule

Taint effect options

  • NoSchedule: pods will never be scheduled onto the node

  • PreferNoSchedule: the scheduler avoids the node whenever possible

  • NoExecute: new pods are not scheduled, and pods already running on the node are evicted
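
The taints currently set on a node can be checked like this:

[root@k8s-master ~]# kubectl describe node k8s-master | grep -i taint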

Label

A Label attaches an identifying key/value pair to a resource object

1. Add a label
[root@k8s-master ~]# kubectl label pod nginx version=1.0 -n dev
pod/nginx labeled
[root@k8s-master ~]# kubectl get pod nginx -n dev --show-labels
NAME    READY   STATUS    RESTARTS   AGE     LABELS
nginx   1/1     Running   0          2m51s   version=1.0
2. Update a label
[root@k8s-master ~]# kubectl label pod nginx version=2.0 -n dev --overwrite
pod/nginx labeled
[root@k8s-master ~]# kubectl get pod nginx -n dev --show-labels
NAME    READY   STATUS    RESTARTS   AGE     LABELS
nginx   1/1     Running   0          3m45s   version=2.0
kubectl label pod nginx version=2.1 -n dev --overwrite
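
Labels pay off through selectors; two more common operations (names as used above):

# list only the pods carrying a given label
[root@k8s-master ~]# kubectl get pod -n dev -l version=2.0
# remove a label (note the trailing minus)
[root@k8s-master ~]# kubectl label pod nginx version- -n dev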

Declarative form:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: dev
  labels:
    version: "3.0" 
    env: "test"
spec:
  containers:
  - image: nginx:latest
    name: pod
    ports:
    - name: nginx-port
      containerPort: 80
      protocol: TCP

Deployment

In kubernetes, the Pod is the smallest unit of control, but kubernetes rarely controls Pods directly; it usually works through Pod controllers. A Pod controller manages Pods, keeps them in the desired state, and restarts or rebuilds them when they fail.

kubernetes provides many kinds of Pod controllers; this section introduces only one: Deployment.

# Command format: kubectl create deployment NAME [flags]
# --image      the pod's image
# --port       the container port
# --replicas   how many pods to create
# --namespace  the target namespace
# (note: some kubectl releases around 1.18 do not support --port/--replicas on create deployment;
#  if yours rejects them, create the deployment first and then use kubectl scale)
[root@master ~]# kubectl create deploy nginx --image=nginx:latest --port=80 --replicas=3 -n dev

Create deploy-nginx.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/zoeyqq/nginx
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP

Create: kubectl create -f deploy-nginx.yaml

Delete: kubectl delete -f deploy-nginx.yaml

View the created Pods
[root@k8s-master ~]# kubectl get pods -n dev
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7f99fc475c-hx9js   1/1     Running   0          71s
nginx-7f99fc475c-jkzvm   1/1     Running   0          71s
nginx-7f99fc475c-zhfkg   1/1     Running   0          71s
View the deployment's information
[root@k8s-master ~]# kubectl get deploy -n dev
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   3/3     3            3           2m4s
​
[root@k8s-master ~]# kubectl get deploy -n dev -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES                                           SELECTOR
nginx   3/3     3            3           2m25s   nginx        registry.cn-hangzhou.aliyuncs.com/zoeyqq/nginx   run=nginx
# UP-TO-DATE: number of replicas successfully updated
# AVAILABLE: number of replicas currently available
​
View the deployment's details
[root@k8s-master ~]# kubectl describe deploy nginx -n dev
Name:                   nginx
Namespace:              dev
CreationTimestamp:      Fri, 05 Aug 2022 14:37:11 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               run=nginx
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  run=nginx
  Containers:
   nginx:
    Image:        registry.cn-hangzhou.aliyuncs.com/zoeyqq/nginx
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-7f99fc475c (3/3 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  3m22s  deployment-controller  Scaled up replica set nginx-7f99fc475c to 3
​
# delete
[root@master ~]# kubectl delete deploy nginx -n dev
deployment.apps "nginx" deleted

Service

A Deployment can run a group of Pods to provide a highly available service.

Although every Pod is assigned its own Pod IP, two problems remain:

  • a Pod IP changes whenever the Pod is rebuilt
  • a Pod IP is a virtual address visible only inside the cluster; it cannot be reached from outside

That makes the service awkward to consume, so kubernetes designed the Service to solve it.

A Service can be seen as the external access point for a group of Pods of the same kind. Through a Service, an application gets service discovery and load balancing with little effort.

Step 1: create a Service reachable inside the cluster

# expose the Service
[root@master ~]# kubectl expose deploy nginx --name=svc-nginx1 --type=ClusterIP --port=80 --target-port=80 -n dev
service/svc-nginx1 exposed
​
# view the service
[root@master ~]# kubectl get svc svc-nginx1 -n dev -o wide
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE     SELECTOR
svc-nginx1   ClusterIP   10.109.179.231   <none>        80/TCP    3m51s   run=nginx
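
A ClusterIP is routable only from inside the cluster (e.g. from a node or another pod); a quick test from the master:

[root@master ~]# curl 10.109.179.231:80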
​

Step 2: create a Service also reachable from outside the cluster

# The Service created above has type ClusterIP; that IP is reachable only inside the cluster.
# For a Service that is also reachable from outside, set type to NodePort.
[root@master ~]# kubectl expose deploy nginx --name=svc-nginx2 --type=NodePort --port=80 --target-port=80 -n dev
service/svc-nginx2 exposed
​
# Now a NodePort-type Service appears, with a port pair (80:31928/TCP)
[root@master ~]# kubectl get svc  svc-nginx2  -n dev -o wide
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE    SELECTOR
svc-nginx2    NodePort    10.100.94.0      <none>        80:31928/TCP   9s     run=nginx
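
The NodePort (31928 here) is reachable from outside the cluster on any node's address:

curl http://10.242.0.180:31928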
​

Delete a Service

[root@master ~]# kubectl delete svc svc-nginx1 -n dev
service "svc-nginx1" deleted

Create svc-nginx.yaml with the following content

apiVersion: v1
kind: Service
metadata:
  name: svc-nginx
  namespace: dev
spec:
  clusterIP: 10.109.179.231 # pin the Service's cluster-internal IP
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
  type: ClusterIP

Create: kubectl create -f svc-nginx.yaml

Delete: kubectl delete -f svc-nginx.yaml

kubectl get svc svc-nginx -n dev -o wide

apiVersion: v1
kind: Service
metadata:
  name: svc-nginx
  namespace: dev 
spec:
  clusterIP: 10.109.179.231 # pin the Service's cluster-internal IP
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30099
  selector:
    run: nginx
  type: NodePort

kubectl create -f svc-nginx.yaml

Configuration notes:

kind: Service states that this yaml creates a Service

metadata holds the Service's metadata

metadata.name is the Service's name, here svc-nginx

metadata.labels would attach labels to the Service itself (none are set in this yaml)

metadata.namespace is the Service's namespace, here the dev namespace created earlier

spec is the detailed configuration of the Service

spec.type set to NodePort makes this a node-port-forwarding Service

spec.selector declares which labelled pods the Service groups and serves; here the pods labelled run: nginx

spec.ports.port is the port the Service itself binds

spec.ports.name would give the port a name (not set in this yaml). spec.ports.protocol: TCP is the protocol used to forward requests to the containers; the nginx we deployed serves HTTP, so TCP is used. spec.ports.targetPort: 80 is the container port external requests are forwarded to, i.e. port 80 exposed by the deployment's pod containers

spec.ports.nodePort: 30099 is the node port the Service opens to the outside

[root@k8s-master ~]# kubectl create -f deploy-nginx.yaml
deployment.apps/nginx created
[root@k8s-master ~]# kubectl create -f svc-nginx.yaml
service/svc-nginx created
[root@k8s-master ~]# kubectl get pod -o wide -n dev
NAME                     READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
nginx-7f99fc475c-9jvsp   1/1     Running   0          26s   10.244.235.213   k8s-master   <none>           <none>
nginx-7f99fc475c-svrck   1/1     Running   0          26s   10.244.235.212   k8s-master   <none>           <none>
nginx-7f99fc475c-vs9nm   1/1     Running   0          26s   10.244.235.211   k8s-master   <none>           <none>
[root@k8s-master ~]# kubectl get service -n dev
NAME        TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
svc-nginx   NodePort   10.109.179.231   <none>        80:30099/TCP   28s
[root@k8s-master ~]# kubectl describe service svc-nginx -n dev
Name:                     svc-nginx
Namespace:                dev
Labels:                   <none>
Annotations:              <none>
Selector:                 run=nginx
Type:                     NodePort
IP:                       10.109.179.231
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30099/TCP
Endpoints:                10.244.235.211:80,10.244.235.212:80,10.244.235.213:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
[root@k8s-master ~]#
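With the NodePort pinned to 30099 in the yaml, the service can be reached on any node's address:

[root@k8s-master ~]# curl http://10.242.0.180:30099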

List services: kubectl get service -n dev

Service details: kubectl describe service svc-nginx -n dev

Scale pods up/down

Manual scaling: kubectl scale deployment DEPLOY_NAME -n NAMESPACE --replicas=N

Alternatively, edit the deployment with kubectl edit; the change takes effect as soon as the edit is saved (see the sketch below).
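
Both approaches sketched (the autoscale variant is an extra option and requires metrics-server, which this article does not install):

# open the Deployment in $EDITOR; changing .spec.replicas takes effect on save
kubectl edit deployment nginx -n dev
# or let k8s scale between 2 and 5 replicas based on CPU usage
kubectl autoscale deployment nginx -n dev --min=2 --max=5 --cpu-percent=80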

​
[root@k8s-master ~]# kubectl get deploy -n dev
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   3/3     3            3           105m
[root@k8s-master ~]# kubectl scale deployment nginx -n dev --replicas=2
deployment.apps/nginx scaled
[root@k8s-master ~]# kubectl get deploy -n dev
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   2/2     2            2           105m
[root@k8s-master ~]# kubectl scale deployment nginx -n dev --replicas=3
deployment.apps/nginx scaled
[root@k8s-master ~]# kubectl get deploy -n dev
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   3/3     3            3           3h20m
[root@k8s-master ~]# kubectl get pod -n dev
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7f99fc475c-694nf   1/1     Running   0          35s
nginx-7f99fc475c-svrck   1/1     Running   0          3h20m
nginx-7f99fc475c-vs9nm   1/1     Running   0          3h20m
​

View Pod logs

[root@k8s-master ~]# kubectl get pod -n dev
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7f99fc475c-694nf   1/1     Running   0          35s
nginx-7f99fc475c-svrck   1/1     Running   0          3h20m
nginx-7f99fc475c-vs9nm   1/1     Running   0          3h20m
[root@k8s-master ~]# kubectl logs -f nginx-7f99fc475c-694nf -n dev
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
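
A few commonly used kubectl logs variations (pod name as above):

kubectl logs nginx-7f99fc475c-694nf -n dev --tail=20     # only the last 20 lines
kubectl logs nginx-7f99fc475c-694nf -n dev --since=1h    # only the last hour
kubectl logs nginx-7f99fc475c-694nf -n dev -p            # previous container instance (after a restart)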
​