1. How does K8s run containers?

A: K8s runs containers by defining a Pod resource and running the containers inside the Pod. The Pod is the smallest unit of resource in K8s.

 

2. How do you create a Pod resource?

A: In K8s, every resource can be created from a YAML configuration file, and a Pod is no exception.

 

3. To start, create a Pod. First make a k8s directory, create a pod directory inside it, then run vim nginx_pod.yaml.

[root@k8s-master ~]# mkdir k8s
[root@k8s-master ~]# cd k8s/
[root@k8s-master k8s]# ls
[root@k8s-master k8s]# mkdir pod
[root@k8s-master k8s]# ls
pod
[root@k8s-master k8s]# cd pod/
[root@k8s-master pod]# vim nginx_pod.yaml
[root@k8s-master pod]# 

The contents of nginx_pod.yaml are as follows:

# API version declaration.
apiVersion: v1
# kind is the resource type; here the resource is a Pod.
kind: Pod
# The resource's name lives under its metadata field.
metadata:
  # The name property is nginx, so the Pod is called nginx.
  name: nginx
  # Attach a label to the Pod (app: web); labels are used for selection later.
  labels:
    app: web
# spec holds the details; here it defines one container.
spec:
  # Define a container; multiple containers can be declared.
  containers:
    # The container is named nginx.
    - name: nginx
      # The image to use; it can come from the public registry or a private one.
      image: nginx:1.13
      # ports defines the container's ports.
      ports:
        # The container port is 80; additional ports are extra list items.
        - containerPort: 80

In K8s, once a resource is declared in a configuration file, it can be created with kubectl create -f, pointing at the location of nginx_pod.yaml.

[root@k8s-master pod]# kubectl create -f nginx_pod.yaml 
Error from server (ServerTimeout): error when creating "nginx_pod.yaml": No API token found for service account "default", retry after the token is automatically created and added to the service account
[root@k8s-master pod]# 

It failed. The api-server configuration file needs to be modified to disable the ServiceAccount admission plugin.

[root@k8s-master pod]# vim /etc/kubernetes/apiserver 

Disable ServiceAccount in that file.
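The edit itself is not shown above, so here is a sketch of the line in question, assuming the stock CentOS packaging of this k8s release (the exact plugin list on your install may differ): remove ServiceAccount from the admission-control list.

```shell
# /etc/kubernetes/apiserver
# Before (ServiceAccount present):
# KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
# After (ServiceAccount removed):
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
```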

Because the api-server configuration file changed, api-server now needs a restart.

[root@k8s-master pod]# systemctl restart kube-apiserver.service 
[root@k8s-master pod]# 

After api-server restarts, run the create command again.

[root@k8s-master pod]# kubectl create -f nginx_pod.yaml 
pod "nginx" created
[root@k8s-master pod]# 

Now list the Pods that have been created; the get command lists resources:

[root@k8s-master pod]# kubectl get pod
NAME      READY     STATUS              RESTARTS   AGE
nginx     0/1       ContainerCreating   0          1m
[root@k8s-master pod]# kubectl get pod nginx
NAME      READY     STATUS              RESTARTS   AGE
nginx     0/1       ContainerCreating   0          1m
[root@k8s-master pod]# 

Check the component status:

[root@k8s-master pod]# kubectl get componentstatus 
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
[root@k8s-master pod]# 

List the nodes:

[root@k8s-master pod]# kubectl get node
NAME         STATUS    AGE
k8s-master   Ready     22h
k8s-node2    Ready     22h
k8s-node3    Ready     21h
[root@k8s-master pod]# kubectl get nodes
NAME         STATUS    AGE
k8s-master   Ready     22h
k8s-node2    Ready     22h
k8s-node3    Ready     21h
[root@k8s-master pod]# 

Watch this Pod closely: it stays in the ContainerCreating state and never reaches 1/1 ready.

[root@k8s-master pod]# kubectl get pod nginx
NAME      READY     STATUS              RESTARTS   AGE
nginx     0/1       ContainerCreating   0          4m
[root@k8s-master pod]# 

Use kubectl describe pod nginx to see exactly where it is stuck:

[root@k8s-master pod]# kubectl describe pod nginx
Name:        nginx
Namespace:    default
Node:        k8s-node3/192.168.110.135
Start Time:    Fri, 05 Jun 2020 21:17:18 +0800
Labels:        app=web
Status:        Pending
IP:        
Controllers:    <none>
Containers:
  nginx:
    Container ID:        
    Image:            nginx:1.13
    Image ID:            
    Port:            80/TCP
    State:            Waiting
      Reason:            ContainerCreating
    Ready:            False
    Restart Count:        0
    Volume Mounts:        <none>
    Environment Variables:    <none>
Conditions:
  Type        Status
  Initialized     True 
  Ready     False 
  PodScheduled     True 
No volumes.
QoS Class:    BestEffort
Tolerations:    <none>
Events:
  FirstSeen    LastSeen    Count    From            SubObjectPath    Type        Reason        Message
  ---------    --------    -----    ----            -------------    --------    ------        -------
  7m        7m        1    {default-scheduler }            Normal        Scheduled    Successfully assigned nginx to k8s-node3
  6m        1m        6    {kubelet k8s-node3}            Warning        FailedSync    Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request.  details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)"

  6m    5s    25    {kubelet k8s-node3}        Warning    FailedSync    Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"registry.access.redhat.com/rhel7/pod-infrastructure:latest\""

[root@k8s-master pod]# 

The scheduler placed the Pod on the k8s-node3 node.

You can also use kubectl get pod nginx -o wide to see which node it was scheduled to.

[root@k8s-master pod]# kubectl get pod nginx -o wide
NAME      READY     STATUS              RESTARTS   AGE       IP        NODE
nginx     0/1       ContainerCreating   0          10m       <none>    k8s-node3
[root@k8s-master pod]# 

The failure happens while pulling the image, from the address registry.access.redhat.com/rhel7/pod-infrastructure:latest.

The pull happens on the k8s-node3 node. Running docker pull there reproduces the error, which says the file cannot be opened: open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory.

[root@k8s-node3 ~]# docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
Trying to pull repository registry.access.redhat.com/rhel7/pod-infrastructure ... 
open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory
[root@k8s-node3 ~]# 

But the certificate file appears to exist, so why can it not be opened? Because it is a symlink, and the symlink's target does not exist, so the open fails.

[root@k8s-node3 ~]# ls /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt
/etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt
[root@k8s-node3 ~]# 

Fixing the certificate would solve this, but there is actually no need to. Think about it: why does starting a Pod resource pull from this particular image address rather than somewhere else? That is decided by a configuration file.

[root@k8s-node3 ~]# vim /etc/kubernetes/kubelet

The image address defined in this configuration file is registry.access.redhat.com/rhel7/pod-infrastructure:latest. It cannot be downloaded because of the certificate error, but the same image is available elsewhere. Search for it with docker search, which queries the official Docker Hub:

[root@k8s-node3 ~]# docker search pod-infrastructure
INDEX       NAME                                          DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
docker.io   docker.io/neurons/pod-infrastructure          k8s pod base container image                    2                    
docker.io   docker.io/tianyebj/pod-infrastructure         registry.access.redhat.com/rhel7/pod-infra...   2                    
docker.io   docker.io/w564791/pod-infrastructure          latest                                          1                    
docker.io   docker.io/xiaotech/pod-infrastructure         registry.access.redhat.com/rhel7/pod-infra...   1                    [OK]
docker.io   docker.io/092800/pod-infrastructure                                                           0                    
docker.io   docker.io/812557942/pod-infrastructure                                                        0                    
docker.io   docker.io/cnkevin/pod-infrastructure                                                          0                    
docker.io   docker.io/fungitive/pod-infrastructure        registry.access.redhat.com/rhel7/pod-infra...   0                    
docker.io   docker.io/jqka/pod-infrastructure             redhat pod                                      0                    [OK]
docker.io   docker.io/k189189/pod-infrastructure                                                          0                    
docker.io   docker.io/meitham/pod-infrastructure          registry.access.redhat.com/rhel7/pod-infra...   0                    
docker.io   docker.io/oudi/pod-infrastructure             pod-infrastructure                              0                    [OK]
docker.io   docker.io/panshx/pod-infrastructure           FROM registry.access.redhat.com/rhel7/pod-...   0                    
docker.io   docker.io/pkcsloye/pod-infrastructure         docker pull registry.access.redhat.com/rhe...   0                    [OK]
docker.io   docker.io/shadowalker911/pod-infrastructure                                                   0                    
docker.io   docker.io/singlestep/pod-infrastructure                                                       0                    
docker.io   docker.io/statemood/pod-infrastructure        Automated build from registry.access.redha...   0                    [OK]
docker.io   docker.io/wangdjtest/pod-infrastructure       pod-infrastructure:latest                       0                    [OK]
docker.io   docker.io/william198689/pod-infrastructure                                                    0                    
docker.io   docker.io/xielongzhiying/pod-infrastructure   pod-infrastructure                              0                    [OK]
docker.io   docker.io/zdwork/pod-infrastructure                                                           0                    
docker.io   docker.io/zengshaoyong/pod-infrastructure     pod-infrastructure                              0                    [OK]
docker.io   docker.io/zhanghongyang/pod-infrastructure                                                    0                    
docker.io   docker.io/zhangspook/pod-infrastructure       registry.access.redhat.com/rhel7/pod-infra...   0                    [OK]
docker.io   docker.io/zm274310577/pod-infrastructure                                                      0                    
[root@k8s-node3 ~]# 

Copy the image address docker.io/tianyebj/pod-infrastructure into the /etc/kubernetes/kubelet configuration file.
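The change is the pod-infra-container line; a sketch of the edit, using the mirror image found above (the same line appears in the full kubelet file shown later for k8s-node2):

```shell
# /etc/kubernetes/kubelet
# Before:
# KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
# After:
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=docker.io/tianyebj/pod-infrastructure:latest"
```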

Because the configuration file changed, restart kubelet for it to take effect.

[root@k8s-node3 ~]# systemctl restart kubelet.service 
[root@k8s-node3 ~]# 

After the restart, go back to the Master node and check the Pod's description to see whether it has retried.

[root@k8s-master pod]# kubectl describe pod nginx

Keep running the command above to observe progress; on k8s-node3 you can also watch docker's temporary download directory to see whether it is retrying.

[root@k8s-node3 ~]# ls /var/lib/docker/tmp/
GetImageBlob232005897  GetImageBlob649330130  GetImageBlob688223444
[root@k8s-node3 ~]# ls /var/lib/docker/tmp/
GetImageBlob232005897  GetImageBlob649330130  GetImageBlob688223444
[root@k8s-node3 ~]# ls /var/lib/docker/tmp/
GetImageBlob232005897  GetImageBlob649330130  GetImageBlob688223444
[root@k8s-node3 ~]# ll /var/lib/docker/tmp/
total 16324
-rw-------. 1 root root 9750959 Jun  5 21:49 GetImageBlob649330130
-rw-------. 1 root root     201 Jun  5 21:48 GetImageBlob688223444
[root@k8s-node3 ~]# 

/var/lib/docker/tmp/ is docker's temporary download directory. The pull has timed out; even with the alternative image address it still times out.

How to fix docker's I/O timeouts? Anyone familiar with docker knows that in China you can configure a registry mirror to speed up pulls. The method is as follows:

Since my docker version is 1.13.1, the setup differs from the newer 18.09/18.06 releases.

[root@k8s-node3 ~]# vim /etc/sysconfig/docker

The change:

# Trust the private registry and use a registry mirror for faster pulls
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false
--registry-mirror=https://registry.docker-cn.com --insecure-registry=192.168.110.133:5000'

Then restart docker:

[root@k8s-node3 ~]# systemctl restart docker
[root@k8s-node3 ~]# 

After docker restarts, the Master will retry again after a while. Watch with kubectl describe pod nginx on the Master and with ll -h /var/lib/docker/tmp/ on k8s-node3.

In my case the download eventually finished; if it does not, you can also upload the image tarballs to the server instead. Checking with kubectl get pod nginx shows that Nginx is now running.

Use docker's load command to import the needed images:

[root@k8s-node3 ~]# docker load -i pod-infrastructure-latest.tar.gz
[root@k8s-node3 ~]# docker load -i docker_nginx1.13.tar.gz

If the earlier pull had not finished and you have uploaded the images to the server, restart docker and then check from the Master with kubectl describe pod nginx: the images are recognized. kubectl get pod nginx -o wide shows the container running. That solves starting this container on k8s-node3.

[root@k8s-master pod]# kubectl get pod nginx -o wide
NAME      READY     STATUS    RESTARTS   AGE       IP            NODE
nginx     1/1       Running   1          1h        172.16.13.2   k8s-node3
[root@k8s-master pod]#

But now delete this Pod and then create it again.

[root@k8s-master pod]# kubectl delete pod nginx
pod "nginx" deleted
[root@k8s-master pod]# kubectl get pod nginx -o wide
Error from server (NotFound): pods "nginx" not found
[root@k8s-master pod]# 

This time the Pod was scheduled to the k8s-node2 node.

[root@k8s-master pod]# kubectl create -f nginx_pod.yaml 
pod "nginx" created
[root@k8s-master pod]# kubectl get pod nginx -o wide
NAME      READY     STATUS              RESTARTS   AGE       IP        NODE
nginx     0/1       ContainerCreating   0          20s       <none>    k8s-node2
[root@k8s-master pod]# kubectl get pod nginx -o wide
NAME      READY     STATUS              RESTARTS   AGE       IP        NODE
nginx     0/1       ContainerCreating   0          37s       <none>    k8s-node2
[root@k8s-master pod]# 

So k8s-node2's image address needs the same fix: it will still try to download from Red Hat's registry, and any image not already present locally gets pulled all over again. Starting a container this way takes a long time, and on an unstable network the container may never start on that node at all. With many nodes this becomes a real headache. When there are many nodes, use a private registry: images that already exist can be pulled from your own registry, saving time and network resources.

[root@k8s-node2 ~]# vim /etc/kubernetes/kubelet

The file contents:

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
# Change the listen address from 127.0.0.1 to this node's address, 192.168.110.134
KUBELET_ADDRESS="--address=192.168.110.134"

# The port for the info server to serve on
# The kubelet port is 10250
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
# Change the hostname override from 127.0.0.1 to k8s-node2
KUBELET_HOSTNAME="--hostname-override=k8s-node2"

# location of the api-server
# The master node's api-server address and port
KUBELET_API_SERVER="--api-servers=http://192.168.110.133:8080"

# pod infrastructure container
# KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=docker.io/tianyebj/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""

Have Node machines pull images from the internal network first. If the internal registry lacks an image, pull it once from the internet, push it to the private registry, and every later node pulls from the private registry instead, which saves a great deal of time and bandwidth. The fix, then, is to run your own private registry.

# The long-term fix: run your own private registry. To save hardware, use the
# official registry image here; other registry implementations work too.
docker run -d -p 5000:5000 --restart=always --name registry -v /opt/myregistry:/var/lib/registry registry

First search for the image; this one is official:

[root@k8s-master pod]# docker search registry
INDEX       NAME                                           DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
docker.io   docker.io/registry                             The Docker Registry 2.0 implementation for...   2980      [OK]       
docker.io   docker.io/distribution/registry                WARNING: NOT the registry official image!!...   57                   [OK]
docker.io   docker.io/stefanscherer/registry-windows       Containerized docker registry for Windows ...   31                   
docker.io   docker.io/budry/registry-arm                   Docker registry build for Raspberry PI 2 a...   18                   
docker.io   docker.io/deis/registry                        Docker image registry for the Deis open so...   12                   
docker.io   docker.io/anoxis/registry-cli                  You can list and delete tags from your pri...   9                    [OK]
docker.io   docker.io/jc21/registry-ui                     A nice web interface for managing your Doc...   8                    
docker.io   docker.io/vmware/registry                                                                      6                    
docker.io   docker.io/allingeek/registry                   A specialization of registry:2 configured ...   4                    [OK]
docker.io   docker.io/pallet/registry-swift                Add swift storage support to the official ...   4                    [OK]
docker.io   docker.io/arm32v6/registry                     The Docker Registry 2.0 implementation for...   3                    
docker.io   docker.io/goharbor/registry-photon                                                             2                    
docker.io   docker.io/concourse/registry-image-resource                                                    1                    
docker.io   docker.io/conjurinc/registry-oauth-server      Docker registry authn/authz server backed ...   1                    
docker.io   docker.io/ibmcom/registry                      Docker Image for IBM Cloud private-CE (Com...   1                    
docker.io   docker.io/metadata/registry                    Metadata Registry is a tool which helps yo...   1                    [OK]
docker.io   docker.io/webhippie/registry                   Docker images for Registry                      1                    [OK]
docker.io   docker.io/convox/registry                                                                      0                    
docker.io   docker.io/deepsecurity/registryviews           Deep Security Smart Check                       0                    
docker.io   docker.io/dwpdigital/registry-image-resource   Concourse resource type                         0                    
docker.io   docker.io/gisjedi/registry-proxy               Reverse proxy of registry mirror image gis...   0                    
docker.io   docker.io/kontena/registry                     Kontena Registry                                0                    
docker.io   docker.io/lorieri/registry-ceph                Ceph Rados Gateway (and any other S3 compa...   0                    
docker.io   docker.io/pivnet/registry-gcloud-image                                                         0                    
docker.io   docker.io/upmcenterprises/registry-creds                                                       0                    
[root@k8s-master pod]# 

You can pull this image directly, or upload it.

[root@k8s-master pod]# docker pull docker.io/registry
Using default tag: latest
Trying to pull repository docker.io/library/registry ... 
latest: Pulling from docker.io/library/registry
486039affc0a: Pull complete 
ba51a3b098e6: Pull complete 
8bb4c43d6c8e: Pull complete 
6f5f453e5f2d: Pull complete 
42bc10b72f42: Pull complete 
Digest: sha256:7d081088e4bfd632a88e3f3bcd9e007ef44a796fddfe3261407a3f9f04abe1e7
Status: Downloaded newer image for docker.io/registry:latest
[root@k8s-master pod]# 

Next, start the private registry:

[root@k8s-master pod]# docker run -d -p 5000:5000 --restart=always --name registry -v /opt/myregistry:/var/lib/registry registry
a27987d97039c8596ad2a2150cee9e3fbe7580c8131e9f258aea8a922c22a237
[root@k8s-master pod]# 

The private registry is up; confirm with docker ps:

[root@k8s-master pod]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
a27987d97039        registry            "/entrypoint.sh /e..."   39 seconds ago      Up 37 seconds       0.0.0.0:5000->5000/tcp   registry
6d459781a3e5        busybox             "sh"                     10 hours ago        Up 10 hours                                  gracious_nightingale
[root@k8s-master pod]# 

Now try pushing an image to this private registry:

[root@k8s-node3 ~]# docker images 
REPOSITORY                              TAG                 IMAGE ID            CREATED             SIZE
docker.io/busybox                       latest              1c35c4412082        2 days ago          1.22 MB
docker.io/nginx                         1.13                ae513a47849c        2 years ago         109 MB
docker.io/tianyebj/pod-infrastructure   latest              34d3450d733b        3 years ago         205 MB
[root@k8s-node3 ~]# docker tag docker.io/tianyebj/pod-infrastructure:latest 192.168.110.133:5000/pod-infrastructure:latest
[root@k8s-node3 ~]# docker push 192.168.110.133:5000/pod-infrastructure
The push refers to a repository [192.168.110.133:5000/pod-infrastructure]
Get https://192.168.110.133:5000/v1/_ping: http: server gave HTTP response to HTTPS client
[root@k8s-node3 ~]# 

This fails because the client is speaking HTTPS while the docker registry serves plain HTTP. One fix is to let requests to "192.168.110.133:5000" go over HTTP: create a "daemon.json" file in the "/etc/docker/" directory with the following contents.

[root@k8s-node3 ~]# cd /etc/docker/
[root@k8s-node3 docker]# echo '{ "insecure-registries":["192.168.110.133:5000"] }' > /etc/docker/daemon.json
[root@k8s-node3 docker]# 
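A malformed daemon.json will stop dockerd from starting at all, so it is worth validating the JSON before the restart. A minimal sketch using Python's standard json.tool module (assumes python3 is available on the host; any JSON validator works just as well):

```shell
# Write the same one-line config to a scratch file and parse it as JSON;
# a non-zero exit (with an error message) means there is a typo.
echo '{ "insecure-registries":["192.168.110.133:5000"] }' > /tmp/daemon.json
python3 -m json.tool /tmp/daemon.json
```

If the command prints the pretty-printed JSON back, the file is safe to drop into /etc/docker/.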

Restart docker, and the problem is solved:

[root@k8s-node3 docker]# systemctl restart docker
[root@k8s-node3 docker]# docker tag docker.io/tianyebj/pod-infrastructure:latest 192.168.110.133:5000/pod-infrastructure:latest
[root@k8s-node3 docker]# docker push 192.168.110.133:5000/pod-infrastructure:latest 
The push refers to a repository [192.168.110.133:5000/pod-infrastructure]
ba3d4cbbb261: Pushed 
0a081b45cb84: Pushed 
df9d2808b9a9: Pushed 
latest: digest: sha256:a378b2d7a92231ffb07fdd9dbd2a52c3c439f19c8d675a0d8d9ab74950b15a1b size: 948
[root@k8s-node3 docker]# 

While at it, configure the other two machines the same way to avoid the same error:

[root@k8s-master pod]# echo '{ "insecure-registries":["192.168.110.133:5000"] }' > /etc/docker/daemon.json
[root@k8s-node2 ~]# echo '{ "insecure-registries":["192.168.110.133:5000"] }' > /etc/docker/daemon.json

Any other Node machine that wants to use this private registry needs the same change to docker's configuration file.

[root@k8s-node2 ~]# vim /etc/sysconfig/docker
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false 
--registry-mirror=https://registry.docker-cn.com --insecure-registry=192.168.110.133:5000'

Then restart docker:

[root@k8s-node2 ~]# systemctl restart docker

To make the Node pull images from the private registry, also modify /etc/kubernetes/kubelet:

[root@k8s-node2 ~]# vim /etc/kubernetes/kubelet 

Then restart kubelet:

[root@k8s-node2 ~]# systemctl restart kubelet.service 
[root@k8s-node2 ~]# 

Upload Nginx to the private registry as well (note that the repository name is mistyped as ngnix here, which causes trouble later):

[root@k8s-node3 docker]# docker tag docker.io/nginx:1.13 192.168.110.133:5000/ngnix:1.13
[root@k8s-node3 docker]# docker push 192.168.110.133:5000/ngnix:1.13 
The push refers to a repository [192.168.110.133:5000/ngnix]
7ab428981537: Pushed 
82b81d779f83: Pushed 
d626a8ad97a1: Pushed 
1.13: digest: sha256:e4f0474a75c510f40b37b6b7dc2516241ffa8bde5a442bde3d372c9519c84d90 size: 948
[root@k8s-node3 docker]# 

Now switch the kubelet on k8s-master and k8s-node3 to pull from the private registry as well:

[root@k8s-node3 docker]# vim /etc/kubernetes/kubelet
[root@k8s-master pod]# vim /etc/kubernetes/kubelet

Then restart kubelet:

[root@k8s-node3 docker]# systemctl restart kubelet.service 
[root@k8s-node3 docker]# 
[root@k8s-master pod]# systemctl restart kubelet.service 
[root@k8s-master pod]# 

To summarize: there are many steps, but the key point is that on all three nodes both the docker configuration and the kubelet configuration must be modified, and then the services restarted.

[root@k8s-master pod]# vim /etc/sysconfig/docker
[root@k8s-master pod]# systemctl restart docker

The docker change:

# Trust the private registry and use a registry mirror for faster pulls
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false
--registry-mirror=https://registry.docker-cn.com --insecure-registry=192.168.110.133:5000'

Modify the kubelet configuration:

[root@k8s-master pod]# vim /etc/kubernetes/kubelet 
[root@k8s-master pod]# systemctl restart kubelet.service 

The modified line:

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.110.133:5000/pod-infrastructure:latest"

Finally, modify the Pod definition so that it pulls its image from the private registry.

Can you spot the difference between the two files below? (The second still uses the mistyped ngnix repository.) Quite a detour. Off to bed.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: 192.168.110.133:5000/nginx:1.13
      ports:
        - containerPort: 80

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: 192.168.110.133:5000/ngnix:1.13
      ports:
        - containerPort: 80

I tested this several times and, close to midnight, it finally worked, so here is the result. I restarted docker and kubelet on all three machines, because several attempts in between failed to pull from the private registry.

[root@k8s-master pod]# vim nginx_pod.yaml 
[root@k8s-master pod]# kubectl delete pod nginx
pod "nginx" deleted
[root@k8s-master pod]# kubectl describe pod nginx
Error from server (NotFound): pods "nginx" not found
[root@k8s-master pod]# kubectl create -f nginx_pod.yaml 
pod "nginx" created
[root@k8s-master pod]# kubectl describe pod nginx
Name:        nginx
Namespace:    default
Node:        k8s-master/192.168.110.133
Start Time:    Fri, 05 Jun 2020 23:55:23 +0800
Labels:        app=web
Status:        Pending
IP:        
Controllers:    <none>
Containers:
  nginx:
    Container ID:        
    Image:            192.168.110.133:5000/ngnix:1.13
    Image ID:            
    Port:            80/TCP
    State:            Waiting
      Reason:            ContainerCreating
    Ready:            False
    Restart Count:        0
    Volume Mounts:        <none>
    Environment Variables:    <none>
Conditions:
  Type        Status
  Initialized     True 
  Ready     False 
  PodScheduled     True 
No volumes.
QoS Class:    BestEffort
Tolerations:    <none>
Events:
  FirstSeen    LastSeen    Count    From            SubObjectPath        Type        Reason            Message
  ---------    --------    -----    ----            -------------        --------    ------            -------
  3s        3s        1    {default-scheduler }                Normal        Scheduled        Successfully assigned nginx to k8s-master
  3s        3s        1    {kubelet k8s-master}                Warning        MissingClusterDNS    kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
  2s        2s        1    {kubelet k8s-master}    spec.containers{nginx}    Normal        Pulling            pulling image "192.168.110.133:5000/ngnix:1.13"
[root@k8s-master pod]# kubectl get pod nginx
NAME      READY     STATUS    RESTARTS   AGE
nginx     1/1       Running   0          20s
[root@k8s-master pod]# kubectl get pod nginx -o wide
NAME      READY     STATUS    RESTARTS   AGE       IP            NODE
nginx     1/1       Running   0          24s       172.16.77.3   k8s-master
[root@k8s-master pod]# 

Because Nginx was misspelled as ngnix above, I deleted the mistyped image from the private registry and deleted the Nginx Pod created by K8s, then pushed the correctly spelled image to the private registry and pulled it from there again.

First delete the Nginx Pod created by k8s:

[root@k8s-master pod]# kubectl get pod
NAME      READY     STATUS    RESTARTS   AGE
nginx     1/1       Running   0          2d
[root@k8s-master pod]# kubectl delete pod nginx 
pod "nginx" deleted
[root@k8s-master pod]# kubectl get pod
No resources found.
[root@k8s-master pod]# 


Note: when deleting docker images, keep in mind that docker has two removal commands. The docker help lists rm (Remove one or more containers) and rmi (Remove one or more images). An image is easy to understand: like a VM image, it is a template. A container is the image's runtime state, and docker keeps that state for every image it has run. docker ps lists the running containers; docker ps -a also lists the exited ones. If you exit a container and forget to save the data inside it, you can find it with docker ps -a and save it as an image with docker commit, then run that image.

Because an image is referenced by the containers that ran it, the image cannot be deleted while those references exist. To delete an image that has been run, you must first delete its containers.

# If the container is still running (check with docker ps), stop it first.
docker ps -a
docker stop CONTAINER_ID
docker rm CONTAINER_ID
docker rmi -f IMAGE_ID
docker images

Since the containers here were started by the k8s Pod rather than a docker deployment of Nginx, I simply removed them.

Now push the correctly named docker image to the private registry:

[root@k8s-node2 ~]# docker images
REPOSITORY                                TAG                 IMAGE ID            CREATED             SIZE
docker.io/busybox                         latest              1c35c4412082        5 days ago          1.22 MB
docker.io/nginx                           1.13                ae513a47849c        2 years ago         109 MB
192.168.110.133:5000/pod-infrastructure   latest              34d3450d733b        3 years ago         205 MB
docker.io/tianyebj/pod-infrastructure     latest              34d3450d733b        3 years ago         205 MB
[root@k8s-node2 ~]# docker tag docker.io/nginx:1.13 192.168.110.133:5000/nginx:1.13
[root@k8s-node2 ~]# docker push 192.168.110.133:5000/nginx:1.13 
The push refers to a repository [192.168.110.133:5000/nginx]
7ab428981537: Pushed 
82b81d779f83: Pushed 
d626a8ad97a1: Pushed 
1.13: digest: sha256:e4f0474a75c510f40b37b6b7dc2516241ffa8bde5a442bde3d372c9519c84d90 size: 948
[root@k8s-node2 ~]# 

Remember to update your nginx_pod.yaml; the next time Nginx is created it pulls straight from the private registry, which is very fast.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: 192.168.110.133:5000/nginx:1.13
      ports:
        - containerPort: 80

This time the Pod is created very quickly:

[root@k8s-master pod]# kubectl create -f nginx_pod.yaml 
pod "nginx" created
[root@k8s-master pod]# kubectl get pod
NAME      READY     STATUS    RESTARTS   AGE
nginx     1/1       Running   0          8s
[root@k8s-master pod]# 

 

4. Use docker ps to see how many containers are running.

[root@k8s-node3 ~]# docker ps
CONTAINER ID        IMAGE                                            COMMAND                  CREATED             STATUS              PORTS               NAMES
3df24ca19115        192.168.110.133:5000/nginx:1.13                  "nginx -g 'daemon ..."   18 minutes ago      Up 18 minutes                           k8s_nginx.536c04d1_nginx_default_c8a6f3d8-a959-11ea-8dbd-000c2919d52d_875fe334
652f57e1b9a9        192.168.110.133:5000/pod-infrastructure:latest   "/pod"                   18 minutes ago      Up 18 minutes                           k8s_POD.cbd802f1_nginx_default_c8a6f3d8-a959-11ea-8dbd-000c2919d52d_de21241c
[root@k8s-node3 ~]# 

Check the containers' IP addresses. Note that the Nginx container and the pod container share an IP address.

[root@k8s-node3 ~]# docker ps
CONTAINER ID        IMAGE                                            COMMAND                  CREATED             STATUS              PORTS               NAMES
3df24ca19115        192.168.110.133:5000/nginx:1.13                  "nginx -g 'daemon ..."   24 minutes ago      Up 24 minutes                           k8s_nginx.536c04d1_nginx_default_c8a6f3d8-a959-11ea-8dbd-000c2919d52d_875fe334
652f57e1b9a9        192.168.110.133:5000/pod-infrastructure:latest   "/pod"                   24 minutes ago      Up 24 minutes                           k8s_POD.cbd802f1_nginx_default_c8a6f3d8-a959-11ea-8dbd-000c2919d52d_de21241c
[root@k8s-node3 ~]# docker inspect 652f57e1b9a9 | grep -i ipaddress
            "SecondaryIPAddresses": null,
            "IPAddress": "172.16.13.2",
                    "IPAddress": "172.16.13.2",
[root@k8s-node3 ~]# docker inspect 3df24ca19115 | grep -i ipaddress
            "SecondaryIPAddresses": null,
            "IPAddress": "",
[root@k8s-node3 ~]# 

Only the pod container has an IP address; the nginx container has none. nginx and the pod container share an IP.

Inspect Nginx's network type: it is the container type, and it shares the IP of container 652f57e1b9a9.

[root@k8s-node3 ~]# docker inspect 3df24ca19115 | grep -i network
            "NetworkMode": "container:652f57e1b9a9d71453d39c40f48c90738b53a66a42888a72f4885b0a69c4a233",
        "NetworkSettings": {
            "Networks": {}
[root@k8s-node3 ~]# 

There is a pitfall here. SELinux ("Security-Enhanced Linux") is a mandatory access control security module for Linux, developed by the NSA (the US National Security Agency) and SCC (Secure Computing Corporation).

The problem: against the nginx pod created with k8s, curl -I 172.16.101.2 hangs forever; something is being blocked, and SELinux needs to be turned off.

# Method 1: check the SELinux status
[root@k8s-master ~]# /usr/sbin/sestatus -v
SELinux status:                 disabled
# Method 2: check the SELinux status
[root@k8s-master ~]# getenforce
Disabled
[root@k8s-master ~]# 

setenforce 0 changes the state temporarily without a reboot, but that did not seem to work for me. The permanent change requires a reboot: edit the config file with vim /etc/selinux/config and change SELINUX=enforcing to SELINUX=disabled.
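The permanent change is a single line in that file:

```shell
# /etc/selinux/config -- takes effect after a reboot
# SELINUX=enforcing
SELINUX=disabled
```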

Reboot all three machines. Since the k8s components were set to start at boot earlier, check their status afterwards: verify the created Pod is running and the k8s components are healthy, then test with curl.

Just now, `curl -I 172.16.101.2` from the Master node could reach Nginx. Running `docker ps` shows two containers: the pod (infrastructure) container, which owns the IP address 172.16.101.2, and the container defined in the Pod config file, which has no IP of its own. The latter runs in the container shared-network mode: it shares the pod container's network, so both effectively have the IP 172.16.101.2. The only difference is the ports: once one container binds a port, the other can no longer bind the same one. Ports must not conflict; first come, first served.
 

5. What exactly is a pod resource in k8s?

Answer: creating a pod resource in k8s drives the kubelet, and the kubelet has docker start at least two containers: a business container (Nginx) and a pod infrastructure container. Kubernetes's core features include self-healing, service discovery and load balancing, automated rollouts and rollbacks, and elastic scaling.

  A plain business container like Nginx cannot possibly provide those advanced features on its own; to make an ordinary business container support them you would have to customize it. To keep the cost of building container images low, k8s ships a ready-made container, the pod infrastructure container, which supports k8s's advanced features. How does this pod container get tied to an ordinary container? Through the container network type.

  The advanced k8s features are provided by the pod container, while the nginx business container only needs to serve port 80; the container network type binds them together. Requests to port 80 go to the nginx container as usual, and the other advanced k8s features are handled by the pod container. They complement each other, and together they form one resource, called a pod. So when k8s talks about a pod resource, it is a resource that starts two containers: the nginx business container and the underlying pod infrastructure container.

 

6. Running multiple containers in one Pod.

  A K8s Pod config file is in yaml format. In yaml, a dash before a key marks a list item, meaning the entry can appear multiple times. In other words, when you create a pod resource it drives the kubelet, the kubelet has docker start at least two containers, and it can start even more, as long as the containers' ports do not conflict.

 1 apiVersion: v1
 2 kind: Pod
 3 metadata:
 4   name: test1
 5   labels:
 6     app: web
 7 spec:
 8   containers:
 9     # In vim, 4yy then p duplicates these four lines.
10     - name: nginx
11       image: 192.168.110.133:5000/nginx:1.13
12       ports:
13         - containerPort: 80
14     # A Pod can start two or more containers.
15     - name: busybox
16       # Remember to include a tag; this image comes from Docker Hub.
17       image: docker.io/busybox:latest
18       # busybox's default command exits right away and the container would die, so give it a long-running command to keep it alive.
19       command: ["sleep","3600"]
20       ports:
21         - containerPort: 80


Then create the resource with kubectl, as shown below:

1 [root@k8s-master pod]# vim nginx_pod.yaml 
2 [root@k8s-master pod]# kubectl create -f nginx_pod.yaml 
3 pod "test1" created
4 [root@k8s-master pod]# kubectl get pod -o wide
5 NAME      READY     STATUS              RESTARTS   AGE       IP             NODE
6 nginx     1/1       Running             1          4h        172.16.101.2   k8s-node3
7 test      1/1       Running             0          3m        172.16.52.2    k8s-node2
8 test1     0/2       ContainerCreating   0          4s        <none>         k8s-master

Notice that the two containers in the test1 Pod did not both start. Use the kubectl describe pod test1 command to see the details.

 1 [root@k8s-master pod]# kubectl get pod -o wide
 2 NAME      READY     STATUS             RESTARTS   AGE       IP             NODE
 3 nginx     1/1       Running            1          4h        172.16.101.2   k8s-node3
 4 test      1/1       Running            0          6m        172.16.52.2    k8s-node2
 5 test1     1/2       ImagePullBackOff   0          3m        172.16.29.2    k8s-master
 6 [root@k8s-master pod]# kubectl describe pod test1
 7 Name:        test1
 8 Namespace:    default
 9 Node:        k8s-master/192.168.110.133
10 Start Time:    Mon, 08 Jun 2020 19:32:18 +0800
11 Labels:        app=web
12 Status:        Pending
13 IP:        172.16.29.2
14 Controllers:    <none>
15 Containers:
16   nginx:
17     Container ID:        
18     Image:            192.168.110.133:5000/nginx:1.13
19     Image ID:            
20     Port:            80/TCP
21     State:            Waiting
22       Reason:            ImagePullBackOff
23     Ready:            False
24     Restart Count:        0
25     Volume Mounts:        <none>
26     Environment Variables:    <none>
27   busybox:
28     Container ID:    docker://adb4a9f14d1b0d6ee390923eeabd9269bfa1683f0ef02f094c5a24d4b204db64
29     Image:        docker.io/busybox:latest
30     Image ID:        docker-pullable://docker.io/busybox@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209
31     Port:        80/TCP
32     Command:
33       sleep
34       3600
35     State:            Running
36       Started:            Mon, 08 Jun 2020 19:32:45 +0800
37     Ready:            True
38     Restart Count:        0
39     Volume Mounts:        <none>
40     Environment Variables:    <none>
41 Conditions:
42   Type        Status
43   Initialized     True 
44   Ready     False 
45   PodScheduled     True 
46 No volumes.
47 QoS Class:    BestEffort
48 Tolerations:    <none>
49 Events:
50   FirstSeen    LastSeen    Count    From            SubObjectPath            Type        Reason            Message
51   ---------    --------    -----    ----            -------------            --------    ------            -------
52   4m        4m        1    {default-scheduler }                    Normal        Scheduled        Successfully assigned test1 to k8s-master
53   4m        4m        1    {kubelet k8s-master}    spec.containers{busybox}    Normal        Pulling            pulling image "docker.io/busybox:latest"
54   4m        3m        2    {kubelet k8s-master}                    Warning        MissingClusterDNS    kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
55   3m        3m        1    {kubelet k8s-master}    spec.containers{busybox}    Normal        Pulled            Successfully pulled image "docker.io/busybox:latest"
56   3m        3m        1    {kubelet k8s-master}    spec.containers{busybox}    Normal        Created            Created container with docker id adb4a9f14d1b; Security:[seccomp=unconfined]
57   3m        3m        1    {kubelet k8s-master}    spec.containers{busybox}    Normal        Started            Started container with docker id adb4a9f14d1b
58   4m        1m        5    {kubelet k8s-master}    spec.containers{nginx}        Normal        Pulling            pulling image "192.168.110.133:5000/nginx:1.13"
59   4m        1m        5    {kubelet k8s-master}    spec.containers{nginx}        Warning        Failed            Failed to pull image "192.168.110.133:5000/nginx:1.13": Error while pulling image: Get http://192.168.110.133:5000/v1/repositories/nginx/images: dial tcp 192.168.110.133:5000: connect: connection refused
60   3m        1m        5    {kubelet k8s-master}                    Warning        FailedSync        Error syncing pod, skipping: failed to "StartContainer" for "nginx" with ErrImagePull: "Error while pulling image: Get http://192.168.110.133:5000/v1/repositories/nginx/images: dial tcp 192.168.110.133:5000: connect: connection refused"
61 
62   3m    11s    15    {kubelet k8s-master}    spec.containers{nginx}    Normal    BackOff        Back-off pulling image "192.168.110.133:5000/nginx:1.13"
63   3m    0s    16    {kubelet k8s-master}                Warning    FailedSync    Error syncing pod, skipping: failed to "StartContainer" for "nginx" with ImagePullBackOff: "Back-off pulling image \"192.168.110.133:5000/nginx:1.13\""
64 
65 [root@k8s-master pod]# 

In fact, all three of my nodes already have the busybox image (check with docker images), yet it still failed. The fix is to configure the image pull policy: in nginx_pod.yaml, set imagePullPolicy, which defaults to Always; set it to IfNotPresent so the local image is used when it is already present.

 1 apiVersion: v1
 2 kind: Pod
 3 metadata:
 4   name: test2
 5   labels:
 6     app: web
 7 spec:
 8   containers:
 9     - name: nginx
10       image: 192.168.110.133:5000/nginx:1.13
11       ports:
12         - containerPort: 80
13     - name: busybox
14       image: docker.io/busybox:latest
15       imagePullPolicy: IfNotPresent
16       command: ["sleep","3600"]
17       ports:
18         - containerPort: 80

Then create this Pod; you can see it comes up immediately.

 1 [root@k8s-master pod]# vim nginx_pod.yaml 
 2 [root@k8s-master pod]# kubectl create -f nginx_pod.yaml 
 3 pod "test2" created
 4 
 5 [root@k8s-master pod]# kubectl get pod -o wide
 6 NAME      READY     STATUS             RESTARTS   AGE       IP             NODE
 7 nginx     1/1       Running            1          4h        172.16.101.2   k8s-node3
 8 test      1/1       Running            0          29m       172.16.52.2    k8s-node2
 9 test1     1/2       ImagePullBackOff   0          26m       172.16.29.2    k8s-master
10 test2     2/2       Running            0          12s       172.16.101.3   k8s-node3
11 [root@k8s-master pod]# 

This shows that a Pod resource starts at least two containers, and can also run multiple business containers.

 

 

7. Check kubectl's built-in help, as shown below:

1 [root@k8s-master ~]# kubectl explain pod

The required fields of a Pod are apiVersion, kind, metadata, and spec; status is added by the system after creation, so you do not need to set it.
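Put together, those four required fields form the smallest possible Pod manifest. A minimal sketch (the name and image are just placeholders):

```yaml
apiVersion: v1          # required: API version
kind: Pod               # required: resource type
metadata:               # required: at least a name
  name: minimal-pod
spec:                   # required: at least one container
  containers:
    - name: app
      image: nginx:1.13
# status: is filled in by the system after creation; never set it yourself.
```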

What we are after is pod.spec.containers, which you can drill into with the same command:

1 [root@k8s-master ~]# kubectl explain pod.spec.containers

The details of pod.spec.containers are shown below.

You can see command is declared as command <[]string>: square brackets containing strings, e.g. ["sleep","3600"]. Note the description says it is NOT executed within a shell.

  1 [root@k8s-master ~]# kubectl explain pod.spec.containers
  2 RESOURCE: containers <[]Object>
  3 
  4 DESCRIPTION:
  5      List of containers belonging to the pod. Containers cannot currently be
  6      added or removed. There must be at least one container in a Pod. Cannot be
  7      updated. More info: http://kubernetes.io/docs/user-guide/containers
  8 
  9     A single application container that you want to run within a pod.
 10 
 11 FIELDS:
 12    command    <[]string>
 13      Entrypoint array. Not executed within a shell. The docker image's
 14      ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME)
 15      are expanded using the container's environment. If a variable cannot be
 16      resolved, the reference in the input string will be unchanged. The
 17      $(VAR_NAME) syntax can be escaped with a double $$, ie: $$(VAR_NAME).
 18      Escaped references will never be expanded, regardless of whether the
 19      variable exists or not. Cannot be updated. More info:
 20      http://kubernetes.io/docs/user-guide/containers#containers-and-commands
 21 
 22    env    <[]Object>
 23      List of environment variables to set in the container. Cannot be updated.
 24 
 25    lifecycle    <Object>
 26      Actions that the management system should take in response to container
 27      lifecycle events. Cannot be updated.
 28 
 29    volumeMounts    <[]Object>
 30      Pod volumes to mount into the container's filesystem. Cannot be updated.
 31 
 32    stdin    <boolean>
 33      Whether this container should allocate a buffer for stdin in the container
 34      runtime. If this is not set, reads from stdin in the container will always
 35      result in EOF. Default is false.
 36 
 37    livenessProbe    <Object>
 38      Periodic probe of container liveness. Container will be restarted if the
 39      probe fails. Cannot be updated. More info:
 40      http://kubernetes.io/docs/user-guide/pod-states#container-probes
 41 
 42    name    <string> -required-
 43      Name of the container specified as a DNS_LABEL. Each container in a pod
 44      must have a unique name (DNS_LABEL). Cannot be updated.
 45 
 46    readinessProbe    <Object>
 47      Periodic probe of container service readiness. Container will be removed
 48      from service endpoints if the probe fails. Cannot be updated. More info:
 49      http://kubernetes.io/docs/user-guide/pod-states#container-probes
 50 
 51    resources    <Object>
 52      Compute Resources required by this container. Cannot be updated. More info:
 53      http://kubernetes.io/docs/user-guide/persistent-volumes#resources
 54 
 55    workingDir    <string>
 56      Container's working directory. If not specified, the container runtime's
 57      default will be used, which might be configured in the container image.
 58      Cannot be updated.
 59 
 60    args    <[]string>
 61      Arguments to the entrypoint. The docker image's CMD is used if this is not
 62      provided. Variable references $(VAR_NAME) are expanded using the container's
 63      environment. If a variable cannot be resolved, the reference in the input
 64      string will be unchanged. The $(VAR_NAME) syntax can be escaped with a
 65      double $$, ie: $$(VAR_NAME). Escaped references will never be expanded,
 66      regardless of whether the variable exists or not. Cannot be updated. More
 67      info:
 68      http://kubernetes.io/docs/user-guide/containers#containers-and-commands
 69 
 70    imagePullPolicy    <string>
 71      Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always
 72      if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated.
 73      More info: http://kubernetes.io/docs/user-guide/images#updating-images
 74 
 75    ports    <[]Object>
 76      List of ports to expose from the container. Exposing a port here gives the
 77      system additional information about the network connections a container
 78      uses, but is primarily informational. Not specifying a port here DOES NOT
 79      prevent that port from being exposed. Any port which is listening on the
 80      default "0.0.0.0" address inside a container will be accessible from the
 81      network. Cannot be updated.
 82 
 83    tty    <boolean>
 84      Whether this container should allocate a TTY for itself, also requires
 85      'stdin' to be true. Default is false.
 86 
 87    image    <string>
 88      Docker image name. More info: http://kubernetes.io/docs/user-guide/images
 89 
 90    securityContext    <Object>
 91      Security options the pod should run with. More info:
 92      http://releases.k8s.io/HEAD/docs/design/security_context.md
 93 
 94    stdinOnce    <boolean>
 95      Whether the container runtime should close the stdin channel after it has
 96      been opened by a single attach. When stdin is true the stdin stream will
 97      remain open across multiple attach sessions. If stdinOnce is set to true,
 98      stdin is opened on container start, is empty until the first client attaches
 99      to stdin, and then remains open and accepts data until the client
100      disconnects, at which time stdin is closed and remains closed until the
101      container is restarted. If this flag is false, a container processes that
102      reads from stdin will never receive an EOF. Default is false
103 
104    terminationMessagePath    <string>
105      Optional: Path at which the file to which the container's termination
106      message will be written is mounted into the container's filesystem. Message
107      written is intended to be brief final status, such as an assertion failure
108      message. Defaults to /dev/termination-log. Cannot be updated.
109 
110 
111 [root@k8s-master ~]# 
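Because command is not run inside a shell, shell syntax such as `;`, pipes, or loops only works if you invoke a shell explicitly and pass the whole command line as a single string argument. A hedged sketch (the container name is illustrative):

```yaml
spec:
  containers:
    - name: demo
      image: docker.io/busybox:latest
      # Each list item is one argv element; no shell is involved:
      #   command: ["sleep","3600"]
      # To use shell features, run a shell explicitly and pass the
      # entire command line as one string:
      command: ["sh","-c","echo started; sleep 3600"]
```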

 

8. Common Pod operations in K8s.

 1 -- Create a Pod resource
 2 [root@k8s-master pod]# kubectl create -f nginx_pod.yaml 
 3 
 4 -- Delete a Pod; to force-delete, add --force --grace-period=0
 5 [root@k8s-master pod]# kubectl delete pod test1
 6 pod "test1" deleted
 7 [root@k8s-master pod]# kubectl get pod -o wide
 8 NAME      READY     STATUS        RESTARTS   AGE       IP             NODE
 9 nginx     1/1       Running       1          4h        172.16.101.2   k8s-node3
10 test      1/1       Running       0          35m       172.16.52.2    k8s-node2
11 test1     1/2       Terminating   0          31m       172.16.29.2    k8s-master
12 test2     2/2       Running       0          5m        172.16.101.3   k8s-node3
13 [root@k8s-master pod]# kubectl get pod -o wide
14 NAME      READY     STATUS    RESTARTS   AGE       IP             NODE
15 nginx     1/1       Running   1          4h        172.16.101.2   k8s-node3
16 test      1/1       Running   0          36m       172.16.52.2    k8s-node2
17 test2     2/2       Running   0          6m        172.16.101.3   k8s-node3
18 [root@k8s-master pod]# kubectl delete pod test --force --grace-period=0
19 warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
20 pod "test" deleted
21 [root@k8s-master pod]# kubectl get pod -o wide
22 NAME      READY     STATUS    RESTARTS   AGE       IP             NODE
23 nginx     1/1       Running   1          4h        172.16.101.2   k8s-node3
24 test2     2/2       Running   0          7m        172.16.101.3   k8s-node3
25 [root@k8s-master pod]# 
26 
27 -- Show a Pod's detailed description
28 [root@k8s-master pod]# kubectl describe pod nginx
29 
30 -- Update Pods from the config file with apply; it can only add resources.
31 [root@k8s-master pod]# kubectl apply -f nginx_pod.yaml 
32 pod "test4" created
33 [root@k8s-master pod]# kubectl get pod -o wide
34 NAME      READY     STATUS             RESTARTS   AGE       IP             NODE
35 nginx     1/1       Running            1          4h        172.16.101.2   k8s-node3
36 test1     0/1       ImagePullBackOff   0          1m        172.16.29.2    k8s-master
37 test2     2/2       Running            0          23m       172.16.101.3   k8s-node3
38 test4     1/1       Running            0          3s        172.16.52.2    k8s-node2
39 [root@k8s-master pod]# 

 

Next up: learning about RC (Replication Controller).
