Notes [must read]


  • Because the total ran past 110,000 characters, editing it as a single post was agony: a single keystroke took several seconds to appear and the page loaded extremely slowly. On top of that there were simply too many headings, so I split it into 3 posts. On a first read, I suggest opening all 3 and working through them in order, running the experiments as you go; it helps comprehension. When reviewing later, just scan the headings and jump into the matching post.
  • This is the first post.

Second post: title and link

Third post: title and link

Creating and deleting pods

Create a pod-1 directory and namespace

  • The directory and the namespace are created just to keep the demo tidy: all the test files go in the pod-1 directory, and every new pod goes in the pod-1 namespace.
[root@master wal]# mkdir pod-1
[root@master wal]# cd pod-1
[root@master pod-1]# 
[root@master pod-1]# kubectl create ns pod-1
namespace/pod-1 created
[root@master pod-1]# # The command below switches to the pod-1 namespace; the kubens command must be installed separately.
[root@master pod-1]# kubens pod-1
Context "context" modified.
Active namespace is "pod-1".
[root@master pod-1]# 
[root@master pod-1]# kubectl config get-contexts 
CURRENT   NAME           CLUSTER   AUTHINFO   NAMESPACE
*         context        master    ccx        pod-1
          context1-new   master1   ccx1       default
[root@master pod-1]# 
  • Blog post on installing the kubens command:

Installing metric server on k8s and understanding namespaces, including how to handle k8s pods in ImagePullBackOff

Image preparation [run on the nodes]

  • Run the following command on the master to list the nodes:
[root@master pod-1]# kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    <none>   3d6h   v1.21.0
node1    Ready    <none>   3d6h   v1.21.0
node2    Ready    <none>   3d6h   v1.21.0
[root@master pod-1]# 
  • Pull an nginx image on the nodes to use for testing.
    Command: docker pull nginx [if any of this workflow is unfamiliar, brush up via the Docker category on my blog]
    Once the pull finishes, the image is there, as below [note the hostnames]:
[root@node1 ~]# docker images | grep nginx
nginx                                                             latest     d1a364dc548d   7 weeks ago     133MB
[root@node1 ~]# 

[root@node2 ~]# docker images | grep nginx
nginx                                                             latest     d1a364dc548d   7 weeks ago     133MB
[root@node2 ~]#

Creating a pod [on the VMs]

  • There are 2 ways to create a pod:
    • 1. On the command line
    • 2. With a YAML config file [this one is recommended]

Method 1: the command line [not recommended]

Default creation


  • Command: kubectl run <pod-name> --image=<image-name>
    If no pull-policy parameter is given, the default is Always, which downloads a fresh image every time. If the node has no internet access, the resulting pod's status is ImagePullBackOff [as below: my nodes have no internet, so the pod never reaches Running; in that case create it with the parameter shown further down].
  • Example: I create a pod named pod1 from the nginx image;
    its status is ImagePullBackOff.
[root@master pod-1]# kubectl run pod1 --image=nginx
pod/pod1 created
[root@master pod-1]# 
[root@master pod-1]# kubectl get pods 
NAME   READY   STATUS             RESTARTS   AGE
pod1   0/1     ImagePullBackOff   0          7s
[root@master pod-1]# kubectl describe pod pod1 # view the detailed error message here
  • In short, the default creation method only suits environments with internet access.
Creating with the imagePullPolicy parameter
  • The imagePullPolicy parameter sets the image pull strategy; there are 3 modes [the default is the first]:
    • Always: pull the image fresh on every start
    • IfNotPresent: use the local image if it exists, otherwise pull it
    • Never: only use the local image; fail if it is missing
    It's generally best to add this parameter and use a local image, since the default pulls from sites abroad, which is inevitably slow.
    Note: the difference between IfNotPresent and Never is that IfNotPresent falls back to pulling when the image isn't local, while Never simply fails.

  • Example: using the second mode, the pod's status becomes Running [this VM has no internet access; the local image is used].
    Command: kubectl run pod1 --image=nginx --image-pull-policy=IfNotPresent

[root@master ~]# docker images | grep nginx
# This is the master node and it has no nginx image, but my worker nodes do!!!!
[root@master ~]#
[root@master ~]# kubectl get pods
No resources found in pod-1 namespace.
[root@master ~]# kubectl run pod1 --image=nginx --image-pull-policy=IfNotPresent
pod/pod1 created
[root@master ~]# 
[root@master ~]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          17s
[root@master ~]# 
[root@master ~]# ping -w 2 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.

--- 8.8.8.8 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 1999ms

[root@master ~]# 
Notes
  • Creating on the command line this way also accepts extra parameters.
    Example: setting variables at creation time [the variables work the same as when creating with docker, so no more detail here]:
[root@master ~]# kubectl run pod1 --image=nginx --image-pull-policy=IfNotPresent --env "aa=bb" --env "cc=dd" --labels="aa=bb,cc=dd"
pod/pod1 created
[root@master ~]# 
[root@master ~]# kubectl exec -it pod1 -- bash
root@pod1:/# echo $aa
bb
root@pod1:/# echo $bb

root@pod1:/# echo $cc
dd
root@pod1:/# 
root@pod1:/# exit
exit
[root@master ~]# 
[root@master ~]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          43s
[root@master ~]# kubectl get pods --show-labels 
NAME   READY   STATUS    RESTARTS   AGE   LABELS
pod1   1/1     Running   0          55s   aa=bb,cc=dd
[root@master ~]# 
[root@master ~]# 

Method 2: creating from a YAML file [recommended]

The biggest additional benefit of this method: a single config file can create multiple pods [explained below].

Generating the YAML file
  • Syntax:
kubectl run <pod-name> --image=<image> --image-pull-policy=<policy> --dry-run=client/server -o yaml > <name>.yaml

#--image-pull-policy= takes the same 3 strategies described for command-line creation above

# --dry-run= simulates the run without creating anything
#--dry-run=client: simulated locally, concise output [used most of the time]
#--dry-run=server: simulated on the API server, much more detailed output

# -o yaml : print the object as YAML

# > <name>.yaml : without the redirection, the YAML just goes to the screen
  • Example: generate a manifest for a pod named pod1 from the nginx image.
    Command: kubectl run pod1 --image=nginx --image-pull-policy=IfNotPresent --dry-run=client -o yaml > pod1.yaml
    Note: the YAML file is written to the current directory, and the command itself does not create a pod.
[root@master ~]# kubectl delete pod pod1
pod "pod1" deleted
[root@master ~]# 
[root@master ~]# kubectl run pod1 --image=nginx --image-pull-policy=IfNotPresent --dry-run=client -o yaml > pod1.yaml
[root@master ~]# 
[root@master ~]# kubectl get pods
No resources found in pod-1 namespace.
[root@master ~]# 
The generated config file, explained
  • The pod1.yaml obtained above contains the following
    [this is the bare minimum, with nothing modified]:
[root@master ~]# vim pod1.yaml 
apiVersion: v1 
kind: Pod 
metadata: 
  creationTimestamp: null 
  labels: 
    run: pod1 
  name: pod1 
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
YAML format notes
  • Keys in the YAML file follow camelCase:

    • 1. Left of the colon:
      the first word is lowercase; each subsequent word starts with a capital
    • 2. Right of the colon:
      keyword values capitalize each word [e.g. IfNotPresent, ClusterFirst]
  • YAML also enforces strict indentation: each level is indented 2 spaces (2 exactly in these generated files; tab indentation is not allowed), and the colon after every key must be followed by a space. Leave that space out and creation fails.

  • For example:

    • level 1 has no leading spaces
    • level 2 is indented 2 spaces
    • level 3 is indented 4 spaces
  • This matters most when adding new parameters later: the number of leading spaces for whichever level you add at must be exactly right!!!! [how to look up level-1/2/3 parameters is shown below]
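As a quick reference for the rules above, here is a minimal fragment with each level annotated (the values are just the ones from pod1.yaml):

```yaml
apiVersion: v1              # level 1: no indentation
kind: Pod
metadata:                   # level 1
  name: pod1                # level 2: exactly 2 spaces, plus a space after the colon
spec:                       # level 1
  containers:               # level 2
  - image: nginx            # level 3: a list item under containers
    imagePullPolicy: IfNotPresent
    name: pod1              # keys in the item line up under "image"
```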

Getting level-1 fields
  • The level-1 fields you usually see listed are only a subset, not the complete set.

  • To see every level-1 field:
    Command: kubectl explain pods

[root@master ~]# kubectl explain pods
KIND:     Pod
VERSION:  v1

DESCRIPTION:
     Pod is a collection of containers that can run on a host. This resource is
     created by clients and scheduled onto hosts.

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata     <Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   spec <Object>
     Specification of the desired behavior of the pod. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

   status       <Object>
     Most recently observed status of the pod. This data may not be up to date.
     Populated by the system. Read-only. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

[root@master ~]# 
Getting level-2 fields
  • Again, the level-2 fields usually shown are only a subset, not the complete set.
    Note: level-2 fields are children of a level-1 field, so to list them all you query that level-1 field.
  • Example: get all the level-2 fields under metadata.
    Command: kubectl explain pods.<level-1-field>
[root@master ~]# kubectl explain pods.metadata
KIND:     Pod
VERSION:  v1

RESOURCE: metadata <Object>

DESCRIPTION:
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

     ObjectMeta is metadata that all persisted resources must have, which
     includes all objects users must create.

FIELDS:
   annotations  <map[string]string>
     Annotations is an unstructured key value map stored with a resource that
     may be set by external tools to store and retrieve arbitrary metadata. They
     are not queryable and should be preserved when modifying objects. More
     info: http://kubernetes.io/docs/user-guide/annotations

   clusterName  <string>
     The name of the cluster which the object belongs to. This is used to
     distinguish resources with same name and namespace in different clusters.
     This field is not set anywhere right now and apiserver is going to ignore
     it if set in create or update request.

   creationTimestamp    <string>
     CreationTimestamp is a timestamp representing the server time when this
     object was created. It is not guaranteed to be set in happens-before order
     across separate operations. Clients may not set this value. It is
     represented in RFC3339 form and is in UTC.

     Populated by the system. Read-only. Null for lists. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   deletionGracePeriodSeconds   <integer>
     Number of seconds allowed for this object to gracefully terminate before it
     will be removed from the system. Only set when deletionTimestamp is also
     set. May only be shortened. Read-only.

   deletionTimestamp    <string>
     DeletionTimestamp is RFC 3339 date and time at which this resource will be
     deleted. This field is set by the server when a graceful deletion is
     requested by the user, and is not directly settable by a client. The
     resource is expected to be deleted (no longer visible from resource lists,
     and not reachable by name) after the time in this field, once the
     finalizers list is empty. As long as the finalizers list contains items,
     deletion is blocked. Once the deletionTimestamp is set, this value may not
     be unset or be set further into the future, although it may be shortened or
     the resource may be deleted prior to this time. For example, a user may
     request that a pod is deleted in 30 seconds. The Kubelet will react by
     sending a graceful termination signal to the containers in the pod. After
     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)
     to the container and after cleanup, remove the pod from the API. In the
     presence of network partitions, this object may still exist after this
     timestamp, until an administrator or automated process can determine the
     resource is fully terminated. If not set, graceful deletion of the object
     has not been requested.

     Populated by the system when a graceful deletion is requested. Read-only.
     More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   finalizers   <[]string>
     Must be empty before the object is deleted from the registry. Each entry is
     an identifier for the responsible component that will remove the entry from
     the list. If the deletionTimestamp of the object is non-nil, entries in
     this list can only be removed. Finalizers may be processed and removed in
     any order. Order is NOT enforced because it introduces significant risk of
     stuck finalizers. finalizers is a shared field, any actor with permission
     can reorder it. If the finalizer list is processed in order, then this can
     lead to a situation in which the component responsible for the first
     finalizer in the list is waiting for a signal (field value, external
     system, or other) produced by a component responsible for a finalizer later
     in the list, resulting in a deadlock. Without enforced ordering finalizers
     are free to order amongst themselves and are not vulnerable to ordering
     changes in the list.

   generateName <string>
     GenerateName is an optional prefix, used by the server, to generate a
     unique name ONLY IF the Name field has not been provided. If this field is
     used, the name returned to the client will be different than the name
     passed. This value will also be combined with a unique suffix. The provided
     value has the same validation rules as the Name field, and may be truncated
     by the length of the suffix required to make the value unique on the
     server.

     If this field is specified and the generated name exists, the server will
     NOT return a 409 - instead, it will either return 201 Created or 500 with
     Reason ServerTimeout indicating a unique name could not be found in the
     time allotted, and the client should retry (optionally after the time
     indicated in the Retry-After header).

     Applied only if Name is not specified. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency

   generation   <integer>
     A sequence number representing a specific generation of the desired state.
     Populated by the system. Read-only.

   labels       <map[string]string>
     Map of string keys and values that can be used to organize and categorize
     (scope and select) objects. May match selectors of replication controllers
     and services. More info: http://kubernetes.io/docs/user-guide/labels

   managedFields        <[]Object>
     ManagedFields maps workflow-id and version to the set of fields that are
     managed by that workflow. This is mostly for internal housekeeping, and
     users typically shouldn't need to set or understand this field. A workflow
     can be the user's name, a controller's name, or the name of a specific
     apply path like "ci-cd". The set of fields is always in the version that
     the workflow used when modifying the object.

   name <string>
     Name must be unique within a namespace. Is required when creating
     resources, although some resources may allow a client to request the
     generation of an appropriate name automatically. Name is primarily intended
     for creation idempotence and configuration definition. Cannot be updated.
     More info: http://kubernetes.io/docs/user-guide/identifiers#names

   namespace    <string>
     Namespace defines the space within which each name must be unique. An empty
     namespace is equivalent to the "default" namespace, but "default" is the
     canonical representation. Not all objects are required to be scoped to a
     namespace - the value of this field for those objects will be empty.

     Must be a DNS_LABEL. Cannot be updated. More info:
     http://kubernetes.io/docs/user-guide/namespaces

   ownerReferences      <[]Object>
     List of objects depended by this object. If ALL objects in the list have
     been deleted, this object will be garbage collected. If this object is
     managed by a controller, then an entry in this list will point to this
     controller, with the controller field set to true. There cannot be more
     than one managing controller.

   resourceVersion      <string>
     An opaque value that represents the internal version of this object that
     can be used by clients to determine when objects have changed. May be used
     for optimistic concurrency, change detection, and the watch operation on a
     resource or set of resources. Clients must treat these values as opaque and
     passed unmodified back to the server. They may only be valid for a
     particular resource or set of resources.

     Populated by the system. Read-only. Value must be treated as opaque by
     clients and . More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency

   selfLink     <string>
     SelfLink is a URL representing this object. Populated by the system.
     Read-only.

     DEPRECATED Kubernetes will stop propagating this field in 1.20 release and
     the field is planned to be removed in 1.21 release.

   uid  <string>
     UID is the unique in time and space value for this object. It is typically
     generated by the server on successful creation of a resource and is not
     allowed to change on PUT operations.

     Populated by the system. Read-only. More info:
     http://kubernetes.io/docs/user-guide/identifiers#uids

[root@master ~]# 
Getting level-3 fields
  • Again, the level-3 fields usually shown are only a subset, not the complete set.
    Note: level-3 fields are children of a level-2 field, so to list them all you query that level-2 field.
  • Example: query the creationTimestamp and labels fields.
    Command: kubectl explain pods.<level-1-field>.<level-2-field>
[root@master ~]# kubectl explain pods.metadata.creationTimestamp
KIND:     Pod
VERSION:  v1

FIELD:    creationTimestamp <string>

DESCRIPTION:
     CreationTimestamp is a timestamp representing the server time when this
     object was created. It is not guaranteed to be set in happens-before order
     across separate operations. Clients may not set this value. It is
     represented in RFC3339 form and is in UTC.

     Populated by the system. Read-only. Null for lists. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

     Time is a wrapper around time.Time which supports correct marshaling to
     YAML and JSON. Wrappers are provided for many of the factory methods that
     the time package offers.
[root@master ~]# 
[root@master ~]# 
[root@master ~]# 
[root@master ~]# kubectl explain pods.metadata.labels
KIND:     Pod
VERSION:  v1

FIELD:    labels <map[string]string>

DESCRIPTION:
     Map of string keys and values that can be used to organize and categorize
     (scope and select) objects. May match selectors of replication controllers
     and services. More info: http://kubernetes.io/docs/user-guide/labels
[root@master ~]# 
What the "-" in the config file means, and when to add one
Notes
  • You've probably noticed the "-" in the YAML file we generated.
  • Put simply: a key with no "-" introduces a mapping (dictionary), while a "-" starts an item in a list.

If that's still not clear, no problem, the demo below spells it out.

Demo
  • As stated above, mappings have no "-" and list items do, so annotating the config file gives:
spec: # mapping
  containers: # mapping key
  - image: nginx # list item starts here
    imagePullPolicy: IfNotPresent # part of the list item
    name: pod1 # part of the list item
    resources: {} # part of the list item
  dnsPolicy: ClusterFirst # mapping
  • And since list items can repeat, adding a few more entries looks like this:
spec: # mapping
  containers: # mapping key
  - image: nginx # list item 1
    imagePullPolicy: IfNotPresent # part of the item
    name: pod1 # part of the item
    resources: {} # part of the item
    ... # more custom fields can go here
  - image: nginx # list item 2
    imagePullPolicy: IfNotPresent # part of the item
    name: pod1 # part of the item
    resources: {} # part of the item
    ... # more custom fields can go here
  - image: nginx # list item 3
    imagePullPolicy: IfNotPresent # part of the item
    name: pod1 # part of the item
    resources: {} # part of the item
    ... # more custom fields can go here
  dnsPolicy: ClusterFirst # mapping
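One caveat on the schematic above: in a real manifest each entry in the containers list needs its own unique name within the pod (the repeated pod1 is just there to mark the structure). A corrected minimal sketch:

```yaml
spec:
  containers:            # the key introduces a mapping entry...
  - image: nginx         # ...whose value is a list; each "-" starts one container
    name: c1
    resources: {}
  - image: nginx
    name: c2             # container names must differ within one pod
    resources: {}
  dnsPolicy: ClusterFirst
```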
restartPolicy parameter
  • The default value is Always
  restartPolicy: Always
  • The built-in field help describes it:
[root@master ~]# kubectl explain pods.spec.restartPolicy
KIND:     Pod
VERSION:  v1

FIELD:    restartPolicy <string>

DESCRIPTION:
     Restart policy for all containers within the pod. One of Always, OnFailure,
     Never. Default to Always. More info:
     https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy
[root@master ~]# 
  • It takes one of 3 values:
    • Always: always restart
    • OnFailure: restart only on abnormal exit [a container that finishes normally is not restarted; e.g. with a command of sleep 100, the container stops after 100 seconds and stays stopped]
    • Never: never restart
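To see the OnFailure behaviour concretely, here is a minimal sketch (the pod name, container name and sleep length are all made up for the example): the container exits 0 after 100 seconds, so with OnFailure the pod ends up Completed instead of being restarted forever.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sleeper                        # hypothetical name
spec:
  restartPolicy: OnFailure             # restart only on non-zero exit
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: c1
    command: ["sh", "-c", "sleep 100"] # exits normally -> pod shows Completed
```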
Editing the config file
  • The generated file can actually be used as-is, with no changes at all, to create the pod.
    Note: do check for the imagePullPolicy: IfNotPresent line; if it's missing, be sure to add it [provided the image has been prepared on the nodes].
  • Below I add an env variable section to demonstrate list usage.
    Note: an env value that is purely numeric must be quoted with "", otherwise creation fails.
[root@master ~]# cat pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    env:
    - name: aa
      value: xxx
    - name: bb
      value: "888"
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@master ~]#
Creating the pod from the file
  • Command: kubectl apply -f <yaml-file>
  • Example: create the pod from the file edited above:
[root@master ~]# kubectl apply -f pod1.yaml
pod/pod1 created
[root@master ~]# 
[root@master ~]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          32s
[root@master ~]# 
Creating multiple pods from one file
  • First, the config file again:
[root@master ~]# cat pod1.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    env:
    - name: aa
      value: xxx
    - name: bb
      value: "888"
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@master ~]# 
  • The conventional way to create several pods from config files is to cp the file above once per pod, but let's not do that here; one config file can create many pods, as follows.

  • Syntax

sed 's/pod1/pod2/' pod1.yaml | kubectl apply -f -

#pod1: the name in the config file
#pod2: the new name [customize it; to create more pods, just change this value]
# pod1.yaml is the config file's name
  • Example: create 3 pods from this one file: pod1, pod2 and pod3
[root@master ~]# kubectl get pods
No resources found in pod-1 namespace.
[root@master ~]# 
[root@master ~]# kubectl apply -f pod1.yaml 
pod/pod1 created
[root@master ~]# sed 's/pod1/pod2/' pod1.yaml  | kubectl apply -f -
pod/pod2 created
[root@master ~]# 
[root@master ~]# sed 's/pod1/pod3/' pod1.yaml  | kubectl apply -f -
pod/pod3 created
[root@master ~]# 
[root@master ~]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          35s
pod2   1/1     Running   0          15s
pod3   1/1     Running   0          5s
[root@master ~]# 
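Since the sed pipeline is just text substitution, you can sanity-check it locally with no cluster at all. The manifest below is a trimmed stand-in for pod1.yaml, and the path /tmp/pod1-demo.yaml is made up for the demo:

```shell
# Write a minimal stand-in manifest (only the fields sed needs to touch)
cat > /tmp/pod1-demo.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pod1
  name: pod1
spec:
  containers:
  - image: nginx
    name: pod1
EOF

# Rename every pod1 to pod2 on the fly; on a real cluster you would
# pipe this into `kubectl apply -f -` instead of just printing it
sed 's/pod1/pod2/' /tmp/pod1-demo.yaml
```

The original file is never modified, so the same one-liner keeps working for pod3, pod4, and so on.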
Handling the …expected "map" error when creating a pod
  • When creating a pod from a file fails, it's usually because a parameter you added is formatted wrong.
  • Error messages
    The 2 errors below, for instance, are both caused by badly formatted new parameters:
[root@master ~]# kubectl apply -f pod1.yaml
error: error validating "pod1.yaml": error validating data: [ValidationError(Pod.spec.containers[0].env[0]): invalid type for io.k8s.api.core.v1.EnvVar: got "string", expected "map", ValidationError(Pod.spec.containers[0].env[1]): invalid type for io.k8s.api.core.v1.EnvVar: got "string", expected "map"]; if you choose to ignore these errors, turn validation off with --validate=false

[root@master ~]# kubectl apply -f pod1.yaml
error: error parsing pod1.yaml: error converting YAML to JSON: yaml: line 17: could not find expected ':'

  • The fix is simply to add a space after the parameter's colon.

Deleting pods

Deleting by pod name [recommended]

  • Command: kubectl delete pod <pod-name> --force [--force deletes immediately without the graceful wait; for routine deletes you can leave it off]
    Example: delete the pod1 created above:
[root@master pod-1]# kubectl delete pod pod1
pod "pod1" deleted
[root@master pod-1]# 
[root@master pod-1]# kubectl get pods
No resources found in pod-1 namespace.
[root@master pod-1]# 
Deleting via the config file
  • This method mainly suits pods created from a YAML file.
    Command: kubectl delete -f <yaml-file>
  • Example: delete the pod created from the file above:
[root@master ~]# 
[root@master ~]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          10m
[root@master ~]# 
[root@master ~]# kubectl delete -f pod1.yaml 
pod "pod1" deleted
[root@master ~]# 
[root@master ~]# kubectl get pods
No resources found in pod-1 namespace.
[root@master ~]# 

Pod status values explained

  • When you run kubectl get pods, the STATUS column is the pod's state:
[root@master ~]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          3d6h
pod2   2/2     Running   27         3d5h
[root@master ~]#
  • Here is what the common states mean:
    • Running: scheduled onto a node and the containers are working normally
    • Pending: the pod is ready to be created but hasn't been, for some other reason [it's stuck]
    • Completed: every container in the pod exited normally [e.g. with restartPolicy OnFailure and a command of sleep 100, the status changes to this after 100 seconds]
    • Terminating: the pod is being deleted [the graceful shutdown; 30 seconds by default]
    • CrashLoopBackOff: the pod errors as soon as it's created, for an internal reason [e.g. a bad variable definition, storage that can't be reached, and so on]
    • ImagePullBackOff: the image pull failed while creating the pod [usually you need to set the imagePullPolicy parameter to change how the image is obtained]

Running multiple containers in one pod

  • Above we covered creating and deleting pods in detail, but only the everyday case of one container per pod; pod and container map one to one.

  • cp pod1's config file to pod2.yaml; pod2 is used for the explanation below:

[root@master ~]# cp pod1.yaml pod2.yaml

Then do a global replace inside it, changing every pod1 to pod2.

  • Finally, delete the env variables to get a bare-bones container file:
[root@master ~]# cat pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod2
  name: pod2
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod2
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@master ~]#

Notes

  • What does running multiple containers in one pod mean? Simply put, one pod definition lists several images; together with the pod-and-container picture above, that should make the idea clear.

  • Think of a pea pod: the shell is the pod, and the peas inside are the containers. One pod can hold many containers, and both the pod [shell] and the containers [peas] can be named freely.

  • Note: don't run two different kinds of service in one pod. nginx and mysql, for instance, are different applications; they should still run in different pods.

  • Use case: generally just different commands over the same image within one pod.
    For example: an nginx pod with an external disk mounted, plus extra commands over the same image to do things like log analysis.

  • So for the tests below, the point is just to understand that this usage exists; under normal circumstances, don't create multiple containers in one pod, it's error-prone.
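The log-analysis scenario above could look roughly like the sketch below (the names, paths and volume are all invented for illustration): both containers use the nginx image, and the second one only tails the access log from a shared emptyDir volume.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logtail             # hypothetical name
spec:
  containers:
  - image: nginx                     # serves traffic, writes logs
    imagePullPolicy: IfNotPresent
    name: web
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - image: nginx                     # same image, different command
    imagePullPolicy: IfNotPresent
    name: logtail
    command: ["sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /logs
  volumes:
  - name: logs                       # shared between the two containers
    emptyDir: {}
```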

The container CMD

  • First let's create a pod with 2 containers: duplicate the whole image block in the config file and change the names to c1 and c2:
[root@master ~]# cat pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod2
  name: pod2
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: c1
    resources: {}
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: c2
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@master ~]# 
[root@master ~]# kubectl apply -f pod2.yaml
pod/pod2 created
[root@master ~]# kubectl get pod
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          31m
pod2   1/2     Error     1          11s
[root@master ~]# 
  • The pod2 created above ends up in the Error state. The reason:
    by default each container runs the image's own CMD, and in the nginx image that CMD starts nginx.
    Containers in one pod share a single network namespace, so the second nginx cannot bind the port the first one already holds. So when a pod runs several containers from the same server image, from the second container onward you need to override the command with something that doesn't conflict; its content is otherwise up to you.

Overriding the container command at creation

  • I set the second container's command to sleep 10.
    The full file and the steps are in the transcript below; it should be self-explanatory.
[root@master ~]# vim pod2.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod2
  name: pod2
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: c1
    resources: {}
  - image: nginx
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 10"]
    name: c2
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@master ~]# 
[root@master ~]# kubectl delete pod pod2 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod2" force deleted
[root@master ~]# 
[root@master ~]# kubectl apply -f pod2.yaml 
pod/pod2 created
[root@master ~]# 
[root@master ~]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          35m
pod2   2/2     Running   0          7s
[root@master ~]# 

After the command was added, the newly created pod's status is Running.

  • Checking the pods again, the RESTARTS count keeps growing:
    c2's command is sleep 10 and the restartPolicy is Always, so the container exits every 10 seconds and is restarted each time [with an increasing back-off, hence the CrashLoopBackOff status].
[root@master ~]# kubectl get pods
NAME   READY   STATUS             RESTARTS   AGE
pod1   1/1     Running            0          37m
pod2   1/2     CrashLoopBackOff   3          104s
[root@master ~]# kubectl get pods
NAME   READY   STATUS             RESTARTS   AGE
pod1   1/1     Running            0          37m
pod2   1/2     CrashLoopBackOff   3          107s
[root@master ~]# kubectl get pods
NAME   READY   STATUS     RESTARTS   AGE
pod1   1/1     Running    0          37m
pod2   1/2     NotReady   4          2m26s
[root@master ~]# 
[root@master ~]# kubectl get pods
NAME   READY   STATUS     RESTARTS   AGE
pod1   1/1     Running    0          37m
pod2   1/2     NotReady   4          2m31s

Running commands inside a pod

Viewing a pod's details

  • Command: kubectl describe pod <pod-name>
  • Example: describe pod2; the output includes details for containers c1 and c2:
[root@master ~]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          77m
pod2   2/2     Running   0          5s
[root@master ~]# kubectl describe pod pod2
Name:         pod2
Namespace:    pod-1
Priority:     0
Node:         node1/192.168.59.143
Start Time:   Fri, 23 Jul 2021 12:17:01 +0800
Labels:       run=pod2
Annotations:  cni.projectcalico.org/podIP: 10.244.166.135/32
              cni.projectcalico.org/podIPs: 10.244.166.135/32
Status:       Running
IP:           10.244.166.135
IPs:
  IP:  10.244.166.135
Containers:
  c1:
    Container ID:   docker://32da51b11a075f077c392a8bab1a0aaa34423de21ee6a357d5ea15dadc8fee35
    Image:          nginx
    Image ID:       docker://sha256:d1a364dc548d5357f0da3268c888e1971bbdb957ee3f028fe7194f1d61c6fdee
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Fri, 23 Jul 2021 12:17:03 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x8cql (ro)
  c2:
    Container ID:  docker://fe480106b8205a38e22997f2611382e964f6ab9161cb23df2046ffc5d61cf216
    Image:         nginx
    Image ID:      docker://sha256:d1a364dc548d5357f0da3268c888e1971bbdb957ee3f028fe7194f1d61c6fdee
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      sleep 10000
    State:          Running
      Started:      Fri, 23 Jul 2021 12:17:03 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x8cql (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-x8cql:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  12s   default-scheduler  Successfully assigned pod-1/pod2 to node1
  Normal  Pulled     11s   kubelet            Container image "nginx" already present on machine
  Normal  Created    10s   kubelet            Created container c1
  Normal  Started    10s   kubelet            Started container c1
  Normal  Pulled     10s   kubelet            Container image "nginx" already present on machine
  Normal  Created    10s   kubelet            Created container c2
  Normal  Started    10s   kubelet            Started container c2
[root@master ~]# 

Running a command in a pod's container without entering bash

With a single container

  • Syntax: kubectl exec <pod-name> -- <command>
  • Example: run a few commands in pod1:
[root@master ~]# kubectl exec pod1 -- ls /tmp
[root@master ~]# 
[root@master ~]# kubectl exec pod1 -- ls /root
[root@master ~]# kubectl exec pod1 -- ls /var/log
apt
btmp
dpkg.log
faillog
lastlog
nginx
wtmp



#If a command doesn't exist in the container, you get an error like this:

[root@master ~]# kubectl exec pod1 -- ifconfig
OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: "ifconfig": executable file not found in $PATH: unknown
command terminated with exit code 126
[root@master ~]# 

Targeting a specific container

  • Command: kubectl exec <pod-name> -c <container-name> -- <command>
    If the pod has multiple containers and you don't specify one, kubectl prints a note about which container it defaulted to.
  • Example: inspect containers c1 and c2 in pod2:
[root@master ~]# kubectl get pod
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          79m
pod2   2/2     Running   0          2m
[root@master ~]# 
[root@master ~]# kubectl exec pod2 -- ls /tmp
Defaulted container "c1" out of: c1, c2
[root@master ~]# 
[root@master ~]# kubectl exec pod2 -c c1 -- ls /tmp
[root@master ~]# 
[root@master ~]# kubectl exec pod2 -c c2 -- ls /tmp
[root@master ~]# 

Opening a bash shell inside a pod's container

With a single container

  • Command: kubectl exec -it <pod-name> -- bash
  • Example: enter pod1's bash:
[root@master ~]# kubectl exec -it pod1 -- bash
root@pod1:/# 
root@pod1:/# ls
bin   docker-entrypoint.d   home   media  proc  sbin  tmp
boot  docker-entrypoint.sh  lib    mnt    root  srv   usr
dev   etc                   lib64  opt    run   sys   var
root@pod1:/# pwd
/
root@pod1:/# 

# This way is more intuitive: a missing command produces a normal shell message rather than an exec error
root@pod1:/# ifconfig
bash: ifconfig: command not found
root@pod1:/# 
root@pod1:/# exit
exit
command terminated with exit code 127
[root@master ~]# 

Targeting a specific container

    • Command: kubectl exec -it <pod-name> -c <container-name> -- bash
      If the pod has multiple containers and none is specified, kubectl prints a note and drops you into the first container by default.
  • Example: enter containers c1 and c2 in pod2:
[root@master ~]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          81m
pod2   2/2     Running   0          4m34s
[root@master ~]# 
[root@master ~]# kubectl exec -it pod2 -- bash
Defaulted container "c1" out of: c1, c2
root@pod2:/# 
root@pod2:/# exit
exit
[root@master ~]#
[root@master ~]# kubectl exec -it pod2 -c c1 -- bash
root@pod2:/# ls
bin   docker-entrypoint.d   home   media  proc  sbin  tmp
boot  docker-entrypoint.sh  lib    mnt    root  srv   usr
dev   etc                   lib64  opt    run   sys   var
root@pod2:/# exit
exit
[root@master ~]# 
[root@master ~]# kubectl exec -it pod2 -c c2 -- bash
root@pod2:/# ls
bin   docker-entrypoint.d   home   media  proc  sbin  tmp
boot  docker-entrypoint.sh  lib    mnt    root  srv   usr
dev   etc                   lib64  opt    run   sys   var
root@pod2:/# exit
exit
[root@master ~]#

Copying local files into a pod container (and back)

  • Command: kubectl cp <local-file> <pod-name>:<destination-path>
  • Example: copy the host's /etc/hosts into the container's /tmp
[root@master ~]# kubectl cp /etc/hosts pod1:/tmp
[root@master ~]# 
[root@master ~]# kubectl exec -it pod1 -- bash
root@pod1:/# ls /tmp/
hosts
root@pod1:/# cat hosts
cat: hosts: No such file or directory
root@pod1:/# cat /tmp/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.59.142 master
192.168.59.143 node1
192.168.59.144 node2

root@pod1:/# 
  • Copying from the container to the host uses the same syntax, reversed
    Command: kubectl cp <pod-name>:<path> <local-path-and-filename> (without a directory it defaults to the current one; a target filename must be given)
[root@master ~]# kubectl cp  pod1:/etc/hosts hosts
tar: Removing leading `/' from member names
[root@master ~]#
[root@master ~]# cat hosts 
# Kubernetes-managed hosts file.
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.244.104.5    pod1
[root@master ~]# 
[root@master ~]# rm -rf hosts 
[root@master ~]# 

Specifying a container

Same as the bash usage above: just add -c <container-name>.
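Incidentally, `kubectl cp` streams files through `tar` inside the container, which is where the `tar: Removing leading '/' from member names` message seen earlier comes from. The stripping of the leading slash can be reproduced locally without a cluster (the /tmp file names here are arbitrary, chosen just for the demo):

```shell
# create a throwaway file and archive it by absolute path;
# tar stores the member name without the leading "/"
echo hello > /tmp/cp-demo.txt
tar -cf /tmp/cp-demo.tar /tmp/cp-demo.txt 2>/dev/null
tar -tf /tmp/cp-demo.tar   # prints: tmp/cp-demo.txt
```

This is also why `kubectl cp` only works against containers whose image includes a `tar` binary.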

Viewing pod container logs (for troubleshooting)

  • Command: kubectl logs <pod-name>
  • Example: view pod1's log output
[root@master ~]# kubectl logs pod1
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2021/07/23 02:59:49 [notice] 1#1: using the "epoll" event method
2021/07/23 02:59:49 [notice] 1#1: nginx/1.21.0
2021/07/23 02:59:49 [notice] 1#1: built by gcc 8.3.0 (Debian 8.3.0-6) 
2021/07/23 02:59:49 [notice] 1#1: OS: Linux 3.10.0-957.el7.x86_64
2021/07/23 02:59:49 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2021/07/23 02:59:49 [notice] 1#1: start worker processes
2021/07/23 02:59:49 [notice] 1#1: start worker process 31
2021/07/23 02:59:49 [notice] 1#1: start worker process 32
2021/07/23 02:59:49 [notice] 1#1: start worker process 33
2021/07/23 02:59:49 [notice] 1#1: start worker process 34
[root@master ~]#

Specifying a container

Same as the bash usage above: just add -c <container-name>.

Pod lifecycle (graceful shutdown)

  • By default, deleting a pod waits for a while, normally 30 seconds; this is called the grace period, as shown below.
    [screenshot]
  • There are two ways to delete a pod without waiting out those 30 seconds:
    • Method 1: add --force to the delete command
    • Method 2: when creating the pod from a file, set the default 30 seconds to 0
  • The field is terminationGracePeriodSeconds, also known as graceful pod shutdown; details below
[root@master ~]# kubectl explain pods.spec | egrep -A 9 terminationGracePeriodSeconds
   terminationGracePeriodSeconds        <integer>
     Optional duration in seconds the pod needs to terminate gracefully. May be
     decreased in delete request. Value must be non-negative integer. The value
     zero indicates stop immediately via the kill signal (no opportunity to shut
     down). If this value is nil, the default grace period will be used instead.
     The grace period is the duration in seconds after the processes running in
     the pod are sent a termination signal and the time when the processes are
     forcibly halted with a kill signal. Set this value longer than the expected
     cleanup time for your process. Defaults to 30 seconds.

[root@master ~]#
  • I'll demonstrate method 2 below: when creating the pod from a file, add one field changing the default 30 seconds to 0.
    A pod created this way is deleted instantly even without --force.
[root@master ~]# cp pod2.yaml  pod3.yaml
[root@master ~]# vim pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod3
  name: pod3
spec:
  terminationGracePeriodSeconds: 0
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: c1
    resources: {}
  - image: nginx
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 10000"]
    name: c2
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
"pod3.yaml" 22L, 417C written                                    
[root@master ~]# kubectl apply -f pod3.yaml 
pod/pod3 created
[root@master ~]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          102m
pod2   2/2     Running   0          25m
pod3   2/2     Running   0          5s
[root@master ~]# 
[root@master ~]# kubectl delete pod pod3
pod "pod3" deleted
[root@master ~]# 

pod钩子【pod hook】

Overview

  • As we know, a container starts with the image's default CMD process; defining command merely replaces that default process
  • But if we want to run an additional process rather than replace the default one, we need a pod hook
  • Pod hooks come in two types:
    • postStart
      runs xxxx after the container starts
      Note: it runs concurrently with the main process; there is no ordering guarantee
    • preStop
      runs xxxx before the container is stopped
  • Full syntax:
spec:
  containers:
  - image: ...
    ...
    # the hook syntax:
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh","-c","command to run"]
      preStop:
        exec:
          command: ["/bin/sh","-c","command to run"]

demo

  • First, create a file with the following content
[root@master ~]# cat pod4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod4
  name: pod4
spec:
  terminationGracePeriodSeconds: 600
  containers:
  - image: nginx
    command: ["sh","-c","date > /tmp/aa.txt ; sleep 10000"]
    imagePullPolicy: IfNotPresent
    name: c1
    resources: {}
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh","-c","date > /tmp/bb.txt"]
      preStop:
        exec:
          command: ["/bin/sh","-c","date >> /tmp/bb.txt ; sleep 100"]
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@master ~]# 

#  terminationGracePeriodSeconds: 600   -> grace period set to 600 seconds
#  command: ["sh","-c","date > /tmp/aa.txt ; sleep 10000"]   -> the new main process
#  command: ["/bin/sh","-c","date > /tmp/bb.txt"]   -> runs at container start (concurrently with the main process)
#  command: ["/bin/sh","-c","date >> /tmp/bb.txt ; sleep 100"]   -> runs when the container is stopped (deleted)
  • Note the code comments above. Next we create this pod and verify the main process and the postStart hook: since the two run concurrently, the timestamps their commands wrote should match. If they do, the logic holds.
[root@master ~]# kubectl apply -f pod4.yaml 
pod/pod4 created
[root@master ~]# 
[root@master ~]# kubectl exec -it pod4 -- bash
root@pod4:/# 
root@pod4:/# cat /tmp/aa.txt 
Mon Jul 26 09:10:48 UTC 2021
root@pod4:/# 
root@pod4:/# cat /tmp/bb.txt 
Mon Jul 26 09:10:48 UTC 2021
root@pod4:/# 
root@pod4:/# 
root@pod4:/# exit
exit
[root@master ~]#
  • Finally, the preStop hook we wrote appends a timestamp and sleeps 100 seconds. The logic on deletion is:
    when the pod is deleted, a timestamp is immediately appended to bb.txt and sleep 100 starts; meanwhile the main process's sleep 10000 keeps running, so even after the 100 seconds end the container will not exit on its own;
    but since we set the grace period to 600, after 600 seconds the container is forcibly removed.
    Now let's test this
[root@master ~]# kubectl get pod
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          3d6h
pod2   2/2     Running   27         3d4h
pod4   1/1     Running   0          3m52s
[root@master ~]# kubectl delete pod pod4
pod "pod4" deleted
# the delete hangs here; open a second terminal and get a bash shell inside the pod


[root@master ~]# kubectl exec -it pod4 -- bash
root@pod4:/# cat /tmp/bb.txt 
Mon Jul 26 09:10:48 UTC 2021
Mon Jul 26 09:14:45 UTC 2021
root@pod4:/# 
root@pod4:/# exit
exit
[root@master ~]# 
[root@master ~]# kubectl get pod
NAME   READY   STATUS        RESTARTS   AGE
pod1   1/1     Running       0          3d6h
pod2   2/2     Running       27         3d4h
pod4   1/1     Terminating   0          5m43s
[root@master ~]# 

# some time later
[root@master ~]# kubectl get pod | tail -n 1
pod4   1/1     Terminating   0          8m52s
[root@master ~]# 

# the delete started around the 3-minute mark; it's now past 8 minutes, so the 600 seconds are nearly up and the container should be removed within a minute
# back in the first terminal, the delete has finished and pod4 is gone

[root@master ~]# kubectl delete pod pod4
pod "pod4" deleted
[root@master ~]# 
[root@master ~]# kubectl get pod
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          3d6h
pod2   2/2     Running   27         3d5h
[root@master ~]# 
  • Note: with our nginx example, if the main process is left as the default nginx (no command override), deletion does not wait the full 30 seconds; it may finish in fourteen or fifteen. If you want the default nginx to shut down gracefully too (30 seconds by default, or however long you configure), define a preStop command with the -s quit flag, as below:
    with this in place, the container is not removed early; it stays until the grace period is honored.
...
   preStop:
          exec:
            command: ["/bin/sh","-c","/usr/sbin/nginx -s quit"]
...
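Putting the pieces together, a minimal sketch of such a pod spec might look like this (the pod name `nginx-graceful` is hypothetical; the grace period and preStop command are the ones discussed above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-graceful
spec:
  terminationGracePeriodSeconds: 30   # how long kubectl delete will wait
  containers:
  - name: c1
    image: nginx
    imagePullPolicy: IfNotPresent
    lifecycle:
      preStop:
        exec:
          # ask nginx to finish in-flight requests before exiting
          command: ["/bin/sh","-c","/usr/sbin/nginx -s quit"]
```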

Init containers

Overview

  • The concept of init containers

    • Suppose container A depends on others: you can give A several init containers A1, A2, A3. They must start in order; if A1 fails to start, A2 and A3 will not run, and the main container A only starts after all init containers have completed.
    • Typically used to do preparation work before container A runs.
    • Note: if an init container fails, it keeps restarting and the pod is never created
  • We covered pod hooks above, which run alongside the container or when it is deleted; init containers are similar, except they run before the container starts, as shown below:
    [diagram]

  • Full syntax:

spec:
...
  initContainers:
  - name: initc1 # custom name
    image: nginx # image name
    imagePullPolicy: IfNotPresent # pull policy
    command: ["sh","-c","sleep 20"] # command
...
  • Note: every pod, even without any init containers defined, gets a hidden infrastructure container called the pause container. It exists so the pod's containers can run normally, e.g. it holds the pod's network namespace.
    For example, create any pod, check which node it runs on, then list the docker containers on that node: a corresponding pause container will be there
[root@master ~]# kubectl apply -f pod5.yaml 
pod/pod5 created
[root@master ~]# 
[root@master ~]# kubectl get pods -o wide
NAME   READY   STATUS     RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
pod5   0/1     Init:0/1   0          8s    10.244.166.138   node1   <none>           <none>
[root@master ~]# 
[root@master ~]# ssh node1
root@node1's password: 
Last login: Tue Jul 27 10:43:30 2021 from master
[root@node1 ~]# 
[root@node1 ~]# docker ps | grep pause
f172aa85d2f1   registry.aliyuncs.com/google_containers/pause:3.4.1   "/pause"                 2 minutes ago        Up 2 minutes                  k8s_POD_pod5_pod-1_5d3283f7-0fb2-41d2-9a3f-d586ad92ddb7_0
9eb01589f988   registry.aliyuncs.com/google_containers/pause:3.4.1   "/pause"                 8 days ago           Up 8 days                     k8s_POD_calico-node-zl42z_kube-system_7d504cb1-790f-407f-b5f7-f292cef949a5_1
be2b39468acd   registry.aliyuncs.com/google_containers/pause:3.4.1   "/pause"                 8 days ago           Up 8 days                     k8s_POD_kube-proxy-7nqfv_kube-system_cb31c4f7-7dcc-4632-b281-907cef422133_1
[root@node1 ~]# 


#one pause container per pod
# that is, besides the pod's own containers, a pause container is also running
[root@node1 ~]# docker ps | grep pod5
69e1009f9b59   d1a364dc548d                                          "/docker-entrypoint.…"   3 minutes ago   Up 3 minutes             k8s_c1_pod5_pod-1_5d3283f7-0fb2-41d2-9a3f-d586ad92ddb7_0
f172aa85d2f1   registry.aliyuncs.com/google_containers/pause:3.4.1   "/pause"                 3 minutes ago   Up 3 minutes             k8s_POD_pod5_pod-1_5d3283f7-0fb2-41d2-9a3f-d586ad92ddb7_0
[root@node1 ~]#

Rules

  • 1. They always run to completion.
  • 2. Each must complete successfully before the next one starts.
  • 3. If a pod's init container fails, Kubernetes restarts the pod repeatedly until the init container succeeds. However, if the pod's restartPolicy is Never, it is not restarted.
  • 4. Init containers support all fields and features of app containers, except readiness probes, because they must run to completion before the pod can be ready.
  • 5. If multiple init containers are specified for a pod, they run sequentially, one at a time. Each must succeed before the next can run.
  • 6. Because init containers may be restarted, retried, or re-executed, their code should be idempotent. In particular, code that writes files into EmptyDirs should be prepared for the output file to already exist.
  • 7. Use activeDeadlineSeconds on the pod and livenessProbe on the container to keep init containers from failing forever. This sets a deadline on init container activity.
  • 8. The name of each app and init container in a pod must be unique; sharing a name with any other container raises a validation error.
  • 9. Changes to an init container spec are limited to the container image field. Changing an init container's image field is equivalent to restarting the pod.

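Rule 7 can be sketched as follows (pod and container names here are hypothetical): activeDeadlineSeconds caps the pod's total runtime, so an init container stuck in a retry loop cannot keep the pod pending forever.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-deadline-demo
spec:
  activeDeadlineSeconds: 120   # fail the whole pod if not done within 120s
  restartPolicy: Never
  initContainers:
  - name: init-wait
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 5"]   # stand-in for real preparation work
  containers:
  - name: main
    image: nginx
    imagePullPolicy: IfNotPresent
```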
demo

  • Here I'll add a single init container as an example
    Full file below, ready to copy:
[root@master ~]# cat pod5.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod5
  name: pod5
spec:
  terminationGracePeriodSeconds: 0
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: c1
    resources: {}
  initContainers:
  - name: initc1
    image: nginx
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 20"]
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
[root@master ~]# 
  • Create the pod and watch its status
    Because the init container's command is sleep 20, the pod shows an init status for the first 20 seconds and only turns Running after that
[root@master ~]# kubectl apply -f pod5.yaml 
pod/pod5 created
[root@master ~]# kubectl get pods
NAME   READY   STATUS     RESTARTS   AGE
pod1   1/1     Running    0          3d7h
pod2   2/2     Running    28         3d6h
pod5   0/1     Init:0/1   0          3s
[root@master ~]# 
[root@master ~]# kubectl get pods | tail -n 1 
pod5   0/1     Init:0/1   0          15s
[root@master ~]# 
[root@master ~]# kubectl get pods | tail -n 1 
pod5   0/1     Init:0/1   0          17s
[root@master ~]# kubectl get pods | tail -n 1 
pod5   0/1     Init:0/1   0          20s
[root@master ~]# kubectl get pods | tail -n 1 
pod5   0/1     Init:0/1   0          22s
[root@master ~]# kubectl get pods | tail -n 1 
pod5   1/1     Running   0          24s
[root@master ~]# 
  • To define multiple init containers, add more entries (with unique names) under the single initContainers list, e.g.:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod5
  name: pod5
spec:
  terminationGracePeriodSeconds: 0
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: c1
    resources: {}
  initContainers:
  - name: initc1
    image: nginx
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 20"]
  - name: initc2
    image: nginx
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 20"]
  - name: initc3
    image: nginx
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 20"]
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}

Extended demo: modifying a kernel parameter

  • Say we create a pod that only runs correctly if /proc/sys/vm/swappiness on its host is 0.
  • We could log into each node and set the value to 0 by hand, but with many nodes that doesn't scale; instead, use an init container to set the value to 0 before the pod runs
  • Keep in mind that changing this parameter inside a container actually changes the host's kernel parameter. For safety, pods are not allowed to modify kernel parameters by default, and seccomp controls which operations a container may perform
  • Image preparation: pull the alpine image on all node nodes
    Every node must have this image; otherwise, if the pod created on the master gets scheduled to a node without it, creation will fail.
[root@node1 ~]# docker pull alpine

# after the pull, the image is present
[root@node1 ~]# docker images | grep alpine
alpine                                                            latest     d4ff818577bc   5 weeks ago     5.6MB
[root@node1 ~]#
  • The config file is as follows
[root@master ~]# cat pod6.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod6
  name: pod6
spec:
  terminationGracePeriodSeconds: 0
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: c1
    resources: {}
  initContainers:
  - name: initc1
    image: alpine
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","/sbin/sysctl -w vm.swappiness=0"]
    securityContext:
      privileged: true
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
[root@master ~]# 


#explanation below
 initContainers: # define the init container
  - name: initc1 # custom name
    image: alpine # image
    imagePullPolicy: IfNotPresent # pull policy
    command: ["sh","-c","/sbin/sysctl -w vm.swappiness=0"]
    securityContext: # this is what allows the container to modify host kernel parameters
      privileged: true # true = allowed
  • Create the pod
[root@master ~]# kubectl apply -f pod6.yaml 
pod/pod6 created
# this command shows which node the pod runs on
[root@master ~]# kubectl get pods -o wide | grep pod6
pod6   1/1     Running   0          80s     10.244.166.137   node1   <none>           <none>
[root@master ~]#
  • Verify
    Since this pod is allowed to modify the host value, and we saw above that it runs on node1, check whether the value on node1 was changed
    A result of 0 means it worked!
[root@master ~]# ssh node1
root@node1's password: 
Last login: Tue Jul 27 10:13:10 2021 from master
[root@node1 ~]# 
[root@node1 ~]# cat /proc/sys/vm/swappiness 
0
[root@node1 ~]#
  • As an aside: on hosts running other pods this value is still 30, because by default pods are not allowed to modify it; it only became 0 here because we explicitly granted that permission
    For example, pod5 runs on node2 but has no securityContext allowing kernel changes, so on node2 the value should still be the default 30
[root@master ~]# kubectl get pod -o wide | grep pod5
pod5   1/1     Running   0          16h     10.244.104.14    node2   <none>           <none>
[root@master ~]# 
[root@master ~]# ssh node2
root@node2's password: 
Last login: Tue Jul 27 10:15:23 2021 from master
[root@node2 ~]# cat /proc/sys/vm/swappiness
30
[root@node2 ~]# 

Extended demo: sharing init-container data with the main container

  • Note: this only demonstrates an approach, a way of thinking; don't treat it as persistence. Pods created this way lose the data when they are deleted. Persistence is covered in separate volume notes; see the k8s category on my blog.
  • Image preparation: pull the busybox image on all node nodes
    Every node must have this image; otherwise, if the pod created on the master gets scheduled to a node without it, creation will fail.
[root@node2 ~]# docker pull busybox
[root@node2 ~]# docker images | grep bus
busybox                                                           latest     69593048aa3a   7 weeks ago     1.24MB
[root@node2 ~]# 
  • The config file is as follows
[root@node2 ~]# exit
logout
Connection to node2 closed.
[root@master ~]# cat pod7.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  terminationGracePeriodSeconds: 0
  volumes:
  - name: nodedir
    emptyDir: {}
  containers:
  - name: myapp-container
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: nodedir
      mountPath: "/xx"
  initContainers:
  - name: initc1
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","touch /node-dir/aa.txt"]
    volumeMounts:
    - name: nodedir
      mountPath: "/node-dir"
[root@master ~]# 
  • Flow (in the diagram, red is step 1, yellow is step 2; my screenshot tool crashed mid-capture so I took a photo, bear with it)
    In short:
    1. Define a pod-level emptyDir volume, backed by a directory on the node
    2. During initialization, the init container mounts that volume at /node-dir and creates a file in it, so the shared directory now holds the file. Note: the init image differs from the main container's image; once touch completes, the init container exits (the main container is only created after initialization finishes), so we cannot enter it. We use busybox because it is tiny and bundles common Linux commands; an image lacking touch would make this pod fail.
    3. The main container mounts the same volume at /xx; after it starts, we can enter it and should see the file created during initialization
    [photo]
  • Create the pod and test
[root@master ~]# kubectl apply -f pod7.yaml
pod/myapp-pod created
[root@master ~]# kubectl get pods
NAME        READY   STATUS    RESTARTS   AGE
myapp-pod   1/1     Running   0          64s
[root@master ~]# kubectl exec -it myapp-pod -- bash
Defaulted container "myapp-container" out of: myapp-container, initc1 (init)
root@myapp-pod:/# 
root@myapp-pod:/# ls /xx
aa.txt
root@myapp-pod:/# 
root@myapp-pod:/# exit
exit
command terminated with exit code 1
[root@master ~]#
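The emptyDir handoff above can be simulated locally without a cluster: an "init" step writes a file into a shared directory, and a "main" step then sees it. The mktemp path is arbitrary, standing in for the kubelet-managed emptyDir backing directory:

```shell
# shared directory playing the role of the emptyDir volume
SHARED=$(mktemp -d)
touch "$SHARED/aa.txt"   # what initc1 does via its /node-dir mount
ls "$SHARED"             # what myapp-container sees via its /xx mount
```

Both containers see the same directory contents because they mount the same volume; only the mount paths differ.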