kubectl [command] [TYPE] [NAME] [flags]

  • command: the subcommand that performs an operation on the resource, e.g. get, create, delete, run, etc.
  • TYPE: the type of resource to operate on, e.g. pods, services, etc.; type names are case-sensitive, and singular, plural, and abbreviated forms are all accepted.
  • NAME: the name of the resource object to operate on, which is case-sensitive; if omitted, the command applies to all resource objects of the given TYPE; several names of the same type may be listed after TYPE, or the TYPE/NAME form may be used to specify the type for each resource object individually (as shown in the example below).
  • flags: command-line options such as -s or --server; in addition, commands such as get support the commonly used -o <format> flag for specifying the output format.
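
A minimal illustration of these forms, assuming placeholder objects pod-a, pod-b, and svc-a in a dev namespace (none of which come from the sessions later in this document):

kubectl get pods                                # plural TYPE, all objects of the type
kubectl get po pod-a pod-b -n dev               # abbreviated TYPE with several names
kubectl get pods/pod-a services/svc-a -n dev    # TYPE/NAME form, mixing resource types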

command

  • Beginner-level basic commands

    • create: create a resource from a file or stdin
    • expose: create a Service resource based on an RC, Service, Deployment, or Pod
    • run: run the specified image in the cluster as a Pod
    • set: set specific attributes on the target resource object
  • Intermediate-level basic commands

    • get: display one or more resources
    • explain: print built-in documentation for the specified resource (see the example after this list)
    • edit: edit a resource
    • delete: delete resources by file name, stdin, resource and name, or resource and label selector
  • Deployment commands

    • rollout: manage the rollout of a resource
    • scale: scale a Deployment, ReplicaSet, RC, or Job
    • autoscale: auto-scale a Deployment, ReplicaSet, or RC
  • Cluster management commands

    • certificate: manage certificate resources
    • cluster-info: print cluster information
    • top: print resource usage
    • cordon: mark the specified node as unschedulable
    • uncordon: mark the specified node as schedulable
    • drain: "drain" the Pods off a node to put it into maintenance mode
    • taint: declare taints and their effects on a node
  • Troubleshooting and debugging commands

    • describe: show detailed information about the specified resource or group of resources
    • logs: print the logs of a container in a Pod
    • attach: attach a terminal to a running container
    • exec: execute the specified command inside a container
    • port-forward: forward one or more local ports to the specified Pod
    • proxy: create a proxy to the Kubernetes API Server
    • cp: copy files and directories to and from containers
    • auth: inspect authorization information
  • Advanced commands

    • diff: show the difference between the live version and the version about to be applied
    • apply: apply a configuration to a resource from a file or stdin
    • patch: update fields of a resource using a strategic merge patch
    • replace: replace a resource from a file or stdin
    • wait: wait for a specific condition on one or more resources
    • convert: convert configuration files between different API versions
    • kustomize: build a kustomization target from a directory or URL
  • Settings commands

    • label: update the labels on the specified resource
    • annotate: update the annotations on a resource
    • completion: output completion code for the specified shell
  • Other commands

    • version: print the Kubernetes server and client version information
    • api-versions: print the API versions supported by the server, in group/version form
    • api-resources: print the resource types supported by the API
    • config: modify the contents of kubeconfig files
    • plugin: run a command-line plugin
    • alpha: subcommands that are still in the alpha stage
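
As a brief illustration of two of these subcommands (deployment/demo-nginx and the dev namespace refer to the demo created later in this document), built-in field documentation and rollout status can be queried as follows:

kubectl explain pods.spec.containers
kubectl rollout status deployment/demo-nginx -n dev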

get -o

  • -o wide: show additional information about the resource in plain-text format
  • -o name: print only the resource names
  • -o yaml: output the API object in YAML format
  • -o json: output the API object in JSON format
  • -o jsonpath: output API object fields selected by a custom JSONPath template
  • -o go-template: output the API object using a custom Go template
  • -o custom-columns: specify custom columns to output (see the example after this list)
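
For instance, against the kube-system namespace shown in the sessions below, a custom column layout might be requested like this (the column definitions here are purely illustrative):

kubectl get pods -n kube-system -o custom-columns=NAME:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName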

Example commands

Creating resource objects

  1. kubectl get namespace: list the namespaces
  2. kubectl create namespace dev: create the dev namespace
  3. docker search nginx: search for the nginx image
  4. docker pull nginx: pull the nginx image
  5. kubectl create deployment demo-nginx --image="docker.io/nginx:latest" -n dev: create a Deployment controller object named demo-nginx in the dev namespace
  6. kubectl get po -n dev: list the Pods in the dev namespace
  7. kubectl create service clusterip demo-nginx --tcp=80 -n dev: create a Service resource object named demo-nginx in the dev namespace
  8. kubectl get svc -n dev: list the Services in the dev namespace
[root@k8s-master01 ~]# kubectl get namespace
NAME              STATUS   AGE
default           Active   123d
halm              Active   16h
kube-node-lease   Active   123d
kube-public       Active   123d
kube-system       Active   123d
[root@k8s-master01 ~]# kubectl create namespace dev
namespace/dev created
[root@k8s-master01 ~]# docker search nginx
NAME                               DESCRIPTION                                     STARS               OFFICIAL            AUTOMATED
nginx                              Official build of Nginx.                        14630               [OK]                
[root@k8s-master01 ~]# docker pull nginx
[root@k8s-master01 ~]# kubectl create deployment demo-nginx --image="docker.io/nginx:latest" -n dev
deployment.apps/demo-nginx created
[root@k8s-master01 ~]# kubectl get po -n dev
NAME                          READY   STATUS    RESTARTS   AGE
demo-nginx-79d8fd4b47-4bfmj   1/1     Running   0          3m22s
[root@k8s-master01 ~]# kubectl create service clusterip demo-nginx --tcp=80 -n dev
service/demo-nginx created
[root@k8s-master01 ~]# kubectl get svc -n dev
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
demo-nginx   ClusterIP   10.103.93.234   <none>        80/TCP    115s

Assume there is a deployment-demoapp.yaml file defining a Deployment object and a service-demoapp.yaml file defining a Service object; the kubectl create command then performs an imperative object-configuration creation based on those files:

kubectl create -f deployment-demoapp.yaml -f service-demoapp.yaml

Creation can even be left to kubectl to work out on its own: the user merely declares the desired state, an approach known as declarative object configuration. Again using the deployment-demoapp.yaml and service-demoapp.yaml files, the kubectl apply command applies the configuration declaratively:

kubectl apply -f deployment-demoapp.yaml -f service-demoapp.yaml
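
Before applying, the pending changes can be previewed with the diff subcommand listed earlier (its availability and exact behavior depend on the kubectl version in use):

kubectl diff -f deployment-demoapp.yaml -f service-demoapp.yaml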

Viewing resource objects

List all Namespace resource objects in the system

[root@k8s-master01 ~]# kubectl get namespaces
NAME              STATUS   AGE
default           Active   123d
dev               Active   35m
halm              Active   17h
kube-node-lease   Active   123d
kube-public       Active   123d
kube-system       Active   123d
[root@k8s-master01 ~]# 

View resource objects of several resource types at once

[root@k8s-master01 ~]# kubectl get pods,services -o wide -n dev
NAME                              READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
pod/demo-nginx-79d8fd4b47-4bfmj   1/1     Running   0          27m   10.244.3.5   k8s-node01   <none>           <none>

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/demo-nginx   ClusterIP   10.103.93.234   <none>        80/TCP    23m   app=demo-nginx
[root@k8s-master01 ~]# 

Query all Pod objects in the kube-system namespace that carry the k8s-app label

[root@k8s-master01 ~]# kubectl get pods -l k8s-app -n kube-system
NAME                       READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-gskbg   1/1     Running   6          123d
coredns-5c98db65d4-kgrls   1/1     Running   6          123d
kube-proxy-6ksnl           1/1     Running   12         123d
kube-proxy-fbtln           1/1     Running   1          118d
kube-proxy-tkwvb           1/1     Running   1          118d

The following command extracts the resource name of the Pod object carrying the k8s-app=kube-dns label in the kube-system namespace

[root@k8s-master01 ~]# kubectl get pods -l k8s-app=kube-dns -n kube-system -o\
> jsonpath="{.items[0].metadata.name}"
coredns-5c98db65d4-gskbg[root@k8s-master01 ~]# 
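
The JSONPath expression above selects only the first matching item; as an illustrative variation (not a transcript from this cluster), a wildcard index returns the names of all matching Pods:

kubectl get pods -l k8s-app=kube-dns -n kube-system -o jsonpath="{.items[*].metadata.name}"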

Printing detailed information about resource objects

Every resource object carries two kinds of state information: the state desired by the user (Spec) and the actual current state (Status). kubectl get -o {yaml|json} prints the specification of a resource object in YAML or JSON format respectively, while the kubectl describe command prints a detailed description of the specified resource object.
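
For example, to extract only the actual state rather than the whole object, a JSONPath query can be aimed at the Status section; the following sketch targets the kube-apiserver Pod used in the session below:

kubectl get pods kube-apiserver-k8s-master01 -n kube-system -o jsonpath="{.status.phase}"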

[root@k8s-master01 ~]# kubectl get po -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-gskbg               1/1     Running   6          123d
coredns-5c98db65d4-kgrls               1/1     Running   6          123d
etcd-k8s-master01                      1/1     Running   12         123d
kube-apiserver-k8s-master01            1/1     Running   12         123d
kube-controller-manager-k8s-master01   1/1     Running   12         123d
kube-flannel-ds-25ch5                  1/1     Running   6          123d
kube-flannel-ds-lpdbl                  1/1     Running   1          118d
kube-flannel-ds-rk4t5                  1/1     Running   1          118d
kube-proxy-6ksnl                       1/1     Running   12         123d
kube-proxy-fbtln                       1/1     Running   1          118d
kube-proxy-tkwvb                       1/1     Running   1          118d
kube-scheduler-k8s-master01            1/1     Running   12         123d
[root@k8s-master01 ~]# kubectl get pods kube-apiserver-k8s-master01 -o yaml -n kube-system
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/config.hash: 6754dca8c9e8cd2ff27474458b9f49f8
    kubernetes.io/config.mirror: 6754dca8c9e8cd2ff27474458b9f49f8
    kubernetes.io/config.seen: "2020-11-23T22:41:26.832470908+08:00"
    kubernetes.io/config.source: file
  creationTimestamp: "2020-11-23T14:43:11Z"
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver-k8s-master01
  namespace: kube-system
  resourceVersion: "41084"
  selfLink: /api/v1/namespaces/kube-system/pods/kube-apiserver-k8s-master01
  uid: a7618970-dadf-407c-9982-79445328a633
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.15.154
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: k8s.gcr.io/kube-apiserver:v1.15.1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 192.168.15.154
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  hostNetwork: true
  nodeName: k8s-master01
  priority: 2000000000
  priorityClassName: system-cluster-critical
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    operator: Exists
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2021-03-27T08:46:45Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2021-03-27T08:46:49Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2021-03-27T08:46:49Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2021-03-27T08:46:45Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://03a543b243a443cefc3f51cab7a5a24a05b4b1a162405368affeefbe7887fbe7
    image: k8s.gcr.io/kube-apiserver:v1.15.1
    imageID: docker://sha256:68c3eb07bfc3fc02f468d9e56564fd97fb4d75879b5f7c3ce1d8af4f60d32865
    lastState:
      terminated:
        containerID: docker://139205e03ea06e01eb103d00c16765a014a787b339b9a0240e65420973026050
        exitCode: 1
        finishedAt: "2020-11-28T20:15:43Z"
        reason: Error
        startedAt: "2020-11-28T14:44:28Z"
    name: kube-apiserver
    ready: true
    restartCount: 12
    state:
      running:
        startedAt: "2021-03-27T08:46:47Z"
  hostIP: 192.168.15.154
  phase: Running
  podIP: 192.168.15.154
  qosClass: Burstable
  startTime: "2021-03-27T08:46:45Z"
[root@k8s-master01 ~]# 

The kubectl describe command can also show other resource objects related to the current object, such as Events or its controller

[root@k8s-master01 ~]# kubectl describe pods kube-apiserver-k8s-master01  -n kube-system
Name:                 kube-apiserver-k8s-master01
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 k8s-master01/192.168.15.154
Start Time:           Sat, 27 Mar 2021 16:46:45 +0800
Labels:               component=kube-apiserver
                      tier=control-plane
Annotations:          kubernetes.io/config.hash: 6754dca8c9e8cd2ff27474458b9f49f8
                      kubernetes.io/config.mirror: 6754dca8c9e8cd2ff27474458b9f49f8
                      kubernetes.io/config.seen: 2020-11-23T22:41:26.832470908+08:00
                      kubernetes.io/config.source: file
Status:               Running
IP:                   192.168.15.154
Containers:
  kube-apiserver:
    Container ID:  docker://03a543b243a443cefc3f51cab7a5a24a05b4b1a162405368affeefbe7887fbe7
    Image:         k8s.gcr.io/kube-apiserver:v1.15.1
    Image ID:      docker://sha256:68c3eb07bfc3fc02f468d9e56564fd97fb4d75879b5f7c3ce1d8af4f60d32865
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-apiserver
      --advertise-address=192.168.15.154
      --allow-privileged=true
      --authorization-mode=Node,RBAC
      --client-ca-file=/etc/kubernetes/pki/ca.crt
      --enable-admission-plugins=NodeRestriction
      --enable-bootstrap-token-auth=true
      --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
      --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
      --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
      --etcd-servers=https://127.0.0.1:2379
      --insecure-port=0
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
      --requestheader-allowed-names=front-proxy-client
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
      --requestheader-extra-headers-prefix=X-Remote-Extra-
      --requestheader-group-headers=X-Remote-Group
      --requestheader-username-headers=X-Remote-User
      --secure-port=6443
      --service-account-key-file=/etc/kubernetes/pki/sa.pub
      --service-cluster-ip-range=10.96.0.0/12
      --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
      --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    State:          Running
      Started:      Sat, 27 Mar 2021 16:46:47 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 28 Nov 2020 22:44:28 +0800
      Finished:     Sun, 29 Nov 2020 04:15:43 +0800
    Ready:          True
    Restart Count:  12
    Requests:
      cpu:        250m
    Liveness:     http-get https://192.168.15.154:6443/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
    Environment:  <none>
    Mounts:
      /etc/kubernetes/pki from k8s-certs (ro)
      /etc/pki from etc-pki (ro)
      /etc/ssl/certs from ca-certs (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  ca-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:  DirectoryOrCreate
  etc-pki:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/pki
    HostPathType:  DirectoryOrCreate
  k8s-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki
    HostPathType:  DirectoryOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute
Events:            <none>
[root@k8s-master01 ~]# 

Printing container logs

An application container usually runs only a single process (plus its child processes); that process acts as PID 1, receives and handles signals, and writes its log output directly to the container's console, so container logs are generally retrieved through the container console.

The kubectl logs command prints the logs of a specified container inside a Pod object; its format is kubectl logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER] [options]. If the Pod object has only one container, the -c option and the container name may be omitted.

The following command captures the name of the Pod carrying the label k8s-app=kube-dns; the same result can also be obtained with kubectl get po -n kube-system|grep "dns"|awk '{print $1}'

DNS_POD=`kubectl get pods -l k8s-app=kube-dns -n kube-system -ojsonpath="{.items[0].metadata.name}"`

Below, the logs command is combined with this variable to view the logs

[root@k8s-master01 ~]# DNS_POD=`kubectl get pods -l k8s-app=kube-dns -n kube-system -ojsonpath="{.items[0].metadata.name}"`
[root@k8s-master01 ~]# kubectl logs $DNS_POD -n kube-system
.:53
2021-03-27T08:47:07.043Z [INFO] CoreDNS-1.3.1
2021-03-27T08:47:07.043Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2021-03-27T08:47:07.043Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843
[root@k8s-master01 ~]# 

kubectl logs -f streams the output continuously, much like tail -f
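
As a sketch reusing the DNS_POD variable captured above, following the stream or limiting it to the most recent lines might look like this:

kubectl logs -f $DNS_POD -n kube-system
kubectl logs --tail=20 $DNS_POD -n kube-system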

Executing commands in a container

The kubectl exec command runs another program inside the specified container

For example, run the ps command in the only container of the kube-apiserver-k8s-master01 Pod object in the kube-system namespace

[root@k8s-master01 ~]# kubectl exec kube-apiserver-k8s-master01 -n kube-system -- ps

As with the logs command, if a Pod object contains several containers, the target container must be selected with the -c option before the command can run, and the specified program must actually exist inside the container for it to succeed.
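
For illustration only, with POD_NAME and demo-container as placeholders rather than objects from this document, an interactive shell in a specific container of a multi-container Pod could be opened like this:

kubectl exec -it POD_NAME -c demo-container -n dev -- /bin/sh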

Deleting resource objects

The kubectl delete command deletes the specified resource objects

For example, delete a Service

[root@k8s-master01 ~]# kubectl get svc -n dev
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
demo-nginx   ClusterIP   10.103.93.234   <none>        80/TCP    2d5h
[root@k8s-master01 ~]# kubectl delete svc demo-nginx -n dev
service "demo-nginx" deleted
[root@k8s-master01 ~]# kubectl get svc -n dev
No resources found.

For example, delete all Pod objects carrying a given label (a dangerous operation; never test it on a production cluster)

#kubectl delete pods -l k8s-app=kube-proxy -n kube-system

For example, delete all objects of a certain type in the specified namespace

#kubectl delete pods --all -n kube-public

Some resource types support a graceful deletion mechanism and have a default deletion grace period; for Pod resources the default grace period is 30 seconds, but the user can override it with the --grace-period or --now option on the command line. The command below forcibly deletes the specified Pod object, although such a deletion may leave the related containers unable to terminate and exit. Note in particular that for an object managed by a controller, only the managed object itself is deleted; its controller will likely recreate a similar one, e.g. a Pod object under a Deployment controller is recreated as soon as it is deleted.

[root@k8s-master01 ~]# kubectl get po -n dev
NAME                          READY   STATUS    RESTARTS   AGE
demo-nginx-79d8fd4b47-4bfmj   1/1     Running   0          2d5h
[root@k8s-master01 ~]# kubectl delete pods demo-nginx-79d8fd4b47-4bfmj --force --grace-period=0 -n dev
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "demo-nginx-79d8fd4b47-4bfmj" force deleted
[root@k8s-master01 ~]# kubectl get po -n dev
NAME                          READY   STATUS              RESTARTS   AGE
demo-nginx-79d8fd4b47-z2p2r   0/1     ContainerCreating   0          6s
[root@k8s-master01 ~]# 
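
Finally, objects created from manifest files, such as the deployment-demoapp.yaml and service-demoapp.yaml examples earlier, can be removed with the same files, which is the file-based form of delete listed in the command table:

kubectl delete -f deployment-demoapp.yaml -f service-demoapp.yaml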