
Authentication, Authorization, and Admission Control

The apiserver is the entry point for access control and for cluster management.
Once a container's service is exposed to the outside, application traffic enters through the node instead.

   The API Server is the gateway of Kubernetes and the single entry point for managing
resource objects; every cluster component must go through this gateway to access and
manage resources. Each request is checked for legitimacy, including authentication,
authorization, and operation validation; only after passing this series of checks can a
request access the API or persist data into etcd.

Any client request goes through: API server ---> authentication ---> authorization ---> admission control

Token -- bearer token -- a pre-shared key

For HTTPS communication, TLS authentication lets the client confirm the server's identity: the CA signature on the server certificate must match the CA the client trusts. Likewise the client's identity must match its own certificate, giving mutual (two-way) authentication.

RBAC
=============================
kubectl carries a client key/certificate by default (from its kubeconfig)

Client ---> API server
    user: username, uid
    group:
    extra:

    API
    Request path:
        https://ip:port/apis/apps/v1/namespaces/default/deployments/myapp-deploy/
    HTTP request verbs:
        get, post, put, delete
    API request verbs:
        get, list, create, update, patch, watch, proxy, redirect, delete, deletecollection
    Resource:
    Subresource:
    Namespace:
    API group:
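Putting these attributes together, the shape of a request path can be sketched as follows (the group, namespace, resource, and object names below are illustrative examples, not taken from a real cluster):

```shell
# Compose an apiserver request path from its attribute parts (names are examples)
GROUP=apps; VERSION=v1; NAMESPACE=default; RESOURCE=deployments; NAME=myapp-deploy
path="/apis/${GROUP}/${VERSION}/namespaces/${NAMESPACE}/${RESOURCE}/${NAME}"
echo "${path}"   # /apis/apps/v1/namespaces/default/deployments/myapp-deploy
```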

[root@master ~]# kubectl proxy --port=8080
Starting to serve on 127.0.0.1:8080
curl http://localhost:8080/api/v1/namespaces
(returns data in JSON format)
curl http://localhost:8080/apis/apps/v1/namespaces/kube-system/deployments
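As a rough sketch (assuming the `kubectl proxy` above is still listening on port 8080), the namespace names can be pulled out of that JSON response with ordinary text tools; grep/sed are used here only to stay dependency-free:

```shell
# List only the "name" fields from the proxied API response
curl -s http://localhost:8080/api/v1/namespaces \
  | grep '"name"' | sed 's/.*"name": "\([^"]*\)".*/\1/'
```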

Summary

The API server is the gateway through which every access request enters. During a request, authentication establishes identity, authorization checks permissions, and admission control further supplements the authorization check.

Two kinds of clients talk to the API server:
1. Clients coming from addresses outside the cluster
2. Pods inside the cluster

Human clients authenticate as user accounts.
Pod clients: every pod may need to talk to the apiserver. Kubernetes mounts a dedicated volume into pods for this purpose; by default every pod carries one such service-account token volume.
==================================
ServiceAccount
    A service account exists so that processes inside a Pod can conveniently call the
Kubernetes API or other external services. It differs from a user account:

    A user account is for humans, while a service account is for processes in Pods that call the Kubernetes API;
    A user account is cluster-wide (crosses namespaces), while a service account is limited to the namespace it lives in;
    Every namespace automatically gets a default service account;
    The token controller watches for service account creation and creates a secret for each one;
    With the ServiceAccount admission controller enabled:
    every Pod gets spec.serviceAccountName set to default after creation (unless another ServiceAccount is specified);
    the service account a Pod references is verified to exist, otherwise creation is rejected;
    if the Pod does not specify imagePullSecrets, the service account's imagePullSecrets are added to the Pod;
    after each container starts, the service account's token and ca.crt are mounted at /var/run/secrets/kubernetes.io/serviceaccount/

    When a Pod is created without specifying a service account, the system automatically
assigns it the default service account in the same namespace. The account a Pod uses to
communicate with the apiserver is set in the spec.serviceAccountName field, as shown below:


[root@master ~]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
myapp-0                         1/1     Running   0          23h
myapp-1                         1/1     Running   0          23h
myapp-2                         1/1     Running   0          23h
myapp-3                         1/1     Running   0          23h
nginx-7849c4bbcd-dscjr          1/1     Running   0          17d
nginx-7849c4bbcd-vdd45          1/1     Running   0          17d
nginx-7849c4bbcd-wrvks          1/1     Running   0          17d
nginx-deploy-84cbfc56b6-scrnt   1/1     Running   0          24h

[root@master ~]# kubectl get pods/myapp-0 -o yaml |grep "serviceAccountName"
  serviceAccountName: default
[root@master ~]# kubectl describe pods myapp-0
......
Volumes:
  myappdata:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myappdata-myapp-0
    ReadOnly:   false
  default-token-6q28w:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-6q28w
    Optional:    false
......

    From the output above we can see that every Pod, whether it declares one or not, has a
volume named default-token-***; this token is the Pod's service-account credential. Since
credentials are sensitive, they are stored in a secret resource and mounted into the Pod as
a volume, so that the application running in the Pod can use the information in that secret
to connect to the apiserver and complete authentication. Every namespace has a default
service account named default; listing the secrets of a namespace likewise shows the
corresponding default-token. It is the pre-provisioned credential that all pods in the
namespace may use when connecting to the apiserver.
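The mounted credential can be used directly. The following minimal sketch (it must run *inside* a pod on a live cluster; the namespace `default` is just an example) calls the apiserver with the token and CA certificate that the service-account volume provides:

```shell
# Authenticate to the apiserver from inside a pod using the mounted
# service-account credentials (run this inside a running pod)
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN=$(cat "${SA_DIR}/token")
curl -s --cacert "${SA_DIR}/ca.crt" \
     -H "Authorization: Bearer ${TOKEN}" \
     https://kubernetes.default.svc/api/v1/namespaces/default/pods
```

With the default service account this typically authenticates successfully but still returns a Forbidden error, which again illustrates that authentication alone is not authorization.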

[root@master ~]# kubectl get sa
NAME      SECRETS   AGE
default   1         19d
[root@master ~]# kubectl get sa -n ingress-nginx
NAME                           SECRETS   AGE
default                        1         7d
nginx-ingress-serviceaccount   1         7d
[root@master ~]# kubectl get secret
NAME                    TYPE                                  DATA   AGE
default-token-6q28w     kubernetes.io/service-account-token   3      19d
mysecret                Opaque                                2      2d
mysecret-1              Opaque                                2      2d
mysecret2               Opaque                                2      2d
tomcat-ingress-secret   kubernetes.io/tls                     2      5d23h
[root@master ~]# kubectl get secret -n ingress-nginx
NAME                                       TYPE                                  DATA   AGE
default-token-gbqpv                        kubernetes.io/service-account-token   3      7d
nginx-ingress-serviceaccount-token-qs2x5   kubernetes.io/service-account-token   3      7d
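To peek at the actual token held in one of these service-account secrets, the base64-encoded `token` field can be decoded; the secret name below comes from the listing above and will differ on your cluster:

```shell
# Print the decoded service-account token (name taken from the listing above)
kubectl get secret default-token-6q28w -o jsonpath='{.data.token}' | base64 -d
```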

    The default service account can only read the Pod's own attributes; it cannot observe
the attributes of Pods in other namespaces. If a Pod needs to go further, say, a Pod that
manages other Pods or other resource objects, the namespace's default serviceaccount cannot
retrieve that information, so a serviceaccount must be created manually and referenced when
the Pod is created. How is a serviceaccount defined? A service account is itself a
Kubernetes resource; its definition can be inspected as follows:

[root@master ~]# kubectl explain sa
KIND:     ServiceAccount
VERSION:  v1

DESCRIPTION:
     ServiceAccount binds together: * a name, understood by users, and perhaps
     by peripheral systems, for an identity * a principal that can be
     authenticated and authorized * a set of secrets

FIELDS:
   apiVersion    <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#resources

   automountServiceAccountToken    <boolean>
     AutomountServiceAccountToken indicates whether pods running as this service
     account should have an API token automatically mounted. Can be overridden
     at the pod level.

   imagePullSecrets    <[]Object>
     ImagePullSecrets is a list of references to secrets in the same namespace
     to use for pulling any images in pods that reference this ServiceAccount.
     ImagePullSecrets are distinct from Secrets because Secrets can be mounted
     in the pod, but ImagePullSecrets are only accessed by the kubelet. More
     info:
     https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod

   kind    <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds

   metadata    <Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata

   secrets    <[]Object>
     Secrets is the list of secrets allowed to be used by pods running using
     this ServiceAccount. More info:
     https://kubernetes.io/docs/concepts/configuration/secret
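A hypothetical manifest tying the fields above together (the account name and the registry secret are made up for illustration; they are not part of the demo cluster):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-bot                      # hypothetical name
  namespace: default
automountServiceAccountToken: false    # pods using this SA get no API token mounted
imagePullSecrets:
- name: my-registry-secret             # assumed to already exist in this namespace
```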
     
A quick way to create a service account:

kubectl create serviceaccount mysa -o yaml --dry-run
Notes:
-o yaml prints the generated object as YAML
--dry-run simulates the request without creating anything

[root@master ~]# kubectl create serviceaccount mysa -o yaml --dry-run
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: null
  name: mysa
[root@master ~]# kubectl create serviceaccount mysa -o yaml --dry-run > serviceaccount.yaml
[root@master ~]# kubectl apply -f serviceaccount.yaml
serviceaccount/mysa created
[root@master ~]# kubectl get serviceaccount/mysa -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"creationTimestamp":null,"name":"mysa","namespace":"default"}}
  creationTimestamp: "2019-03-19T15:24:08Z"
  name: mysa
  namespace: default
  resourceVersion: "2416688"
  selfLink: /api/v1/namespaces/default/serviceaccounts/mysa
  uid: 0a41de0e-4a5b-11e9-bca0-a0369f95b76e
secrets:
- name: mysa-token-hb6lq

Generating YAML from a running Pod:
kubectl get pods myapp-0 -o yaml --export

    A token has been created automatically and is referenced by the service account. To use
a non-default service account, simply set the pod's spec.serviceAccountName field to the
name of the service account you want. The service account must already exist when the pod
is created, otherwise creation is rejected. Note that the service account of an
already-created pod cannot be updated.

Custom use of a serviceaccount
Here a service account named admin is created in the default namespace; a token, admin-token-68nnz, is generated automatically:
[root@master ~]# kubectl create serviceaccount admin
serviceaccount/admin created
A secret is generated automatically; it is the credential the service account uses to authenticate to the apiserver.
But authentication does not equal authorization: the account must still be granted permissions.
[root@master ~]# kubectl get sa
NAME      SECRETS   AGE
admin     1         5s
default   1         19d
mysa      1         3m27s
[root@master ~]#  kubectl describe sa/admin
Name:                admin
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   admin-token-68nnz
Tokens:              admin-token-68nnz
Events:              <none>
[root@master ~]# kubectl get secret
NAME                    TYPE                                  DATA   AGE
admin-token-68nnz       kubernetes.io/service-account-token   3      37s
default-token-6q28w     kubernetes.io/service-account-token   3      19d
mysa-token-hb6lq        kubernetes.io/service-account-token   3      3m59s
mysecret                Opaque                                2      2d
mysecret-1              Opaque                                2      2d
mysecret2               Opaque                                2      2d
tomcat-ingress-secret   kubernetes.io/tls                     2      5d23h

Referencing the new serviceaccount in a Pod
[root@master manifests]# vi pod-sa-demo.yaml           
apiVersion: v1
kind: Pod
metadata:
  name: pod-sa-demo
  namespace: default
  labels:
    app: laolang
    tier: frontend
spec:
  containers:
  - name: laolang
    image: nginx
    ports:
    - name: http
      containerPort: 80
  serviceAccountName: admin            # <-- the custom service account is specified here
[root@master manifests]# kubectl apply -f pod-sa-demo.yaml 
pod/pod-sa-demo created
[root@master manifests]# kubectl describe pods pod-sa-demo
......
Volumes:
  admin-token-68nnz:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  admin-token-68nnz
    Optional:    false               <-- token volume from the admin service account
......

============================
In a Kubernetes cluster, every user's access to resources goes through the apiserver and
must be authenticated. The credential can be a token, or the authentication information can
be stored and used through a configuration file, which can be inspected with kubectl
config, as follows:
[root@master manifests]# kubectl config view
apiVersion: v1
clusters:      # list of clusters
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.249.6.100:6443
  name: kubernetes
contexts:     # list of contexts
- context:    # defines which user accesses which cluster
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes   # the context currently in use
kind: Config
preferences: {}
users:        # list of users
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    
    The configuration above defines clusters, contexts, and users. Config is itself one of
the standard Kubernetes resources. The file defines a list of clusters (there can be
several); a list of users naming the users of those clusters (also possibly several); and a
list of contexts, each stating which user accesses which cluster, along with the context
currently in use. For example, user kubernetes-admin is given access to the cluster named
kubernetes, and a hypothetical user kubernetes-user1 could likewise be bound to a cluster
cluster1.

Demo: accessing the apiserver with a self-built certificate and account
(1) Generate a private key
cd /etc/kubernetes/pki
[root@master pki]# (umask 077; openssl genrsa -out wolf.key 2048)
Generating RSA private key, 2048 bit long modulus
.........+++
.........................+++
e is 65537 (0x10001)
[root@master pki]# ll
total 64
-rw-r--r--. 1 root root 1216 Feb 28 06:23 apiserver.crt
-rw-r--r--. 1 root root 1090 Feb 28 06:23 apiserver-etcd-client.crt
-rw-------. 1 root root 1679 Feb 28 06:23 apiserver-etcd-client.key
-rw-------. 1 root root 1675 Feb 28 06:23 apiserver.key
-rw-r--r--. 1 root root 1099 Feb 28 06:23 apiserver-kubelet-client.crt
-rw-------. 1 root root 1679 Feb 28 06:23 apiserver-kubelet-client.key
-rw-r--r--. 1 root root 1025 Feb 28 06:23 ca.crt
-rw-------. 1 root root 1675 Feb 28 06:23 ca.key
drwxr-xr-x. 2 root root  162 Feb 28 06:23 etcd
-rw-r--r--. 1 root root 1038 Feb 28 06:23 front-proxy-ca.crt
-rw-------. 1 root root 1675 Feb 28 06:23 front-proxy-ca.key
-rw-r--r--. 1 root root 1058 Feb 28 06:23 front-proxy-client.crt
-rw-------. 1 root root 1679 Feb 28 06:23 front-proxy-client.key
-rw-------. 1 root root 1675 Mar 19 12:03 jesse.key
-rw-------. 1 root root 1675 Feb 28 06:23 sa.key
-rw-------. 1 root root  451 Feb 28 06:23 sa.pub
-rw-------. 1 root root 1679 Mar 19 12:04 wolf.key

(2) Sign the request with ca.crt
[root@master pki]# openssl req -new -key wolf.key -out wolf.csr -subj "/CN=wolf"    # certificate signing request
[root@master pki]# openssl x509 -req -in wolf.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -out wolf.crt -days 365
Signature ok
subject=/CN=wolf
Getting CA Private Key
[root@master pki]# openssl x509 -in wolf.crt -text -noout
Certificate:
    Data:
        Version: 1 (0x0)
        Serial Number:
            c4:af:0f:78:dc:e3:ba:11
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=kubernetes
        Validity
            Not Before: Mar 19 16:07:05 2019 GMT
            Not After : Mar 18 16:07:05 2020 GMT
        Subject: CN=wolf
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:d6:e6:eb:4e:b6:6b:eb:2e:e8:92:32:2c:9a:56:
                    c4:a1:d7:05:9a:a1:8d:dc:72:4d:6f:ae:20:ff:eb:
                    3a:4a:15:2c:e9:51:f8:f5:65:b8:9f:1e:03:c9:43:
                    51:03:a9:b3:c3:57:c2:cb:7d:26:f4:11:72:61:c6:
                    19:e5:ac:67:c5:d4:3e:3c:ab:41:d7:2b:b1:ca:bb:
                    6d:ec:b1:35:dc:f6:71:d2:b9:38:8a:c2:97:21:b9:
                    1a:88:3b:2d:4d:8f:57:88:80:bd:bf:59:f7:cb:7e:
                    7c:cb:fd:96:7b:ec:b5:f8:dc:f0:db:55:7c:52:54:
                    39:ad:61:65:f9:a1:f3:7c:ba:2b:9c:b9:55:f2:b0:
                    66:3d:2b:65:8a:f6:69:d1:79:c7:84:c6:7a:88:cc:
                    23:2c:3d:d8:47:3d:5b:33:ed:41:31:f3:01:0d:9d:
                    f1:5b:5a:6f:d8:bf:0d:0d:e6:71:a2:ce:5a:ed:4a:
                    5f:d4:f4:7e:55:33:dc:9c:f0:75:3e:10:bf:18:14:
                    e7:6a:e8:f7:27:2c:e8:12:17:56:19:33:8b:eb:c5:
                    3a:0e:0b:ab:b3:55:d0:60:57:f8:74:d1:be:a9:b3:
                    1e:25:85:c4:32:e0:96:1b:1f:ba:39:c4:76:bd:bb:
                    98:9a:65:27:b1:d2:68:33:ce:58:bd:74:5c:c3:07:
                    00:ab
                Exponent: 65537 (0x10001)
    Signature Algorithm: sha256WithRSAEncryption
         40:9d:b2:fc:d7:5c:4b:14:e0:6d:4c:ac:6d:4a:f7:18:8e:85:
         12:1f:cf:5b:ad:6f:e7:b2:55:85:70:63:08:d5:da:ba:86:11:
         cf:5e:2e:e5:ac:00:3f:e4:e2:9b:7b:82:a7:d4:ab:9b:0b:92:
         32:0e:4c:4d:6f:1c:52:bb:4a:3b:2d:30:57:28:dd:c1:d2:51:
         35:bc:46:43:a0:b6:7b:5a:84:9e:8b:7c:43:e0:ec:64:cb:5d:
         f2:63:df:ab:db:0a:0b:97:aa:b3:34:13:c3:85:81:cb:0e:d4:
         7b:19:4e:31:fc:a1:db:67:16:68:09:b4:a2:2f:d9:2e:a1:43:
         2e:1d:91:6e:48:30:e3:ed:0d:14:60:60:67:fd:c6:0e:d3:12:
         32:73:b8:de:29:87:02:84:5c:68:52:aa:4d:1d:1b:fe:fc:74:
         0c:40:66:54:97:fa:31:57:28:46:40:da:43:73:e6:50:a0:00:
         0d:70:13:14:e4:1c:17:18:0d:af:58:e0:a4:ae:15:be:7a:fb:
         ee:f9:62:46:88:b4:de:98:0e:ac:48:22:44:83:3f:4e:44:3c:
         83:e5:d5:3c:cb:4f:bb:02:a9:7a:35:15:7f:69:97:16:93:5b:
         be:ec:0a:4a:2a:39:bb:e4:50:a1:f4:23:72:5d:56:82:f6:9b:
         d9:55:11:55
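Before wiring the certificate into kubectl, it can be worth confirming that wolf.crt really chains to the cluster CA (run in /etc/kubernetes/pki, using the files generated above):

```shell
# Verify that wolf.crt was signed by the cluster CA (ca.crt)
openssl verify -CAfile ca.crt wolf.crt
# prints "wolf.crt: OK" on success
```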
(3) Add the credentials to the kubeconfig
[root@master pki]# kubectl config set-credentials wolf --client-certificate=./wolf.crt --client-key=./wolf.key --embed-certs=true
User "wolf" set.
[root@master pki]# kubectl config set-context wolf@kubernetes --cluster=kubernetes --user=wolf
Context "wolf@kubernetes" created.
[root@master pki]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.249.6.100:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
- context:
    cluster: kubernetes
    user: wolf      # the new user has taken effect
  name: wolf@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: wolf
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

[root@master pki]# kubectl config use-context wolf@kubernetes
Switched to context "wolf@kubernetes".
[root@master pki]# kubectl get pods
Error from server (Forbidden): pods is forbidden: User "wolf" cannot list resource "pods" in API group "" in the namespace "default"

As the demo above shows, after switching to the wolf user, that account has no permission to
manage the cluster, so fetching pod information returns Forbidden. Below we will look at how
to grant an account permissions. First switch back:
[root@master pki]# kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
[root@master pki]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
myapp-0                         1/1     Running   0          25h
myapp-1                         1/1     Running   0          25h
myapp-2                         1/1     Running   0          24h
myapp-3                         1/1     Running   0          24h
nginx-7849c4bbcd-dscjr          1/1     Running   0          17d
nginx-7849c4bbcd-vdd45          1/1     Running   0          17d
nginx-7849c4bbcd-wrvks          1/1     Running   0          17d
nginx-deploy-84cbfc56b6-scrnt   1/1     Running   0          25h
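As a preview of the authorization side, a minimal RBAC sketch that would let the wolf user read pods in the default namespace might look like this (the role and binding names are made up; this manifest is not applied in the demo above):

```yaml
# Hypothetical Role + RoleBinding granting user "wolf" read access to pods
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader            # hypothetical name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: wolf-pod-reader       # hypothetical name
  namespace: default
subjects:
- kind: User
  name: wolf
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```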

==================================
[root@master ~]# kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
coordination.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1

[root@master ~]# kubectl proxy --port=8080
Starting to serve on 127.0.0.1:8080
[root@master ~]# curl http://localhost:8080/api/v1/namespaces
{
  "kind": "NamespaceList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/namespaces",
    "resourceVersion": "2409540"
  },
  "items": [
    {
      "metadata": {
        "name": "default",
        "selfLink": "/api/v1/namespaces/default",
        "uid": "4d16f517-3b4b-11e9-a704-a0369f95b76e",
        "resourceVersion": "5",
        "creationTimestamp": "2019-02-28T11:23:41Z"
      },
      "spec": {
        "finalizers": [
          "kubernetes"
        ]
      },
      "status": {
        "phase": "Active"
      }
    },
    {
      "metadata": {
        "name": "ingress-nginx",
        "selfLink": "/api/v1/namespaces/ingress-nginx",
        "uid": "3d43bbee-44d6-11e9-a704-a0369f95b76e",
        "resourceVersion": "1481457",
        "creationTimestamp": "2019-03-12T14:50:55Z",
        "labels": {
          "app.kubernetes.io/name": "ingress-nginx",
          "app.kubernetes.io/part-of": "ingress-nginx"
        },
        "annotations": {
          "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"labels\":{\"app.kubernetes.io/name\":\"ingress-nginx\",\"app.kubernetes.io/part-of\":\"ingress-nginx\"},\"name\":\"ingress-nginx\"}}\n"
        }
      },
      "spec": {
        "finalizers": [
          "kubernetes"
        ]
      },
      "status": {
        "phase": "Active"
      }
    },
    {
      "metadata": {
        "name": "kube-public",
        "selfLink": "/api/v1/namespaces/kube-public",
        "uid": "4d19e864-3b4b-11e9-a704-a0369f95b76e",
        "resourceVersion": "14",
        "creationTimestamp": "2019-02-28T11:23:41Z"
      },
      "spec": {
        "finalizers": [
          "kubernetes"
        ]
      },
      "status": {
        "phase": "Active"
      }
    },
    {
      "metadata": {
        "name": "kube-system",
        "selfLink": "/api/v1/namespaces/kube-system",
        "uid": "4d1974d7-3b4b-11e9-a704-a0369f95b76e",
        "resourceVersion": "11",
        "creationTimestamp": "2019-02-28T11:23:41Z"
      },
      "spec": {
        "finalizers": [
          "kubernetes"
        ]
      },
      "status": {
        "phase": "Active"
      }
    }
  ]
}
[root@master ~]# curl http://localhost:8080/apis/apps/v1/namespaces/kube-system/deployments
{
  "kind": "DeploymentList",
  "apiVersion": "apps/v1",
  "metadata": {
    "selfLink": "/apis/apps/v1/namespaces/kube-system/deployments",
    "resourceVersion": "2409849"
  },
  "items": [
    {
      "metadata": {
        "name": "coredns",
        "namespace": "kube-system",
        "selfLink": "/apis/apps/v1/namespaces/kube-system/deployments/coredns",
        "uid": "4ff9b522-3b4b-11e9-a704-a0369f95b76e",
        "resourceVersion": "2231",
        "generation": 1,
        "creationTimestamp": "2019-02-28T11:23:46Z",
        "labels": {
          "k8s-app": "kube-dns"
        },
        "annotations": {
          "deployment.kubernetes.io/revision": "1"
        }
      },
      "spec": {
        "replicas": 2,
        "selector": {
          "matchLabels": {
            "k8s-app": "kube-dns"
          }
        },
        "template": {
          "metadata": {
            "creationTimestamp": null,
            "labels": {
              "k8s-app": "kube-dns"
            }
          },
          "spec": {
            "volumes": [
              {
                "name": "config-volume",
                "configMap": {
                  "name": "coredns",
                  "items": [
                    {
                      "key": "Corefile",
                      "path": "Corefile"
                    }
                  ],
                  "defaultMode": 420
                }
              }
            ],
            "containers": [
              {
                "name": "coredns",
                "image": "k8s.gcr.io/coredns:1.2.6",
                "args": [
                  "-conf",
                  "/etc/coredns/Corefile"
                ],
                "ports": [
                  {
                    "name": "dns",
                    "containerPort": 53,
                    "protocol": "UDP"
                  },
                  {
                    "name": "dns-tcp",
                    "containerPort": 53,
                    "protocol": "TCP"
                  },
                  {
                    "name": "metrics",
                    "containerPort": 9153,
                    "protocol": "TCP"
                  }
                ],
                "resources": {
                  "limits": {
                    "memory": "170Mi"
                  },
                  "requests": {
                    "cpu": "100m",
                    "memory": "70Mi"
                  }
                },
                "volumeMounts": [
                  {
                    "name": "config-volume",
                    "readOnly": true,
                    "mountPath": "/etc/coredns"
                  }
                ],
                "livenessProbe": {
                  "httpGet": {
                    "path": "/health",
                    "port": 8080,
                    "scheme": "HTTP"
                  },
                  "initialDelaySeconds": 60,
                  "timeoutSeconds": 5,
                  "periodSeconds": 10,
                  "successThreshold": 1,
                  "failureThreshold": 5
                },
                "terminationMessagePath": "/dev/termination-log",
                "terminationMessagePolicy": "File",
                "imagePullPolicy": "IfNotPresent",
                "securityContext": {
                  "capabilities": {
                    "add": [
                      "NET_BIND_SERVICE"
                    ],
                    "drop": [
                      "all"
                    ]
                  },
                  "readOnlyRootFilesystem": true,
                  "allowPrivilegeEscalation": false,
                  "procMount": "Default"
                }
              }
            ],
            "restartPolicy": "Always",
            "terminationGracePeriodSeconds": 30,
            "dnsPolicy": "Default",
            "serviceAccountName": "coredns",
            "serviceAccount": "coredns",
            "securityContext": {
              
            },
            "schedulerName": "default-scheduler",
            "tolerations": [
              {
                "key": "CriticalAddonsOnly",
                "operator": "Exists"
              },
              {
                "key": "node-role.kubernetes.io/master",
                "effect": "NoSchedule"
              }
            ]
          }
        },
        "strategy": {
          "type": "RollingUpdate",
          "rollingUpdate": {
            "maxUnavailable": 1,
            "maxSurge": "25%"
          }
        },
        "revisionHistoryLimit": 10,
        "progressDeadlineSeconds": 600
      },
      "status": {
        "observedGeneration": 1,
        "replicas": 2,
        "updatedReplicas": 2,
        "readyReplicas": 2,
        "availableReplicas": 2,
        "conditions": [
          {
            "type": "Available",
            "status": "True",
            "lastUpdateTime": "2019-02-28T11:41:03Z",
            "lastTransitionTime": "2019-02-28T11:41:03Z",
            "reason": "MinimumReplicasAvailable",
            "message": "Deployment has minimum availability."
          },
          {
            "type": "Progressing",
            "status": "True",
            "lastUpdateTime": "2019-02-28T11:41:03Z",
            "lastTransitionTime": "2019-02-28T11:41:03Z",
            "reason": "NewReplicaSetAvailable",
            "message": "ReplicaSet \"coredns-86c58d9df4\" has successfully progressed."
          }
        ]
      }
    }
  ]
}
[root@master ~]#
