3.14 Kubernetes API Security Mechanisms
I. Security Mechanisms
The API Server is the hub through which the components inside the cluster communicate, the core of Kubernetes, and the secure entry point for external requests, so the Kubernetes security model is designed around the API Server. Kubernetes protects the API Server in three steps: authentication, authorization, and admission control.
- Authentication: Kubernetes uses mutual (two-way) TLS authentication based on CA certificates, i.e. the client verifies the API Server and the API Server verifies the client.
- Authorization: authentication only establishes that the two parties trust each other and may communicate; authorization determines which resources a client may request and what it is allowed to do with them. By default Kubernetes uses RBAC (Role-Based Access Control).
- Admission control: admission control is a collection of API Server plugins; by enabling different plugins, additional admission rules can be enforced.
II. Authentication
- Clients perform mutual CA-based authentication with the Kubernetes API Server. A client can be a Kubernetes component such as kubectl, kube-proxy, or the Scheduler; components and the API Server always communicate over mutually authenticated connections.
- A client can also be a Pod. A Pod talks to the API Server through its ServiceAccount: unless a Pod explicitly specifies a ServiceAccount, it is bound to the default ServiceAccount of the namespace it runs in (abbreviated as sa). A sa consists of three parts: token, ca.crt, and namespace, so a sa has already been mutually authenticated with the API Server. The Pod still cannot talk to the API Server until it has also been authorized (a quick way to inspect these mounted files is sketched after this list).
- A client can also be an actual user or group. Users and groups can request a certificate from the cluster CA and perform mutual authentication with the API Server; like a sa, they then still need to be authorized.
- To summarize, clients include ServiceAccounts, users, and groups, and all of them are "users" in Kubernetes terms. A Kubernetes user is, by definition, an identity that has obtained a certificate from the cluster CA and authenticated with the API Server. ServiceAccounts are internal to Kubernetes and already authenticated with the API Server, so they only need to be authorized; a newly created user must first request a certificate from the CA.
Certificate signing request format:
{
  "CN": "xxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "shanghai",
      "L": "sh",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
Here CN is the user name; hosts lists the hosts allowed to use the certificate (if left empty, the CN user can request authentication from any host); algo is the key algorithm; C is the country, ST the state or province, L the city, O the group the user belongs to, and OU the organizational unit.
III. Authorization
By default Kubernetes authorizes clients with RBAC (Role-Based Access Control): once a user has authenticated with the API Server, the RBAC plugin decides what that user is allowed to do.
RBAC introduces four resource objects: Role, RoleBinding, ClusterRole, and ClusterRoleBinding. Role and ClusterRole each represent a set of permissions, i.e. the operations a holder of that role may perform in the cluster. A Role is namespace-scoped and is defined within a single namespace; to grant access across namespaces or to cluster-level resources, a ClusterRole is required.
RoleBinding and ClusterRoleBinding grant the permissions defined in a Role or ClusterRole to users (again, "user" means ServiceAccounts, users, and groups). A RoleBinding grants permissions within a namespace and is typically paired with a Role; a ClusterRoleBinding grants permissions cluster-wide and is paired with a ClusterRole. A RoleBinding may also reference a ClusterRole, in which case the ClusterRole's permissions apply only within the RoleBinding's namespace.
Differences between Role and ClusterRole:
- A ClusterRole can grant access to cluster-level resources and to resources in all namespaces; a Role cannot.
- A ClusterRole can grant access to non-resource endpoints such as /healthz (see the sketch after this list); a Role cannot.
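To illustrate the second point, a ClusterRole grants access to non-resource URLs through the nonResourceURLs field; a minimal sketch (the name healthz-reader is hypothetical):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: healthz-reader                            # hypothetical name
rules:
- nonResourceURLs: ["/healthz", "/healthz/*"]     # non-resource endpoints instead of resources
  verbs: ["get"]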
Verbs a user can be allowed to perform on resources (a quick way to test them is sketched after this list):
- get: read a single resource by name;
- watch: watch a resource or a collection of resources for changes;
- create: create a resource;
- update: replace an existing resource;
- patch: partially update an existing resource;
- delete: delete a resource;
- list: list a collection of resources.
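As a quick way to test which verbs an identity is allowed, kubectl auth can-i answers yes or no per verb and resource; a minimal sketch (run as a cluster administrator; the user xxy is the one used later in this section):
kubectl auth can-i get pods -n default                 # check the current identity
kubectl auth can-i delete secrets -n default
kubectl auth can-i list pods -n default --as=xxy       # impersonate another user to test a binding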
1. Define a Role that allows getting, watching, and listing Pods in the default namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-role
  namespace: default
rules:
- apiGroups: [""]                  # pods belong to the core API group, written as ""
  resources: ["pods"]              # always use the plural form of the resource
  verbs: ["get", "watch", "list"]  # operations allowed on the resource
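For reference, the same Role could also be created imperatively; a sketch, assuming kubectl is configured with sufficient privileges:
kubectl create role pod-role --verb=get,list,watch --resource=pods -n default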
2. Define a ClusterRole whose holders can get, watch, and list Secrets across the cluster
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: secret-clusterrole   # ClusterRoles are cluster-scoped, so no namespace is set
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
3. Bind pod-role to user xxy in the default namespace, so that xxy has the pod-role permissions in default
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-rolebinding
  namespace: default
subjects:
- kind: User
  name: xxy
  apiGroup: rbac.authorization.k8s.io
roleRef:
  name: pod-role
  kind: Role
  apiGroup: rbac.authorization.k8s.io
4. Besides a Role, a RoleBinding may also reference a ClusterRole to authorize users within the current namespace. This lets a cluster administrator define a set of common ClusterRoles once and reference them from RoleBindings in different namespaces.
For example, the RoleBinding below references a ClusterRole that grants access to Secrets across the whole cluster, but the bound user xxy can only access Secrets in the dev namespace, because the RoleBinding is defined in the dev namespace.
apiVersion: rbac.authorization.k8s.io/v1   # API group of RoleBinding
kind: RoleBinding
metadata:
  name: clusterrole-rolebinding
  namespace: dev                 # namespace the RoleBinding lives in
subjects:
- name: xxy                      # the user being authorized
  kind: User                     # subject type is User
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole              # grant xxy a ClusterRole
  name: secret-clusterrole       # the secret-clusterrole created above
  apiGroup: rbac.authorization.k8s.io
5. A ClusterRoleBinding grants access to resources in all namespaces of the cluster. For example, the binding below allows every user in the mygroup group to access Secrets in all namespaces.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: secret-clusterrole-rolebinding
subjects:
- name: mygroup
  kind: Group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: secret-clusterrole
  apiGroup: rbac.authorization.k8s.io
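For reference, an equivalent imperative command (a sketch):
kubectl create clusterrolebinding secret-clusterrole-rolebinding --clusterrole=secret-clusterrole --group=mygroup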
IV. Admission Control
Admission control is a collection of API Server plugins; by enabling different plugins, additional admission rules are applied to requests after they have been authenticated and authorized.
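Admission plugins are enabled through a kube-apiserver flag; a minimal sketch (the plugin list below is only an example, not the full set your cluster may need):
# In the kube-apiserver command line (e.g. the kubeadm static Pod manifest), plugins are enabled with:
kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota ...
# List the admission plugins your kube-apiserver build supports:
kube-apiserver -h | grep enable-admission-plugins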
V. Case 1: Authenticating and Authorizing a ServiceAccount
1. Create a ServiceAccount
First create a ServiceAccount named mysa:
kubectl create sa mysa   # sa is the short name for serviceaccount
Inspect it with kubectl describe sa mysa:
[root@k8s-master01 sa_work]# kubectl describe sa mysa
Name:                mysa
Namespace:           default            # mysa lives in the default namespace
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   mysa-token-x2xbr   # Pods using mysa mount this secret
Tokens:              mysa-token-x2xbr   # Pods using mysa mount this token
Events:              <none>
Inspect the secret referenced by mysa with kubectl describe secret mysa-token-x2xbr:
[root@k8s-master01 sa_work]# kubectl describe secret mysa-token-x2xbr
Name: mysa-token-x2xbr
Namespace: default
Labels: <none>
Annotations: kubernetes.io/service-account.name: mysa
kubernetes.io/service-account.uid: c28059ed-9177-43d8-8dea-8d22c08305e4
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 7 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Im15c2EtdG9rZW4teDJ4YnIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoibXlzYSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImMyODA1OWVkLTkxNzctNDNkOC04ZGVhLThkMjJjMDgzMDVlNCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0Om15c2EifQ.HkDMVe91XcfonWr9SkAz6cT2tE8h2r6Zc65s_KIqKfSeTdUNg9A-OkYfzRf_ajk2SDbKc3D4SHnQBa8UO_fEePZKbw8kaULzZx0JxRUH6irdqx_O6vGfcQkDeoufI0foLjHzVvpo4SS6-5MexDSQlqHs8SC4jJlCtfYcpMRnGLvdPQH4vcg5BfOSve5VYIB5v4zPGmRC7Acp-I8Lmep66S_E_0IuE-S3k_UfchTJPLx5VHZlUdpg2p9PzSeWadC-_wXKA3HAA1qAJ7EB94WT-QfnGhglHSs6z3O7vOuWK1PcM2a4pqObzjfVuMB4XGXM56vaxGQT_gzM31OsDUJP8A
2. Create a Pod that uses the ServiceAccount
Create a Pod from the following YAML, which mounts the mysa ServiceAccount created above:
apiVersion: v1
kind: Pod
metadata:
  name: sa-pod
spec:
  serviceAccountName: mysa        # use the mysa ServiceAccount
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
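After applying the manifest, you can confirm which ServiceAccount the Pod uses; a sketch, assuming the manifest was saved as sa-pod.yaml:
kubectl apply -f sa-pod.yaml
kubectl get pod sa-pod -o jsonpath='{.spec.serviceAccountName}'   # should print: mysa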
3. Use the ServiceAccount to authenticate to the Kubernetes API server and test authorization
Enter the sa-pod created above and access the Kubernetes API server from inside the Pod.
First open a shell inside the Pod:
kubectl exec sa-pod -it -- /bin/sh
Then use the mysa credentials inside sa-pod to verify the API server's certificate and send an authenticated request:
export CURL_CA_BUNDLE=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt   # trust the cluster CA when verifying the API server
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)             # token used to authenticate to the API server
curl -H "Authorization: Bearer $TOKEN" https://10.96.0.1:443                 # call the API server
The API server address and port can be found inside the container with the env command.
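A minimal sketch of building the API server URL from those variables instead of hard-coding the address (KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT are injected into every Pod):
env | grep KUBERNETES_SERVICE                                         # shows the API server host and port
APISERVER=https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT
curl -H "Authorization: Bearer $TOKEN" $APISERVER                     # same request as above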
The API server rejects the request with the following response:
# curl -H "Authorization: Bearer $TOKEN" https://10.96.0.1:443
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "forbidden: User \"system:serviceaccount:default:mysa\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {
},
"code": 403
}
4. Authorize the ServiceAccount with the RBAC plugin
First create a Role from the following manifest, pod_role.yaml:
apiVersion: rbac.authorization.k8s.io/v1   # API group and version of the Role resource
kind: Role
metadata:
  namespace: default       # the default namespace
  name: pod-role           # role name pod-role
rules:                     # rules of the role
- apiGroups: [""]          # "" grants access to the core API group
  resources: ["pods"]      # only the pods resource of the core group; note the plural form
  verbs: ["get", "list"]   # holders may only list Pods or get a specific Pod
Create the role with kubectl apply -f pod_role.yaml.
Then create a RoleBinding that binds the pod-role role to the mysa ServiceAccount created above:
kubectl create rolebinding pod-rolebinding --role=pod-role --serviceaccount=default:mysa -n default
After this authorization, rerun curl -H "Authorization: Bearer $TOKEN" https://10.96.0.1:443/api/v1/namespaces/default/pods inside sa-pod: the Pod is now allowed to list all Pods in the default namespace. It can also fetch a specific Pod, e.g. curl -H "Authorization: Bearer $TOKEN" https://10.96.0.1:443/api/v1/namespaces/default/pods/sa-pod, which corresponds to the get verb in the role.
# curl -H "Authorization: Bearer $TOKEN" https://10.96.0.1:443/api/v1/namespaces/default/pods
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {
"selfLink": "/api/v1/namespaces/default/pods",
"resourceVersion": "12477247"
},
"items": [
{
"metadata": {
"name": "myjob-qdx5v",
"generateName": "myjob-",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/pods/myjob-qdx5v",
"uid": "f975bbed-639e-450d-9b57-03bef0194ce1",
"resourceVersion": "12148287",
"creationTimestamp": "2020-10-20T15:47:20Z",
"labels": {
"controller-uid": "df6da4d9-3f48-4d7a-b03f-dc85fa8f7794",
"job-name": "myjob"
},
"ownerReferences": [
{
"apiVersion": "batch/v1",
"kind": "Job",
"name": "myjob",
"uid": "df6da4d9-3f48-4d7a-b03f-dc85fa8f7794",
"controller": true,
"blockOwnerDeletion": true
}
]
},
"spec": {
"volumes": [
{
"name": "default-token-sk5fk",
"secret": {
"secretName": "default-token-sk5fk",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "my-busybox",
"image": "busybox",
"command": [
"/bin/sh",
"-c",
"sleep 60"
],
"resources": {
},
"volumeMounts": [
{
"name": "default-token-sk5fk",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "IfNotPresent"
}
],
"restartPolicy": "OnFailure",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"serviceAccountName": "default",
"serviceAccount": "default",
"nodeName": "k8s-node02",
"securityContext": {
},
"schedulerName": "default-scheduler",
"tolerations": [
{
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
},
{
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
}
],
"priority": 0,
"enableServiceLinks": true
},
"status": {
"phase": "Succeeded",
"conditions": [
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2020-10-20T15:47:20Z",
"reason": "PodCompleted"
},
{
"type": "Ready",
"status": "False",
"lastProbeTime": null,
"lastTransitionTime": "2020-10-20T15:48:21Z",
"reason": "PodCompleted"
},
{
"type": "ContainersReady",
"status": "False",
"lastProbeTime": null,
"lastTransitionTime": "2020-10-20T15:48:21Z",
"reason": "PodCompleted"
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2020-10-20T15:47:20Z"
}
],
"hostIP": "192.168.137.20",
"podIP": "10.244.2.192",
"startTime": "2020-10-20T15:47:20Z",
"containerStatuses": [
{
"name": "my-busybox",
"state": {
"terminated": {
"exitCode": 0,
"reason": "Completed",
"startedAt": "2020-10-20T15:47:21Z",
"finishedAt": "2020-10-20T15:48:21Z",
"containerID": "docker://ee1f724861209f143519ce44443feb4e7909462f6ce6286d910e08b8db70bc6c"
}
},
"lastState": {
},
"ready": false,
"restartCount": 0,
"image": "busybox:latest",
"imageID": "docker-pullable://busybox@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209",
"containerID": "docker://ee1f724861209f143519ce44443feb4e7909462f6ce6286d910e08b8db70bc6c"
}
],
"qosClass": "BestEffort"
}
},
{
"metadata": {
"name": "sa-pod",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/pods/sa-pod",
"uid": "2d637faa-fc17-47ad-9dd0-be18133f646a",
"resourceVersion": "12422118",
"creationTimestamp": "2020-11-01T06:46:55Z",
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"name\":\"sa-pod\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"image\":\"nginx\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"nginx\"}],\"serviceAccountName\":\"mysa\"}}\n"
}
},
"spec": {
"volumes": [
{
"name": "mysa-token-x2xbr",
"secret": {
"secretName": "mysa-token-x2xbr",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "nginx",
"image": "nginx",
"resources": {
},
"volumeMounts": [
{
"name": "mysa-token-x2xbr",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "IfNotPresent"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"serviceAccountName": "mysa",
"serviceAccount": "mysa",
"nodeName": "k8s-node02",
"securityContext": {
},
"schedulerName": "default-scheduler",
"tolerations": [
{
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
},
{
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
}
],
"priority": 0,
"enableServiceLinks": true
},
"status": {
"phase": "Running",
"conditions": [
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2020-11-01T06:46:55Z"
},
{
"type": "Ready",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2020-11-01T06:46:57Z"
},
{
"type": "ContainersReady",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2020-11-01T06:46:57Z"
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2020-11-01T06:46:55Z"
}
],
"hostIP": "192.168.137.20",
"podIP": "10.244.2.193",
"startTime": "2020-11-01T06:46:55Z",
"containerStatuses": [
{
"name": "nginx",
"state": {
"running": {
"startedAt": "2020-11-01T06:46:56Z"
}
},
"lastState": {
},
"ready": true,
"restartCount": 0,
"image": "nginx:latest",
"imageID": "docker-pullable://nginx@sha256:21f32f6c08406306d822a0e6e8b7dc81f53f336570e852e25fbe1e3e3d0d0133",
"containerID": "docker://472dbd593f7b54ae13a39ebfc79df7744d03365afd82932c58a3595f5b3fd186"
}
],
"qosClass": "BestEffort"
}
}
]
}
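A quicker way to check the ServiceAccount's permissions from outside the Pod is kubectl auth can-i with impersonation; a sketch, run as a cluster administrator:
kubectl auth can-i list pods -n default --as=system:serviceaccount:default:mysa     # should answer yes
kubectl auth can-i delete pods -n default --as=system:serviceaccount:default:mysa   # should answer no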
VI. Case 2: Authenticating a User and Granting a ClusterRole
1. Create a Linux user
useradd xxy   # create the xxy user
passwd xxy    # set a password for xxy
Try to access Pod resources as the new user; it has no kubeconfig yet, so it cannot even reach the API server, let alone access resources:
[xxy@k8s-master01 ~]$ kubectl get pod
The connection to the server localhost:8080 was refused - did you specify the right host or port?
2. Create a certificate signing request
The new user still has no access to cluster resources. Switch back to root: before the user can be authorized it must be authenticated, and before that a certificate signing request must be created (the author's request file is /usr/local/install-k8s/cert/lzy/lzy-csr.json, for the user name lzy used throughout the rest of this case), in the following format:
{
  "CN": "lzy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "shanghai",
      "L": "sh",
      "O": "lzy",
      "OU": "System"
    }
  ]
}
3. Download the cfssl tools and generate the certificate
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
Then make the cfssl binaries executable:
chmod a+x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/bin/cfssl-certinfo
Change to the */etc/kubernetes/pki* directory and generate the certificate for lzy:
[root@k8s-master01 pki]# cfssl gencert -ca=ca.crt -ca-key=ca.key -profile=kubernetes /usr/local/install-k8s/cert/lzy/lzy-csr.json | cfssljson -bare lzy
2020/11/10 10:31:54 [INFO] generate received request
2020/11/10 10:31:54 [INFO] received CSR
2020/11/10 10:31:54 [INFO] generating key: rsa-2048
2020/11/10 10:31:55 [INFO] encoded CSR
2020/11/10 10:31:55 [INFO] signed certificate with serial number 58474307588155219092333351641295621985100131178
2020/11/10 10:31:55 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Running the command produces three files: lzy.csr, lzy-key.pem, and lzy.pem.
4. Configure kubeconfig parameters
Change to the */usr/local/install-k8s/cert/lzy* directory and set the cluster parameters, which generates the lzy.kubeconfig file:
export KUBE_APISERVER="https://192.168.137.100:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.crt \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=lzy.kubeconfig
Set the client credential parameters:
kubectl config set-credentials lzy \
--client-certificate=/etc/kubernetes/pki/lzy.pem \
--client-key=/etc/kubernetes/pki/lzy-key.pem \
--embed-certs=true \
--kubeconfig=lzy.kubeconfig
Set the context parameters.
First create the namespace the user will be allowed to access:
[root@k8s-master01 pki]# kubectl create namespace dev
namespace/dev created
kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=lzy \
--namespace=dev \
--kubeconfig=lzy.kubeconfig
With these parameters set, lzy.kubeconfig now contains the user and cluster credentials. Create a .kube directory in the lzy user's home directory, copy lzy.kubeconfig into it, and set its ownership:
mkdir -p /home/lzy/.kube
cp lzy.kubeconfig /home/lzy/.kube/
chown lzy:lzy /home/lzy/.kube/lzy.kubeconfig
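At this point lzy is authenticated but not yet authorized, which can be checked with the new kubeconfig; a sketch (the request is expected to be rejected with a Forbidden error until the binding in the next step exists):
kubectl --kubeconfig=lzy.kubeconfig get pods -n dev   # expected to fail with Forbidden before authorization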
5. Authorize the user
The lzy user and its credentials are now in place. Grant lzy the built-in admin ClusterRole, scoped to the dev namespace, so that lzy can do anything within dev:
[root@k8s-master01 xxy]# kubectl create rolebinding lzy-admin-binding2 --clusterrole=admin --user=lzy --namespace=dev
rolebinding.rbac.authorization.k8s.io/lzy-admin-binding2 created
Switch to the lzy user and activate the copied lzy.kubeconfig file:
mv lzy.kubeconfig config
chmod 777 config
kubectl config use-context kubernetes --kubeconfig=/home/lzy/.kube/config
6. Verify
As an administrator, create a Pod in the dev namespace from the following manifest:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: dev
spec:
  containers:
  - image: busybox
    imagePullPolicy: IfNotPresent
    name: main-container
    command: ['sh', '-c', 'sleep 30']
Then query as the lzy user; lzy now has permission to list Pods:
[lzy@k8s-master01 .kube]$ kubectl get pod
NAME READY STATUS RESTARTS AGE
mypod 1/1 Running 0 10s
VII. Case 3: Granting a Role to a User
The previous case authenticated the lzy user and granted it the admin role; this case grants an ordinary Role instead. The Role allows the user to read Secrets in the dev namespace.
1. First delete the admin RoleBinding created in the previous case
[root@k8s-master01 lzy]# kubectl delete rolebinding lzy-admin-binding2 -n dev
rolebinding.rbac.authorization.k8s.io "lzy-admin-binding2" deleted
2. Create a Role named secret-reader-role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader-role
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
[root@k8s-master01 rbac]# kubectl apply -f secret-role.yaml
role.rbac.authorization.k8s.io/secret-reader-role created
3. Create a RoleBinding named secret-reader-rolebinding that binds secret-reader-role to lzy
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-reader-rolebinding
  namespace: dev
subjects:
- name: lzy
  kind: User
  apiGroup: rbac.authorization.k8s.io
roleRef:
  name: secret-reader-role
  kind: Role
  apiGroup: rbac.authorization.k8s.io
[root@k8s-master01 rbac]# kubectl apply -f secret-reader-rolebinding.yaml
rolebinding.rbac.authorization.k8s.io/secret-reader-rolebinding created
4. Log in as the lzy user and list the Secrets
[lzy@k8s-master01 .kube]$ kubectl get secret
NAME TYPE DATA AGE
default-token-b8lk4 kubernetes.io/service-account-token 3 3d13h
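To make the check more concrete, an administrator can create a Secret in dev and the lzy user can then read, but not delete, it; a sketch (the Secret name test-secret and its contents are hypothetical):
# As an administrator:
kubectl create secret generic test-secret --from-literal=foo=bar -n dev
# As the lzy user:
kubectl get secret test-secret -n dev      # allowed: get/watch/list
kubectl delete secret test-secret -n dev   # expected to be denied: delete is not in the role's verbs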