k8s Security 04 -- kube-apiserver Security Configuration
1 Introduction
The earlier posts k8s安全02--云安全工具与安全运行时 and k8s安全03--云安全工具 kube-bench & OPA introduced common cloud security tools. This article continues that series and covers the apiserver-related security configuration in k8s.
2 Security configuration
2.1 Configuring insecure-port
In a k8s cluster, the api-server's insecure local port can be disabled by passing --insecure-port=0 in the api-server startup command. In the example below the flag is commented out of the manifest, and kube-bench then reports check 1.2.19 as FAIL.
docs/concepts/security/controlling-access
root@kmaster:/home/xg# sed -e '/insecure-port/s/^/#/g' -i /etc/kubernetes/manifests/kube-apiserver.yaml
root@kmaster:/home/xg# cat /etc/kubernetes/manifests/kube-apiserver.yaml |grep inse
# - --insecure-port=0
root@kmaster:/home/xg# kube-bench
...
[PASS] 1.2.15 Ensure that the admission control plugin NamespaceLifecycle is set (Scored)
[FAIL] 1.2.16 Ensure that the admission control plugin PodSecurityPolicy is set (Scored)
[PASS] 1.2.17 Ensure that the admission control plugin NodeRestriction is set (Scored)
[PASS] 1.2.18 Ensure that the --insecure-bind-address argument is not set (Scored)
[FAIL] 1.2.19 Ensure that the --insecure-port argument is set to 0 (Scored)
[PASS] 1.2.20 Ensure that the --secure-port argument is not set to 0 (Scored)
...
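To make the check pass again, the flag can simply be restored; a minimal sketch, assuming the line was only commented out as above:
root@kmaster:/home/xg# sed -e '/insecure-port/s/^#//' -i /etc/kubernetes/manifests/kube-apiserver.yaml
root@kmaster:/home/xg# grep insecure-port /etc/kubernetes/manifests/kube-apiserver.yaml
    - --insecure-port=0
root@kmaster:/home/xg# kube-bench    # check 1.2.19 should now report PASS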
2.2 RBAC
Create a Role and bind it to a user with a RoleBinding.
Using RBAC Authorization
1. Create a Role
$ vim limitonerole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: prod-a
  name: limitone
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "delete"]
Apply the Role and verify:
$ kubectl auth reconcile -f limitonerole.yaml
$ kubectl -n prod-a get role
2. Create a RoleBinding
$ vim limitonebind.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: manage-pods
  namespace: prod-a
subjects:
- kind: User
  name: paul
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: limitone
  apiGroup: rbac.authorization.k8s.io
$ kubectl auth reconcile -f limitonebind.yaml
$ kubectl -n prod-a get rolebindings.rbac.authorization.k8s.io
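The binding can be verified without switching kubeconfig contexts by letting kubectl auth can-i impersonate paul; the expected answers below are a sketch based on the verbs granted by limitone:
$ kubectl -n prod-a auth can-i get pods --as paul       # expected: yes
$ kubectl -n prod-a auth can-i delete pods --as paul    # expected: yes
$ kubectl -n prod-a auth can-i create pods --as paul    # expected: no, create is not among the role's verbs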
2.3 Service Accounts
Create a ServiceAccount and set it in the pod spec.
Configure Service Accounts for Pods
1. Create a pod
$ vim simplepod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: simple-pod
  namespace: prod-b
spec:
  containers:
  - name: simple
    image: nginx
$ kubectl apply -f simplepod.yaml
2. With only the default ServiceAccount, pods in both namespaces cannot access pod information. Check the token mount, then exec into simple-pod (kubectl -n prod-b exec -it simple-pod -- bash) and query the apiserver:
$ kubectl -n prod-b get pod simple-pod -oyaml|grep serviceaccount
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
root@simple-pod:/# curl https://kubernetes.default:443/api/v1 --insecure
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/api/v1\"",
"reason": "Forbidden",
"details": {
},
"code": 403
}
# export TOKEN=$(cat /run/secrets/kubernetes.io/serviceaccount/token)
# curl -H "Authorization: Bearer $TOKEN" https://kubernetes.default:443/api/v1 --insecure
{
"kind": "APIResourceList",
"groupVersion": "v1",
"resources": [
{
"name": "bindings",
"singularName": "",
"namespaced": true,
"kind": "Binding",
"verbs": [
"create"
]
},
......
{
"name": "services/status",
"singularName": "",
"namespaced": true,
"kind": "Service",
"verbs": [
"get",
"patch",
"update"
]
}
]
}
# curl -H "Authorization: Bearer $TOKEN" https://kubernetes.default:443/api/v1/namespaces/default/pods/ --insecure
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "pods is forbidden: User \"system:serviceaccount:prod-b:default\" cannot list resource \"pods\" in API group \"\" in the namespace \"default\"",
"reason": "Forbidden",
"details": {
"kind": "pods"
},
"code": 403
}
3. View the default ServiceAccount of the prod-b namespace:
$ kubectl -n prod-b get sa default -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: "2021-11-11T13:37:49Z"
name: default
namespace: prod-b
resourceVersion: "488820"
uid: 942370fb-f114-4ad5-a41c-402de7e9ea08
secrets:
- name: default-token-4w7mh
4. Configure a ServiceAccount with pod permissions in one namespace, and keep the other namespace unchanged
$ vim prodbSA.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: simple-sa
  namespace: prod-b
secrets:
$ kubectl create -f prodbSA.yaml
serviceaccount/simple-sa created
$ vim SArole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: prod-b
  name: sa-role
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
$ kubectl create -f SArole.yaml
role.rbac.authorization.k8s.io/sa-role created
$ vim SArolebind.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sa-role-bind
  namespace: prod-b
subjects:
- kind: ServiceAccount
  name: simple-sa
  namespace: prod-b
roleRef:
  kind: Role
  name: sa-role
  apiGroup: rbac.authorization.k8s.io
$ kubectl create -f SArolebind.yaml
rolebinding.rbac.authorization.k8s.io/sa-role-bind created
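Before recreating the pod, the grant can be checked from the control node by impersonating the service account; a quick sketch:
$ kubectl -n prod-b auth can-i list pods --as system:serviceaccount:prod-b:simple-sa     # expected: yes
$ kubectl -n default auth can-i list pods --as system:serviceaccount:prod-b:simple-sa    # expected: no, the Role is scoped to prod-b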
5. The pod configured with the new ServiceAccount can now access pod information
Add the newly created ServiceAccount simple-sa to the pod spec:
$ vim simplepod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: simple-pod
  namespace: prod-b
spec:
  serviceAccountName: simple-sa
  containers:
  - name: simple
    image: nginx
$ kubectl delete -f simplepod.yaml
pod "simple-pod" deleted
$ kubectl create -f simplepod.yaml
pod/simple-pod created
$ kubectl -n prod-b exec -it simple-pod -- bash
# export TOKEN=$(cat /run/secrets/kubernetes.io/serviceaccount/token)
# curl -H "Authorization: Bearer $TOKEN" https://192.168.2.11:6443/api/v1/namespaces/default/pods/ --insecure
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "pods is forbidden: User \"system:serviceaccount:prod-b:default\" cannot list resource \"pods\" in API group \"\" in the namespace \"default\"",
"reason": "Forbidden",
"details": {
"kind": "pods"
},
"code": 403
}
# curl -H "Authorization: Bearer $TOKEN" https://192.168.2.11:6443/api/v1/namespaces/prod-b/pods/ --insecure
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {
"resourceVersion": "490779"
},
"items": [
{
"metadata": {
"name": "simple-pod",
"namespace": "prod-b",
"uid": "32b61feb-c3d6-48d7-9d84-ce3445d691ce",
"resourceVersion": "490723",
......
"ready": true,
"restartCount": 0,
"image": "nginx:latest",
"imageID": "docker-pullable://nginx@sha256:dfef797ddddfc01645503cef9036369f03ae920cac82d344d58b637ee861fda1",
"containerID": "docker://467ebc051442f7eb319a2da968f8340d019a2cdf9d794966d89320af055ef5e0",
"started": true
}
],
"qosClass": "BestEffort"
}
}
]
}
2.4 Researching Pod Security Policies
docs/concepts/policy/pod-security-policy/
PodSecurityPolicy: different PodSecurityPolicies can be assigned to different users and groups.
Restrict pods to specified hostPath directories
1. Which policy would I use to limit pods using hostPath to a directory, such as /data and all subdirectories?
$ vim psp01.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: allowed-host-paths
spec:
  fsGroup:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  allowedHostPaths:
  - pathPrefix: "/home"
    readOnly: true
$ kubectl -n prod-a apply -f psp01.yaml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/allowed-host-paths created
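For reference, a hypothetical pod that this policy is meant to constrain; under allowed-host-paths a hostPath volume is only admitted if its path falls under the /home prefix and the mount is read-only (names below are made up for illustration):
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo            # hypothetical name
spec:
  containers:
  - name: demo
    image: nginx
    volumeMounts:
    - name: host-vol
      mountPath: /data
      readOnly: true             # must be read-only to satisfy readOnly: true in the policy
  volumes:
  - name: host-vol
    hostPath:
      path: /home/xg             # under the allowed /home prefix; a host path like /data would be rejected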
Control pod networking
man-pages/man7/capabilities.7.html
2. Which policy and what YAML stanza with CAP_xx would be used if you want to allow a pod to fully control the node's networking?
$ vim psp02.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: allowed-capabilities
spec:
  allowedCapabilities:
  - NET_ADMIN          # Kubernetes capability names omit the CAP_ prefix
......
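The pod then has to request the capability explicitly in its securityContext; a sketch of the corresponding pod-side stanza (name and image are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: netadmin-demo            # hypothetical name
spec:
  containers:
  - name: demo
    image: nginx
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]       # requires the PSP above to allow NET_ADMIN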
Configure allowedUnsafeSysctls
If a developer requires known unsafe sysctls, such as what high-performance computing may require, what yaml would you need to put into the pod spec to allow it?
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: allowed-unsafe-sysctls
spec:
  allowedUnsafeSysctls:
  - kernel.msg*
......
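On the pod side the sysctl is requested through the pod-level securityContext; note that unsafe sysctls must also be whitelisted on the kubelet via --allowed-unsafe-sysctls. A sketch with a hypothetical value:
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo              # hypothetical name
spec:
  securityContext:
    sysctls:
    - name: kernel.msgmax        # matches the kernel.msg* pattern allowed by the policy
      value: "65536"
  containers:
  - name: demo
    image: nginx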
2.5 Enable Pod Security Policies
1. Create a PodSecurityPolicy
vim nopriv.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: no-priv
spec:
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - '*'
$ kubectl apply -f nopriv.yaml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/no-priv created
2. Create an ordinary pod; it is created successfully because the admission plugin is not enabled yet
$ kubectl create deployment busybox --image=busybox:1.32 -- sleep infinity
3. Enable the PodSecurityPolicy admission plugin so that the policy takes effect
vim /etc/kubernetes/manifests/kube-apiserver.yaml
...
- --enable-admission-plugins=NodeRestriction,PodSecurityPolicy
...
4. Create busybox2 and check the error message; the pod is blocked by the PodSecurityPolicy
$ kubectl create deployment busybox2 --image=busybox:1.32 -- sleep infinity
$ kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
busybox 1/1 1 1 13m
busybox2 0/1 0 0 11s
$ kubectl describe rs busybox2-648559c4d4
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 8s (x14 over 49s) replicaset-controller Error creating: pods "busybox2-648559c4d4-" is forbidden: PodSecurityPolicy: unable to admit pod: []
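The empty list [] in the message means no PodSecurityPolicy could be used for this pod: no-priv requires MustRunAsNonRoot while busybox runs as root by default, and in addition the service account creating the pod must be granted RBAC permission to use a policy. A minimal sketch of that grant, allowing all service accounts to use no-priv (resource and binding names here are illustrative):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-no-priv-user          # hypothetical name
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["no-priv"]
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp-no-priv-all-sa        # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-no-priv-user
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts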
5. After removing the PodSecurityPolicy admission plugin and deleting the psp, pods are created normally again
Remove PodSecurityPolicy from /etc/kubernetes/manifests/kube-apiserver.yaml:
- --enable-admission-plugins=NodeRestriction
$ kubectl delete psp no-priv
$ kubectl create deployment busybox2 --image=busybox:1.32 -- sleep infinity
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
busybox-6b8867777d-gs4qj 1/1 Running 0 34m
busybox2-648559c4d4-7lcpb 1/1 Running 0 5s
2.6 Enabling API Server Auditing
# vim /etc/kubernetes/simple-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
# vim /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.2.11:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    ...
    - --audit-log-maxage=7
    - --audit-log-maxbackup=2
    - --audit-log-maxsize=50
    - --audit-log-path=/var/log/audit.log
    - --audit-policy-file=/etc/kubernetes/simple-policy.yaml
    ...
    image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.4
    ...
    volumeMounts:
    - mountPath: /etc/kubernetes/simple-policy.yaml
      name: audit
      readOnly: true
    - mountPath: /var/log/audit.log
      name: audit-log
      readOnly: false
    ...
  hostNetwork: true
  priorityClassName: system-node-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/simple-policy.yaml
      type: File
    name: audit
  - hostPath:
      path: /var/log/audit.log
      type: FileOrCreate
    name: audit-log
  ...
status: {}
After the update, the api-server restarts normally:
root@kmaster:/home/xg# docker ps |grep apiserver
797c8636f865 cef7457710b1 "kube-apiserver --ad…" 38 seconds ago Up 38 seconds k8s_kube-apiserver_kube-apiserver-kmaster_kube-system_0e9626cb6a8df5c71073e401abc687eb_0
14f4304feef2 registry.aliyuncs.com/google_containers/pause:3.4.1 "/pause" 38 seconds ago Up 38 seconds k8s_POD_kube-apiserver-kmaster_kube-system_0e9626cb6a8df5c71073e401abc687eb_0
Audit logs are now continuously written:
root@kmaster:/home/xg# tail -n 3 /var/log/audit.log
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"6c71e531-c823-4c86-b0b1-0c8d20f77400","stage":"ResponseComplete","requestURI":"/apis/crd.projectcalico.org/v1/clusterinformations/default","verb":"get","user":{"username":"system:serviceaccount:kube-system:calico-kube-controllers","uid":"1346c03c-7aa9-48d1-b6eb-ca5356f9e75d","groups":["system:serviceaccounts","system:serviceaccounts:kube-system","system:authenticated"],"extra":{"authentication.kubernetes.io/pod-name":["calico-kube-controllers-58497c65d5-rlz7x"],"authentication.kubernetes.io/pod-uid":["63c1d7ca-0426-40db-9161-aa46b3ebddf3"]}},"sourceIPs":["10.224.189.2"],"userAgent":"Go-http-client/2.0","objectRef":{"resource":"clusterinformations","name":"default","apiGroup":"crd.projectcalico.org","apiVersion":"v1"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2021-11-09T00:02:41.779246Z","stageTimestamp":"2021-11-09T00:02:41.793716Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"calico-kube-controllers\" of ClusterRole \"calico-kube-controllers\" to ServiceAccount \"calico-kube-controllers/kube-system\""}}
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"f080f4c4-49cf-458b-902b-05409fbf6206","stage":"RequestReceived","requestURI":"/healthz?timeout=32s","verb":"get","user":{"username":"system:serviceaccount:kube-system:calico-kube-controllers","uid":"1346c03c-7aa9-48d1-b6eb-ca5356f9e75d","groups":["system:serviceaccounts","system:serviceaccounts:kube-system","system:authenticated"],"extra":{"authentication.kubernetes.io/pod-name":["calico-kube-controllers-58497c65d5-rlz7x"],"authentication.kubernetes.io/pod-uid":["63c1d7ca-0426-40db-9161-aa46b3ebddf3"]}},"sourceIPs":["10.224.189.2"],"userAgent":"kube-controllers/v0.0.0 (linux/amd64) kubernetes/$Format","requestReceivedTimestamp":"2021-11-09T00:02:41.794590Z","stageTimestamp":"2021-11-09T00:02:41.794590Z"}
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"f080f4c4-49cf-458b-902b-05409fbf6206","stage":"ResponseComplete","requestURI":"/healthz?timeout=32s","verb":"get","user":{"username":"system:serviceaccount:kube-system:calico-kube-controllers","uid":"1346c03c-7aa9-48d1-b6eb-ca5356f9e75d","groups":["system:serviceaccounts","system:serviceaccounts:kube-system","system:authenticated"],"extra":{"authentication.kubernetes.io/pod-name":["calico-kube-controllers-58497c65d5-rlz7x"],"authentication.kubernetes.io/pod-uid":["63c1d7ca-0426-40db-9161-aa46b3ebddf3"]}},"sourceIPs":["10.224.189.2"],"userAgent":"kube-controllers/v0.0.0 (linux/amd64) kubernetes/$Format","responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2021-11-09T00:02:41.794590Z","stageTimestamp":"2021-11-09T00:02:41.795889Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"system:discovery\" of ClusterRole \"system:discovery\" to Group \"system:authenticated\""}}
ls shows that a large volume of logs is produced quickly, so log rotation is needed in practice; here auditing is simply turned off for now.
Check the audit log size:
root@kmaster:/home/xg# ls -lh /var/log/audit.log
-rw-r--r-- 1 root root 7.9M Nov 9 00:04 /var/log/audit.log
Disable auditing:
# vim /etc/kubernetes/manifests/kube-apiserver.yaml
# - --audit-policy-file=/etc/kubernetes/simple-policy.yaml # comment out this line
Checking again, no new audit logs are generated (the file size stops growing):
root@kmaster:/home/xg# ls -lh /var/log/audit.log
-rw-r--r-- 1 root root 7.9M Nov 9 00:04 /var/log/audit.log
root@kmaster:/home/xg# ls -lh /var/log/audit.log
-rw-r--r-- 1 root root 13M Nov 9 00:07 /var/log/audit.log
root@kmaster:/home/xg# ls -lh /var/log/audit.log
-rw-r--r-- 1 root root 13M Nov 9 00:07 /var/log/audit.log
A more elaborate audit policy:
# vim /etc/kubernetes/moderate-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
  - "RequestReceived"
rules:
  - level: RequestResponse
    resources:
    - group: ""
      resources: ["pods"]
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]
  - level: Metadata
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*"
    - "/version"
  - level: Request
    resources:
    - group: ""
      resources: ["configmaps"]
    namespaces: ["kube-system"]
  - level: Metadata
    resources:
    - group: ""
      resources: ["secrets", "configmaps"]
    omitStages:
      - "RequestReceived"
# vim /etc/kubernetes/manifests/kube-apiserver.yaml
...
  volumes:
  - hostPath:
      path: /etc/kubernetes/moderate-policy.yaml   # was simple-policy.yaml
      type: File
    name: audit
2.7 Encrypting Secrets
1. Create a secret
$ kubectl create secret generic first -n default --from-literal=firstkey=first
secret/first created
2. View the secret
$ kubectl get secrets first -oyaml
apiVersion: v1
data:
firstkey: Zmlyc3Q=
kind: Secret
metadata:
creationTimestamp: "2021-11-09T23:50:29Z"
...
3. Decode the value with base64 (base64 is only an encoding, not encryption)
$ echo "Zmlyc3Q="| base64 --decode
first
4. View the secret in etcd
4.1 Get the etcd-related configuration
# grep etcd /etc/kubernetes/manifests/kube-apiserver.yaml
- --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379
4.2 Inspect the data in etcd
# docker cp 56f5b:/usr/local/bin/etcdctl /usr/bin    # copy etcdctl out of the etcd container
# ETCDCTL_API=3 etcdctl --endpoints https://192.168.2.11:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
get /registry/secrets/default/first
Output:
k8s
v1Secret�
�
first�default"*$1245c445-cec6-4fc9-b774-24a399af74522Œ��z�c
kubectl-createUpdate�vŒ��FieldsV1:1
/{"f:data":{".":{},"f:firstkey":{}},"f:type":{}}
firstkeyfirst�Opaque�"
5. Generate a random 32-byte base64 key
$ head -c 32 /dev/urandom | base64
GRLTglCQMgwNoq0Or5OjuC2rSrt6j4P7hgDY5sGCuJY=
6. Create the EncryptionConfiguration yaml
vim encryptionconfig.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
name: newsetup
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: firstkey
        secret: GRLTglCQMgwNoq0Or5OjuC2rSrt6j4P7hgDY5sGCuJY=
  - identity: {}
7. Copy the config into place
cp encryptionconfig.yaml /etc/kubernetes/pki/
8. Add the --encryption-provider-config flag to kube-apiserver
vim /etc/kubernetes/manifests/kube-apiserver.yaml
...
- --feature-gates=RemoveSelfLink=false
- --encryption-provider-config=/etc/kubernetes/pki/encryptionconfig.yaml # add this
image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.4
...
9. Create a new secret
$ kubectl create secret generic second -n default --from-literal=secondkey=second
secret/second created
10. View second in etcd
# ETCDCTL_API=3 etcdctl --endpoints https://192.168.2.11:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
get /registry/secrets/default/second
/registry/secrets/default/second
k8s:enc:aescbc:v1:firstkey:cK+ˣP�
<�6'��6�m�1'E��[i�B�GGg�$(��n�+�n9�ԭ>����s�߫��j��G��z����"�=
�zē���� �AVO�6��d'"�Ш����=0X�5TU�K�
QA3�.��'7��GU�龭ցQۮ� ���>���{�Q��
� �',6FH�H�_�K��������I�-�:����Y�G�r�J�y���_;�g�qh�jTS44R<���w ��
11. Replace all existing secrets so that first is also re-written to etcd and encrypted
$ kubectl get secrets --all-namespaces -o json | kubectl replace -f -
12. View first
# ETCDCTL_API=3 etcdctl --endpoints https://192.168.2.11:2379 \
...
get /registry/secrets/default/first
/registry/secrets/default/first
k8s:enc:aescbc:v1:firstkey:r�yg��JX���ި����r�Y�w�p�������қ
�]8���X��8O�:}�o���6�,��_�����D�������x
�DG���\Ć��|�w�"[w{�Ni���?44��.�˔E�*��a�� ��9z<=9H�zC�4�����CɎ�f-�k
z;ݺ����/�A9k�ptЙ����U���V���bHܙlǍ�g �1�
�~�
As shown, first has now been encrypted as well.
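The apiserver still decrypts transparently on read, which makes for an easy sanity check after enabling encryption:
$ kubectl -n default get secret first -o jsonpath='{.data.firstkey}' | base64 --decode    # expected output: first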
3 Notes
- At the time of writing, k8s has reached version 1.22.xx. The experiments in this article are based on version 1.22.1; if your version does not match, it is best to upgrade to 1.22.xx. Below is how the author upgraded from 1.21 to 1.22:
kmaster:
# apt-mark unhold kubeadm kubectl kubelet
# apt-get update && apt-get install -y kubeadm=1.22.1-00 kubelet=1.22.1-00 kubectl=1.22.1-00
# kubeadm upgrade apply v1.22.1
knode01:
# apt-mark unhold kubeadm kubectl kubelet
# kubeadm upgrade node
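After kubeadm upgrade, the node packages are typically upgraded and the kubelet restarted as well, then the versions verified; a rough sketch of the remaining steps:
knode01:
# apt-get update && apt-get install -y kubelet=1.22.1-00 kubectl=1.22.1-00
# systemctl daemon-reload && systemctl restart kubelet
kmaster:
# kubectl get nodes    # both nodes should report VERSION v1.22.1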