1. Sign a client certificate with the cluster CA

Check that the cluster CA certificates exist:

# ll /etc/kubernetes/pki/
total 48K
-rw-r----- 1 kube root 2.1K Mar  2 16:44 apiserver.crt
-rw------- 1 kube root 1.7K Mar  2 16:44 apiserver.key
-rw-r----- 1 kube root 1.2K Mar  2 16:44 apiserver-kubelet-client.crt
-rw------- 1 kube root 1.7K Mar  2 16:44 apiserver-kubelet-client.key
-rw-r----- 1 kube root 1.1K Mar  2 16:44 ca.crt
-rw------- 1 kube root 1.7K Mar  2 16:44 ca.key
-rw-r----- 1 kube root 1.1K Mar  2 16:44 front-proxy-ca.crt
-rw------- 1 kube root 1.7K Mar  2 16:44 front-proxy-ca.key
-rw-r----- 1 kube root 1.1K Mar  2 16:44 front-proxy-client.crt
-rw------- 1 kube root 1.7K Mar  2 16:44 front-proxy-client.key
-rw------- 1 kube root 1.7K Mar  2 16:44 sa.key
-rw-r----- 1 kube root  451 Mar  2 16:44 sa.pub
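
Before signing anything with this CA, it can be worth a quick sanity check that ca.crt is still valid and that ca.key is really its private key. A minimal sketch, assuming openssl is installed on the node:

# Show the CA subject and validity window
openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -subject -dates
# Confirm ca.key matches ca.crt by comparing the public-key moduli (the two hashes should be identical)
openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -modulus | md5sum
openssl rsa  -in /etc/kubernetes/pki/ca.key -noout -modulus | md5sum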

2. Install the cfssl tools

# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
# chmod +x cfssl*
# mv cfssl_linux-amd64 /usr/bin/cfssl
# mv cfssljson_linux-amd64 /usr/bin/cfssljson
# mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

# ll /usr/bin/cfs*
-rwxrwxrwx 1 root root 9.9M May  9 14:53 /usr/bin/cfssl
-rwxrwxrwx 1 root root 6.3M May  9 14:54 /usr/bin/cfssl-certinfo
-rwxrwxrwx 1 root root 2.2M May  9 14:53 /usr/bin/cfssljson
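
A quick way to confirm the binaries are on PATH and executable (a small extra check, not part of the original steps):

# Print the cfssl build version; any output confirms the install works
cfssl version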

3. Write cert.sh

# cat cert.sh 
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF
 
cat > sk-csr.json <<EOF
{
  "CN": "sk",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=/etc/kubernetes/pki/ca.crt -ca-key=/etc/kubernetes/pki/ca.key -config=ca-config.json -profile=kubernetes sk-csr.json | cfssljson -bare sk

The API Server takes the client certificate's CN field as the User and the names.O field as the Group.

The user to be created here is named sk.

Kubernetes reads these two fields when checking authorization.

        When the kubelet authenticates via TLS Bootstrapping, the API Server can validate the token using Bootstrap Tokens or a token authentication file; either way, Kubernetes binds a default User and Group to the token. When a Pod authenticates with a ServiceAccount, the JWT inside the service-account-token carries the User information. Once the user identity is available, creating a Role/RoleBinding (or ClusterRole/ClusterRoleBinding) pair completes the permission binding.

Run the cert.sh script

# sudo ./cert.sh 
2023/05/30 15:52:54 [INFO] generate received request
2023/05/30 15:52:54 [INFO] received CSR
2023/05/30 15:52:54 [INFO] generating key: rsa-2048
2023/05/30 15:52:54 [INFO] encoded CSR
2023/05/30 15:52:54 [INFO] signed certificate with serial number 528018676919261691291627255415154576375819761670
2023/05/30 15:52:54 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
# ll
total 24K
-rw-r----- 1 root     root      292 May 30 15:52 ca-config.json
-rwxr-x--- 1 nmyunwei nmyunwei  724 May 10 16:43 cert.sh
-rw-r----- 1 root     root      989 May 30 15:52 sk.csr
-rw-r----- 1 root     root      215 May 30 15:52 sk-csr.json
-rw------- 1 root     root     1.7K May 30 15:52 sk-key.pem
-rw-r----- 1 root     root     1.3K May 30 15:52 sk.pem

The files above make up the client certificate; if there are multiple users, generate one set per user.

sk-key.pem is the private key, analogous to the .key file used when configuring HTTPS for nginx.

sk.pem is the certificate, analogous to the .crt file used when configuring HTTPS for nginx.

Note that the Kubernetes root CA must be specified here; with a kubeadm deployment the root CA lives in /etc/kubernetes/pki/ by default.

# ll /etc/kubernetes/pki/
total 12K
-rw-r----- 1 kube root 1.1K Mar  2 16:44 ca.crt
-rw------- 1 kube root 1.7K May  9 15:11 ca.key
-rw-r----- 1 root root   41 May  9 17:14 ca.srl

        The steps above generate the CA signing configuration ca-config.json and then the certificate signing request for the user being issued a client certificate.

        Finally, the cfssl tool is run against these files to generate the client certificate. Every user's client certificate is produced the same way; the only thing that differs is the user name in the CN field.
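
To double-check which User and Group the API Server will see, the subject of the signed certificate can be inspected (a small optional check; either tool works if installed):

# Print the certificate details as JSON; common_name should be "sk" and organization "k8s"
cfssl-certinfo -cert sk.pem
# Or with openssl: the subject line should contain CN=sk and O=k8s
openssl x509 -in sk.pem -noout -subject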

 

Generate the kubeconfig authorization file

1. Generate the sk.kubeconfig configuration file

sk is the user name.

# cat sconfig.sh 
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.crt \
  --embed-certs=true \
  --server=https://10.221.221.221:8443 \
  --kubeconfig=sk.kubeconfig

The server address is the VIP ip:port of the master nodes in the Kubernetes cluster.
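
If you are unsure of the address, it can usually be read out of the kubeadm admin kubeconfig (a sketch, assuming the default /etc/kubernetes/admin.conf path):

# Print the apiserver URL recorded in the admin kubeconfig
kubectl config view --kubeconfig=/etc/kubernetes/admin.conf -o jsonpath='{.clusters[0].cluster.server}'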

# ./sconfig.sh 
Cluster "kubernetes" set.

# cat sk.kubeconfig 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURBRENDQWVpZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQ0FYRFRJek1ETXdNakE0TkRBek0xb1lEekl4TWpNd01qQTJNRGcwTURNeldqQVZNUk13RVFZRApWUVFERXdwcmRXSmxjbTVsZEdWek1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxMDBGV3huOVdacXY1bElhei9GcXpGZ1V6ZG1sYUthUnZpb2s1eVkKRFkyK0VjaU0yNmFCU3ZucFo4NmREcjB3bUNBZThKNkpYKzBKSmJqZUhMOE9GR2ppQmdIWlR0M2RmZlNNVkM5cgpVRkZqR3M1TnB6Qm1uNEZ5Z3lOSXVRcmNHMmVid3NBSm1nWXJVamV4Tkl5T1dzZnhNU3dJZkhsT3p2SmxnOVRCCkdWaTE5RFF2K0NkanFoek8wNmMrRG4xaWFsZ2JpNU5YK0kyekkvMmowQkUrdUhaTFJFZHNSUXE2dVFpeHVySXgKZ0t6R2h3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://10.221.221.221:8443
  name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

  This step records the information about the cluster to be accessed.

set-cluster defines the cluster to access, here named kubernetes; this is only a name, and the real target is the apiserver that --server points to.

--certificate-authority sets the cluster's CA certificate

--embed-certs=true embeds the --certificate-authority certificate directly into the kubeconfig

--server is the cluster's kube-apiserver address and must be edited to match the target cluster

The generated kubeconfig is saved to the sk.kubeconfig file; it can also be reviewed with sensitive data redacted, as shown below.
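
A convenient way to review the file without dumping the embedded base64 blobs (a minor aside, not part of the original steps):

# kubectl redacts certificate and key material in this view (shown as DATA+OMITTED / REDACTED)
kubectl config view --kubeconfig=sk.kubeconfig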

2. Set the user (credential) parameters

# cat userconfig.sh 
kubectl config set-credentials sk \
  --client-key=sk-key.pem \
  --client-certificate=sk.pem \
  --embed-certs=true \
  --kubeconfig=sk.kubeconfig

# ./userconfig.sh 
User "sk" set.

View the sk.kubeconfig file again:

# cat sk.kubeconfig 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURBRENDQWVpZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQ0FYRFRJek1ETXdNakE0TkRBek0xb1lEekl4TWpNd01qQTJNRGcwTURNeldqQVZNUk13RVFZRApWUVFERXdwcmRXSmxjbTVsZEdWek1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBCnlBV3JjNnphREdQN1hZS1h0anB0WWVKSVd3SVdsUzEwbENwYjB2RkIyK3NEUUVPeENPMWFLMkRwc1VMZVRKRzIKU1pqTGxxYkxkek5RMWhNeWN0cjltdUMrY2I0bW52Skh4RDJ3Y1ZKUDh1bld4eTRVWTJZVzhxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxvSXVoaytDSGhlZEFNClNiL3lZTEJRVnNIcGFuWGdKdWpoWm9vUjVxMDBGV3huOVdacXY1bElhei9GcXpGZ1V6ZG1sYUthUnZpb2s1eVkKRFkyK0VjaU0yNmFCUxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxlNNVkM5cgpVRkZqR3M1TnB6Qm1uNEZ5Z3lOSXVRcmNHMmVid3NBSm1nWXJVamV4Tkl5T1dzZnhNU3dJZkhsT3p2SmxnOVRCCkdWaTE5RFF2K0NkanFoek8wNmMrRG4xaWFsZ2JpNU5YK0kyekkvMmowQkUrdUhaTFJFZHNSUXE2dVFpeHVySXgKZ0t6R2h3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://10.221.221.221:8443
  name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users:
- name: sk
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURmekNDQW1lZ0F3SUJBZ0lVWEgwdEpaNDlqV3pScENXdDdsK1Q0L3JqZUFZd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0ZURVRNQkVHQTFVRUF4TUthM1ZpWlhKdVpYUmxjekFlRncweU16QTFNekF3TnpRNE1EQmFGdzB6TXpBMQpNamN3TnpRNE1EQmFNRjB4Q3pBSkJnTlZCQVlUQWtOT01SQXdEZ1lEVlFRSUV3ZENaV2xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxVjZPdEtoVwpzODNGclE4czAwTEFMYWJwRmk1YlVNcTVzK2Q1VVZvaG1qQXgvU0YzdFA0SitUNW9sRUNTcWE1MVZ6bmhtZHZ6ClgwMW5TR0QvMnV5QnRVZ0tCMVVLRGNpQ1Bxa0dzSUE4NkZWODU3RERCbEcyQm80PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBMHhsNkVwQStTekZycGFTNFlPdktxREY4YVJ6eTUvaDRZYTNVWlZkYzRoR012RG1iCkE1ZDdTWkhjYjJaYTY3OFcxbW5GVlV2Z01xT1U4Zm92dDREaHZFeDdrSC9pdnNlRk5sc3ZUSlYwaU53NnptV2QKcEttaGZSTVo5Nm9hajNBZG1zTlkyS2xhdnFwc1puZGQzbHorZnZUTGM3eEh5UmNVSVpUNm9YRmNPZGpRRGJyawpHVlVjZ0N2eUt1ZlpybXk0emNDbGlxMzlWK3VveWprL0tsa3J6OGdOV0tmN3Q5dnlJRnF3TTFMK0tkNFBrNDFPCjd6azd4NlF5SitzTXMrMjBNZ0lETkhtUEIwK1lMTEIxRVVHTUI2RFc4RmpQUDIrK09udGJkUEpKRnVKenNmT0cKWTlKdHE3TzRoOEJndnJVN1pYSlBxWUtXaDM4cjFZZVhlVk16andJREFRQUJBbxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx2NjQzZAp2c0xUa0E1eGdFYUh0dHd1a0hZUTI4M2dtYWp0Q2tzVEJMek14ZEdQNWlXenl6RDNsQXFLYTlDcFVtV1BvMmVRCjJjU0hBb0dCQUxtaFRkVWRNaWZmSG9MaFo2NFZ2N2RGUEpyUy9GbzY5Nk01aDN6Y3YwUXZEL1RxWGlTcEJCNXMKSWN5ZDJDd0wwWjZVUHk4bDJxV0lXaTlibTFSUjdPNkJRNmdQQlJQdTU0S1I2S3VGQVZjK2tpQ1VhWXpYYzJ0NApXS3pXQisxS3hoZGNySFIwTmtUeUYvTDM5WGJiZ3BGc3BRR2ZXSVBZN1U2L3YvM0xKc0VJCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

  This step records the user's information, chiefly the user certificate.

Here the user name is sk, the certificate is sk.pem, and the private key is sk-key.pem.

Note that the client certificate must first be signed by the cluster CA, otherwise the cluster will not accept it.

Certificate (CA-based) authentication is used here; token authentication can also be used.

For example, the kubelet's TLS Bootstrap mechanism uses token authentication during bootstrapping.

Since kubectl above uses certificate authentication, no token field is needed.
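
For comparison, a token-based credential would be registered roughly like this (a sketch; <token> is a placeholder for a real bootstrap or static token, not a value from this article):

# Register a token credential instead of a client certificate
kubectl config set-credentials sk --token=<token> --kubeconfig=sk.kubeconfig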

3. Set the context parameters

# cat context.sh 
 kubectl config set-context kubernetes --cluster=kubernetes --user=sk --kubeconfig=sk.kubeconfig

# ./context.sh 
Context "kubernetes" created.

View the sk.kubeconfig file again:

# cat sk.kubeconfig 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURBRENDQWVpZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQ0FYRFRJek1ETXdNakE0TkRBek0xb1lEekl4TWpNd01qQTJNRGcwTURNeldqQVZNUk13RVFZRApWUVFERXdwcmRXSmxjbTVsZEdWek1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx1V6ZG1sYUthUnZpb2s1eVkKRFkyK0VjaU0yNmFCU3ZucFo4NmREcjB3bUNBZThKNkpYKzBKSmJqZUhMOE9GR2ppQmdIWlR0M2RmZlNNVkM5cgpVRkZqR3M1TnB6Qm1uNEZ5Z3lOSXVRcmNHMmVid3NBSm1nWXJVamV4Tkl5T1dzZnhNU3dJZkhsT3p2SmxnOVRCCkdWaTE5RFF2K0NkanFoek8wNmMrRG4xaWFsZ2JpNU5YK0kyekkvMmowQkUrdUhaTFJFZHNSUXE2dVFpeHVySXgKZ0t6R2h3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://10.221.236.248:8443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: sk
  name: kubernetes
current-context: ""
kind: Config
preferences: {}
users:
- name: sk
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURmekNDQW1lZ0F3SUJBZ0lVWEgwdEpaNDlqV3pScENXdDdsK1Q0L3JqZUFZd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0ZURVRNQkVHQTFVRUF4TUthM1ZpWlhKdVpYUmxjekFlRncweU16xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx1hIaXM4OEZLc1Z2cnhXdTNkdzJqR2lrNUczVjZPdEtoVwpzODNGclE4czAwTEFMYWJwRmk1YlVNcTVzK2Q1VVZvaG1qQXgvU0YzdFA0SitUNW9sRUNTcWE1MVZ6bmhtZHZ6ClgwMW5TR0QvMnV5QnRVZ0tCMVVLRGNpQ1Bxa0dzSUE4NkZWODU3RERCbEcyQm80PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBMHhsNkVwQStTekZycGFTNFlPdktxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxwTmtUeUYvTDM5WGJiZ3BGc3BRR2ZXSVBZN1U2L3YvM0xKc0VJCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

  Multiple cluster entries and user entries can be defined side by side; the context entry is what associates a cluster entry with a user entry.

The context above is named kubernetes, with cluster kubernetes and user sk, meaning the sk user's credentials are used to access the default namespace of the kubernetes cluster; --namespace can be added to choose a different namespace.

Finally, kubectl config use-context kubernetes makes the context named kubernetes the active configuration.

If multiple contexts are configured, switching between context names gives access to different cluster environments, as sketched below.
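
A minimal sketch of working with more than one context (the context name kubernetes-nmzh and its default namespace are illustrative, not from the original steps):

# Add a second context that defaults to the ns-nmzh namespace (hypothetical name)
kubectl config set-context kubernetes-nmzh --cluster=kubernetes --user=sk --namespace=ns-nmzh --kubeconfig=sk.kubeconfig
# List all contexts and show which one is current
kubectl config get-contexts --kubeconfig=sk.kubeconfig
# Switch to the namespaced context
kubectl config use-context kubernetes-nmzh --kubeconfig=sk.kubeconfig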

4. Add a Linux user

# useradd -d /data/sk -m sk

# passwd sk
Changing password for user sk.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

5. Set the current context

#  kubectl config use-context kubernetes --kubeconfig=sk.kubeconfig
Switched to context "kubernetes".

6. Create a Role and bind it

# cat rbac.yaml 
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: ns-nmzh  # the namespace the user is granted access to
  name: sk
rules:            # authorization rules
- apiGroups: [""]   # "" is the core API group; '*' would match all API groups
  resources: ["pods","pods/exec","pods/log"]
  verbs: ["get","watch","list","create","update","patch"]  # all listed verbs granted; delete deliberately omitted

---

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: sk
  namespace: ns-nmzh # the namespace the binding applies to
subjects:
- kind: User
  name: sk  # the kubeconfig user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: sk
  apiGroup: rbac.authorization.k8s.io
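
The same Role and RoleBinding can also be created imperatively; a hedged equivalent of the manifest above (worth verifying first with --dry-run=client -o yaml):

# Namespaced role granting everything except delete on pods and the exec/log subresources
kubectl create role sk --verb=get,watch,list,create,update,patch --resource=pods,pods/exec,pods/log -n ns-nmzh
# Bind the role to the certificate user "sk"
kubectl create rolebinding sk --role=sk --user=sk -n ns-nmzh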

Verify the role binding

# su - sk
Last failed login: Tue May  9 15:40:55 CST 2023 on pts/0
There were 3 failed login attempts since the last successful login.

# kubectl get po -A
The connection to the server localhost:8080 was refused - did you specify the right host or port?

# exit
logout

# kubectl apply -f rbac.yaml 
role.rbac.authorization.k8s.io/sk unchanged
rolebinding.rbac.authorization.k8s.io/sk unchanged
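
Before switching back to the sk user, the grants can be dry-checked from the admin side with kubectl auth can-i and user impersonation (an extra check, not part of the original steps):

# Expect "yes": listing pods in ns-nmzh is allowed by the Role
kubectl auth can-i list pods -n ns-nmzh --as sk
# Expect "no": delete was deliberately left out of the verbs
kubectl auth can-i delete pods -n ns-nmzh --as sk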

# su - sk
Last login: Tue May 30 16:27:23 CST 2023 on pts/0

# kubectl  get po -A
The connection to the server localhost:8080 was refused - did you specify the right host or port?

# kubectl  get po -n ns-nmzh
The connection to the server localhost:8080 was refused - did you specify the right host or port?
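
The localhost:8080 errors above are expected: the sk user has no kubeconfig yet, so kubectl falls back to its default local address. Until the file is put in place in the next step, the kubeconfig can be passed explicitly (a sketch; adjust the path to wherever sk.kubeconfig actually sits):

# Point kubectl at the generated kubeconfig directly
kubectl get po -n ns-nmzh --kubeconfig=/path/to/sk.kubeconfig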

7. Set up the user's .kube directory

Create a .kube directory under the sk user's home directory, copy the sk.kubeconfig generated above into .kube, and rename it to config.

# run the following as root, or as a regular user with sudo privileges
# mkdir .kube

# cp sk.kubeconfig  /data/sk/.kube/
# cd .kube/
# ll
total 8.0K
-rw------- 1 root root 5.7K May 30 16:34 sk.kubeconfig

# mv /data/sk/.kube/sk.kubeconfig  /data/sk/.kube/config
# ll /data/sk/.kube/
total 8.0K
-rw------- 1 root root 5.7K May 30 16:34 config

# chown -R sk:sk /data/sk/.kube/config 
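
Instead of renaming the file to the default ~/.kube/config path, the KUBECONFIG environment variable can point at it; a minimal alternative sketch for the sk user's shell profile:

# Make kubectl use the copied kubeconfig without renaming it (alternative to ~/.kube/config)
export KUBECONFIG=/data/sk/.kube/sk.kubeconfig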

Run commands to verify

$ kubectl  get po -A
Error from server (Forbidden): pods is forbidden: User "sk" cannot list resource "pods" in API group "" at the cluster scope

# kubectl  get po -n ns-nmzh
NAME                                                    READY   STATUS    RESTARTS       AGE
deploy-abtest-559c7b469b-2cvd7                          1/1     Running   0              13d
deploy-harbor90test-85c8c54f47-qtc92                    1/1     Running   11 (16h ago)   13d
deploy-testc-56684fbbf7-5bbcd                           1/1     Running   0              12d
deploy-testv622-5cc555ff97-ccn7q                        1/1     Running   0              28h
deploy-tomcat-test-9638b3ce-8ffc0854-7785598bdf-fplvw   1/1     Running   7 (2d4h ago)   20d
deploy-ttttttt-422f8813-cb55fa7b-6c99888d84-mlttq       1/1     Running   14 (31h ago)   20d
deploy-zhzy-web3-57dc554566-9m5r5                       1/1     Running   0              14d
sts-jjfredis1-0                                         1/1     Running   0              13d
sts-jjfredis1-1                                         1/1     Running   0              13d

# kubectl delete po deploy-harbor90test-85c8c54f47-qtc92 -n ns-nmzh
Error from server (Forbidden): pods "deploy-harbor90test-85c8c54f47-qtc92" is forbidden: User "sk" cannot delete resource "pods" in API group "" in the namespace "ns-nmzh"
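
The sk user can also enumerate exactly what it is allowed to do in the namespace (an optional extra check; the output lists resources and the permitted verbs):

# Show everything the current user may do in ns-nmzh
kubectl auth can-i --list -n ns-nmzh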

# ll .kube/
total 8.0K
drwxr-x--- 4 sk sk   35 May 30 16:39 cache
-rw------- 1 sk sk 5.7K May 30 16:34 config

For more on the underlying principle, see the CSDN blog post "Kubernetes RBAC 为指定用户授权访问不同命名空间权限" by 富士康质检员张全蛋.
