Dynamic Provisioning (StorageClass) with RBD

Ceph Configuration

1. Install the rbd command on the kubelet nodes, and copy Ceph's ceph.client.admin.keyring and ceph.conf files to the /etc/ceph directory on the master
yum -y install ceph-common
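
The copy itself can be scripted; a minimal sketch, assuming the files live under /etc/ceph on the Ceph admin node and the master is reachable as k8s-master (both the paths and the hostname are assumptions, adjust to your environment):

# Run on the Ceph admin node; k8s-master is a placeholder hostname
scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring k8s-master:/etc/ceph/
# Then, on the master, confirm the node can talk to the cluster
ceph -s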

2. Create the OSD pool (on a Ceph mon or admin node)
ceph osd pool create kube-data 128 128
ceph osd pool application enable kube-data rbd
ceph osd pool ls
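
As a quick sanity check before wiring the pool into k8s, you can read back its placement-group count and enabled application (optional; both are standard ceph queries):

ceph osd pool get kube-data pg_num
ceph osd pool application get kube-data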

3. Create the user that k8s will use to access Ceph (on a Ceph mon or admin node)
ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube-data' -o ceph.client.kube.keyring
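
To confirm the user and its capabilities were created as intended:

ceph auth get client.kube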

4. Retrieve the keys of the admin and kube users
ceph auth get-key client.admin;echo
ceph auth get-key client.kube;echo
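
If you would rather not copy the keys by hand, a small sketch that captures them into shell variables for the kubectl commands below (the variable names are mine):

ADMIN_KEY=$(ceph auth get-key client.admin)
KUBE_KEY=$(ceph auth get-key client.kube)
# These can then be passed as --from-literal=key=$ADMIN_KEY / --from-literal=key=$KUBE_KEY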

K8S Configuration

Create Secrets

  • The key does not need to be base64-encoded; kubectl create secret encodes the literal value itself
1. Create the admin secret, used by the Kubernetes cluster to access Ceph
kubectl create secret generic ceph-admin-secret --type="kubernetes.io/rbd" \
--from-literal=key=AQBAeaRgpKjkABAAbQz7RV/7meaFYKO8oZydsQ== \
--namespace=kube-system
2. In the default namespace, create the secret that PVCs use to access Ceph
kubectl create secret generic ceph-user-secret --type="kubernetes.io/rbd" \
--from-literal=key=AQDlGKZgG2xRNxAA4DYniPBpaV5SAyU1/QH/5w== \
--namespace=default
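
To verify both secrets landed in the expected namespaces:

kubectl -n kube-system get secret ceph-admin-secret
kubectl -n default get secret ceph-user-secret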

If services in other namespaces mount Ceph storage, create a corresponding secret in each of those namespaces following the method in step 2.

For example:

kubectl create secret generic ceph-user-secret --type="kubernetes.io/rbd" \
--from-literal=key=AQDlGKZgG2xRNxAA4DYniPBpaV5SAyU1/QH/5w== \
--namespace=monitoring
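
If several namespaces need access, a loop avoids the repetition; the namespace list below is purely illustrative, and the secret name must remain ceph-user-secret because the StorageClass references it by name:

for ns in default monitoring; do
  kubectl create secret generic ceph-user-secret --type="kubernetes.io/rbd" \
    --from-literal=key=AQDlGKZgG2xRNxAA4DYniPBpaV5SAyU1/QH/5w== \
    --namespace=$ns
done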

Create the StorageClass

cat > external-storage-ceph-rbd.yaml << 'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: dynamic-ceph-rbd
provisioner: kubernetes.io/rbd
# Reclaim policy: Retain (keep the volume) or Delete (remove it); default: Delete
reclaimPolicy: Retain
parameters:
  monitors: 192.168.2.101:6789,192.168.2.102:6789,192.168.2.103:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  pool: kube-data
  userId: kube
  userSecretName: ceph-user-secret
EOF

kubectl apply -f external-storage-ceph-rbd.yaml
kubectl get sc

  • monitors: addresses of the Ceph cluster mon nodes
  • adminId: ID of the Ceph admin user
  • adminSecretName: name of the secret for the Ceph admin user in k8s
  • adminSecretNamespace: namespace in which the admin user secret was created
  • pool: the pool created in the Ceph cluster
  • userId: the kube user
  • userSecretName: name of the secret created in k8s for the Ceph kube user
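
Note that kubernetes.io/rbd is the in-tree provisioner, which requires the rbd binary to be available to kube-controller-manager; if it is not, PVCs stay Pending. Inspecting the StorageClass and recent events is a quick way to diagnose this:

kubectl describe sc dynamic-ceph-rbd
kubectl get events --sort-by=.metadata.creationTimestamp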

Create a Test PVC

cat > ceph-claim-rbd-test.yaml << 'EOF'
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim-rbd-test
spec:
  accessModes:     
    - ReadWriteOnce
  storageClassName: dynamic-ceph-rbd
  resources:
    requests:
      storage: 2Gi
EOF
kubectl apply -f ceph-claim-rbd-test.yaml

kubectl get pv
kubectl get pvc
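
Once the PVC is Bound, a matching RBD image should exist in the pool; confirm from a Ceph node, and if the claim does not bind, inspect its events:

# On a Ceph node: list images in the pool backing the StorageClass
rbd ls kube-data
# On the k8s side: troubleshoot a Pending claim
kubectl describe pvc ceph-claim-rbd-test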

Verify the PVC by Mounting It in a Pod

cat > nginx-deployment-dynamic-pvc.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        volumeMounts:
        - name: www-data
          mountPath: /usr/share/nginx/html

      volumes:
      - name: www-data
        persistentVolumeClaim:
          claimName: ceph-claim-nginx
---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-claim-nginx
spec:
  storageClassName: dynamic-ceph-rbd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

---

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: default
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  type: NodePort
EOF

kubectl apply -f nginx-deployment-dynamic-pvc.yaml

Create a default page in the pod
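
The pod name used below comes from the author's run and will differ in yours; list your pods first:

kubectl get pods -l app=nginx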

kubectl exec -it nginx-b89b49bf9-g6bsg -- /bin/bash
# echo "dynamic-cephfs-test" > /usr/share/nginx/html/index.html

Access test; if the following content is returned, the setup works

curl http://10.0.0.146
dynamic-cephfs-test
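
The curl above appears to target the service's ClusterIP; since nginx-service is type NodePort, it can also be reached from outside the cluster (node IP and port below are placeholders to fill in from the output):

kubectl get svc nginx-service
curl http://<node-ip>:<node-port>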