k8s: dynamically mounting Ceph RBD
Using Ceph from k8s: this article walks through how Kubernetes consumes Ceph RBD storage.
1. You need a working Ceph cluster; setting one up is not covered here.
2. Make sure the cluster status is not in an error state (a quick check is shown below).
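A minimal way to verify this from any Ceph admin node (adjust the host to your environment):
root@ubuntu2-15:~# ceph -s             # overall status; HEALTH_OK or HEALTH_WARN is workable, HEALTH_ERR is not
root@ubuntu2-15:~# ceph health detail  # shows the reason when the status is not OK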
Dynamically mounting Ceph RBD storage from k8s
- First, create a pool on the Ceph cluster and initialize it
root@ubuntu2-15:~# ceph osd pool create kubernetes 16 16
pool 'kubernetes' already exists
root@ubuntu2-15:~# rbd pool init kubernetes
root@ubuntu2-15:~# ceph osd pool ls
device_health_metrics
myfs-metadata
myfs-data0
postgres
my-store.rgw.control
my-store.rgw.meta
my-store.rgw.log
my-store.rgw.buckets.index
my-store.rgw.buckets.non-ec
.rgw.root
my-store.rgw.buckets.data
kubernetes
- Create a Ceph user for the kubernetes pool that the k8s cluster will use
root@ubuntu2-15:~# ceph auth get-or-create client.kubernetes mon 'allow *' mds 'allow *' osd 'allow *'
[client.kubernetes]
key = AQC/EodiBYJXABAAfcDFDItPJ8Ve2xRa3ZhyuA==
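The caps above are very broad. As an aside, the upstream Ceph/ceph-csi documentation scopes the user to the pool with RBD profiles; an alternative (not what was used above) would look like:
root@ubuntu2-15:~# ceph auth get-or-create client.kubernetes mon 'profile rbd' osd 'profile rbd pool=kubernetes' mgr 'profile rbd pool=kubernetes'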
- On the k8s side, create the ConfigMaps
root@k8s-ceph:~/lim/csi# cat csi-config-map.yaml
---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "985fbc88-22d9-47d4-96b1-166c106d2787",
        "monitors": [
          "192.168.2.15:6789",
          "192.168.2.16:6789",
          "192.168.2.17:6789"
        ]
      }
    ]
metadata:
  name: ceph-csi-config
root@k8s-ceph:~/lim/csi# kubectl apply -f csi-config-map.yaml
configmap/ceph-csi-config created
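The clusterID is the Ceph fsid and the monitors are the cluster's mon addresses; both can be read from the Ceph side, for example:
root@ubuntu2-15:~# ceph fsid       # prints the fsid used as clusterID above
root@ubuntu2-15:~# ceph mon dump   # lists the monitor addresses for the "monitors" field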
The latest ceph-csi also requires a KMS ConfigMap. If the cluster has no KMS configured, the value can be empty, but the ConfigMap must still be created.
root@k8s-ceph:~/lim/csi# cat csi-kms-config-map.yaml
---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    {}
metadata:
  name: ceph-csi-encryption-kms-config
root@k8s-ceph:~/lim/csi# kubectl apply -f csi-kms-config-map.yaml
configmap/ceph-csi-encryption-kms-config created
The keyring in ceph-config-map.yaml needs to be the admin keyring (taken from /etc/ceph/keyring, shown below).
root@k8s-ceph:~/lim/csi# cat ceph-config-map.yaml
---
apiVersion: v1
kind: ConfigMap
data:
  ceph.conf: |
    [global]
  # keyring is a required key and its value should be empty
  keyring: AQCdGF1i+4v7HRAAhxj6p2EV9z7sZ38CtEqakw==
metadata:
  name: ceph-config
root@k8s-ceph:~/lim/csi# kubectl apply -f ceph-config-map.yaml
configmap/ceph-config created
root@k8s-ceph:~/lim/csi# cat /etc/ceph/keyring
[client.admin]
key = AQCdGF1i+4v7HRAAhxj6p2EV9z7sZ38CtEqakw==
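If /etc/ceph/keyring is not available on the node you are working from, the same key can be read from the cluster directly (assuming the default client.admin user exists):
root@ubuntu2-15:~# ceph auth get-key client.admin
AQCdGF1i+4v7HRAAhxj6p2EV9z7sZ38CtEqakw==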
- Create a Secret using the client.kubernetes user created in the second step
root@k8s-ceph:~/lim/csi# cat csi-rbd-secret.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: default
stringData:
  userID: kubernetes
  userKey: AQC/EodiBYJXABAAfcDFDItPJ8Ve2xRa3ZhyuA==
root@k8s-ceph:~/lim/csi# kubectl apply -f csi-rbd-secret.yaml
secret/csi-rbd-secret created
root@k8s-ceph:~/lim/csi#
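As a quick sanity check, the Secret can be decoded to confirm it carries the expected user (stringData is stored base64-encoded under data):
root@k8s-ceph:~/lim/csi# kubectl get secret csi-rbd-secret -o jsonpath='{.data.userID}' | base64 -d
kubernetes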
- Deploy the ceph-csi plugins (fetching these manifests requires access to raw.githubusercontent.com; you can also open the URLs in a browser, copy the YAML, and paste it into files on your machine)
$ kubectl apply -f https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-provisioner-rbac.yaml
$ kubectl apply -f https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-nodeplugin-rbac.yaml
# create these two RBAC manifests first.
$ wget https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-rbdplugin-provisioner.yaml
$ kubectl apply -f csi-rbdplugin-provisioner.yaml
$ wget https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-rbdplugin.yaml
$ kubectl apply -f csi-rbdplugin.yaml
---
All of the above can be retrieved by opening the URLs in a browser and copying the content into files on your own machine.
root@k8s-ceph:~/lim/csi# kubectl apply -f csi-nodeplugin-rbac.yaml
serviceaccount/rbd-csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
root@k8s-ceph:~/lim/csi# kubectl get pod
No resources found in default namespace.
root@k8s-ceph:~/lim/csi# kubectl get sa
NAME SECRETS AGE
default 1 123m
rbd-csi-nodeplugin 1 16s
root@k8s-ceph:~/lim/csi# kubectl apply -f csi-provisioner-rbac.yaml
serviceaccount/rbd-csi-provisioner created
clusterrole.rbac.authorization.k8s.io/rbd-external-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role created
role.rbac.authorization.k8s.io/rbd-external-provisioner-cfg created
rolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role-cfg created
root@k8s-ceph:~/lim/csi# kubectl get sa
NAME SECRETS AGE
default 1 123m
rbd-csi-nodeplugin 1 34s
rbd-csi-provisioner 1 4s
root@k8s-ceph:~/lim/csi# kubectl apply -f csi-rbdplugin-provisioner.yaml
service/csi-rbdplugin-provisioner created
deployment.apps/csi-rbdplugin-provisioner created
root@k8s-ceph:~/lim/csi# ls
ceph-config-map.yaml csi-config-map.yaml csi-kms-config-map.yaml csi-nodeplugin-rbac.yaml csi-provisioner-rbac.yaml csi-rbdplugin-provisioner.yaml csi-rbdplugin.yaml csi-rbd-sc.yaml csi-rbd-secret.yaml csi.tar.gz raw-block-pod.yaml raw-block-pvc.yaml
root@k8s-ceph:~/lim/csi# kubectl get pod
NAME READY STATUS RESTARTS AGE
csi-rbdplugin-provisioner-54d9db86b5-dczd9 0/7 ContainerCreating 0 4s
root@k8s-ceph:~/lim/csi# kubectl apply -f csi-rbdplugin.yaml
daemonset.apps/csi-rbdplugin created
service/csi-metrics-rbdplugin created
root@k8s-ceph:~/lim/csi# kubectl get pod
NAME READY STATUS RESTARTS AGE
csi-rbdplugin-nrdmh 3/3 Running 0 21s
csi-rbdplugin-provisioner-54d9db86b5-dczd9 7/7 Running 0 35s
This step creates two sets of pods; their images are pulled from registries that may also require a proxy to reach. (My network was too slow to upload them to a file share; send me a private message if you really cannot pull them.)
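If you need to mirror the images to a private registry first, a simple grep over the two manifests shows which ones the pods will pull (the exact image list depends on the ceph-csi revision you downloaded):
root@k8s-ceph:~/lim/csi# grep 'image:' csi-rbdplugin-provisioner.yaml csi-rbdplugin.yaml | sort -u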
- Create a StorageClass so PVs can be provisioned dynamically
root@k8s-ceph:~/lim/csi# cat csi-rbd-sc.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: 985fbc88-22d9-47d4-96b1-166c106d2787   # the cluster fsid, as reported by ceph -s
  pool: kubernetes
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret   # the Secret created above
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard
root@k8s-ceph:~/lim/csi# kubectl apply -f csi-rbd-sc.yaml
storageclass.storage.k8s.io/csi-rbd-sc created
root@k8s-ceph:~/lim/csi# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
csi-rbd-sc rbd.csi.ceph.com Delete Immediate true 59s
- Create a PVC
root@k8s-ceph:~/lim/csi# kubectl get pvc,pv
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/aiplatform-ailab-data-pvc Bound default-aiplatform-ailab-data-pv 300Mi RWX 3d3h
persistentvolumeclaim/aiplatform-app-data-pvc Bound default-aiplatform-app-data-pv 300Mi RWX 3d3h
persistentvolumeclaim/aiplatform-dataset-data-pvc Bound default-aiplatform-dataset-data-pv 300Mi RWX 3d3h
persistentvolumeclaim/aiplatform-gitea-data-pvc Bound default-aiplatform-gitea-data-pv 300Mi RWX 3d3h
persistentvolumeclaim/aiplatform-model-data-pvc Bound default-aiplatform-model-data-pv 300Mi RWX 3d3h
persistentvolumeclaim/static-rbd-claim Bound static-rbd-pv 2Gi RWO,ROX 3d3h
persistentvolumeclaim/static-rbd-k8s-claim Bound static-rbd-k8s-pv 1Gi RWO,ROX 84m
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/default-aiplatform-ailab-data-pv 300Mi RWX Retain Bound default/aiplatform-ailab-data-pvc 3d3h
persistentvolume/default-aiplatform-app-data-pv 300Mi RWX Retain Bound default/aiplatform-app-data-pvc 3d3h
persistentvolume/default-aiplatform-dataset-data-pv 300Mi RWX Retain Bound default/aiplatform-dataset-data-pvc 3d3h
persistentvolume/default-aiplatform-gitea-data-pv 300Mi RWX Retain Bound default/aiplatform-gitea-data-pvc 3d3h
persistentvolume/default-aiplatform-model-data-pv 300Mi RWX Retain Bound default/aiplatform-model-data-pvc 3d3h
persistentvolume/kube-system-aiplatform-component-data-pv 300Mi RWX Retain Bound kube-system/aiplatform-component-data-pvc 3d3h
persistentvolume/logging-aiplatform-logging-data-pv 300Mi RWX Retain Available 3d3h
persistentvolume/static-rbd-k8s-pv 1Gi RWO,ROX Retain Bound default/static-rbd-k8s-claim 84m
persistentvolume/static-rbd-pv 2Gi RWO,ROX Retain Bound default/static-rbd-claim 3d3h
root@k8s-ceph:~/lim/csi# cat raw-block-pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
root@k8s-ceph:~/lim/csi# kubectl apply -f raw-block-pvc.yaml
persistentvolumeclaim/raw-block-pvc created
root@k8s-ceph:~/lim/csi# kubectl get pvc,pv
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/aiplatform-ailab-data-pvc Bound default-aiplatform-ailab-data-pv 300Mi RWX 3d3h
persistentvolumeclaim/aiplatform-app-data-pvc Bound default-aiplatform-app-data-pv 300Mi RWX 3d3h
persistentvolumeclaim/aiplatform-dataset-data-pvc Bound default-aiplatform-dataset-data-pv 300Mi RWX 3d3h
persistentvolumeclaim/aiplatform-gitea-data-pvc Bound default-aiplatform-gitea-data-pv 300Mi RWX 3d3h
persistentvolumeclaim/aiplatform-model-data-pvc Bound default-aiplatform-model-data-pv 300Mi RWX 3d3h
persistentvolumeclaim/raw-block-pvc Bound pvc-68eac306-2863-4efb-8abc-f395eca95169 1Gi RWO csi-rbd-sc 18m
persistentvolumeclaim/static-rbd-claim Bound static-rbd-pv 2Gi RWO,ROX 3d3h
persistentvolumeclaim/static-rbd-k8s-claim Bound static-rbd-k8s-pv 1Gi RWO,ROX 103m
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/default-aiplatform-ailab-data-pv 300Mi RWX Retain Bound default/aiplatform-ailab-data-pvc 3d3h
persistentvolume/default-aiplatform-app-data-pv 300Mi RWX Retain Bound default/aiplatform-app-data-pvc 3d3h
persistentvolume/default-aiplatform-dataset-data-pv 300Mi RWX Retain Bound default/aiplatform-dataset-data-pvc 3d3h
persistentvolume/default-aiplatform-gitea-data-pv 300Mi RWX Retain Bound default/aiplatform-gitea-data-pvc 3d3h
persistentvolume/default-aiplatform-model-data-pv 300Mi RWX Retain Bound default/aiplatform-model-data-pvc 3d3h
persistentvolume/kube-system-aiplatform-component-data-pv 300Mi RWX Retain Bound kube-system/aiplatform-component-data-pvc 3d3h
persistentvolume/logging-aiplatform-logging-data-pv 300Mi RWX Retain Available 3d3h
persistentvolume/pvc-68eac306-2863-4efb-8abc-f395eca95169 1Gi RWO Delete Bound default/raw-block-pvc csi-rbd-sc 18m
As you can see, creating the PVC is enough for a PV to be provisioned automatically.
- Mount the PVC from a pod
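The example above requests a raw block device (volumeMode: Block). For the more common case of a mounted filesystem, a PVC against the same StorageClass would look roughly like this (a sketch only; the name fs-rbd-pvc is illustrative):
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fs-rbd-pvc              # hypothetical name, for illustration
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem        # ceph-csi formats the RBD image (ext4 unless csi.storage.k8s.io/fstype says otherwise) and mounts it
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc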
root@k8s-ceph:~/lim/csi# cat raw-block-pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-raw-block-volume
spec:
  containers:
    - name: fc-container
      image: fedora:26
      command: ["/bin/sh", "-c"]
      args: ["tail -f /dev/null"]
      volumeDevices:
        - name: data
          devicePath: /dev/xvda
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: raw-block-pvc
root@k8s-ceph:~/lim/csi# kubectl apply -f raw-block-pod.yaml
pod/pod-with-raw-block-volume created
root@k8s-ceph:~/lim/csi# kubectl get pod
NAME READY STATUS RESTARTS AGE
csi-rbdplugin-nrdmh 3/3 Running 0 45m
csi-rbdplugin-provisioner-54d9db86b5-dczd9 7/7 Running 5 45m
pod-with-raw-block-volume 1/1 Running 0 19s
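To confirm the volume is really backed by RBD, check the block device inside the pod and the image on the Ceph side (verification only; the image name will differ in your cluster):
root@k8s-ceph:~/lim/csi# kubectl exec pod-with-raw-block-volume -- ls -l /dev/xvda
root@ubuntu2-15:~# rbd ls kubernetes   # a dynamically created image (named like csi-vol-<uuid>) should be listed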
If the mount fails at this step, install ceph-common on the node: apt install ceph-common.
With that, dynamic provisioning of Ceph RBD in k8s is complete.