Kubernetes with Ceph: static persistent storage
I. Using PV/PVC (Ceph RBD mount)
1. Create the Ceph secret
Kubernetes needs a secret to access Ceph, mainly so that it can map the RBD image.
1) On the Ceph master node, run the following command to get the base64-encoded key of the admin user (in production, consider creating a dedicated user for Kubernetes instead):
[root@k8s-master1 ~]# ceph auth get-key client.admin | base64
QVFDS21YUmlaTVYyRWhBQWVqbTBHQnpmaW5SYmpqS3dpN0ZaZmc9PQ==
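The value above is what goes into the Secret's data.key field; Kubernetes base64-decodes it back to the raw Ceph key when mapping the image. A quick local sanity check that the encoded value round-trips (using the example key from this article, not a real credential):

```shell
# Decode the base64 string from the Secret back to the raw Ceph key.
echo 'QVFDS21YUmlaTVYyRWhBQWVqbTBHQnpmaW5SYmpqS3dpN0ZaZmc9PQ==' | base64 -d
# AQCKmXRiZMV2EhAAejm0GBzfinRbjjKwi7FZfg==
```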
2) Create the Ceph secret in Kubernetes:
[root@k8s-master1 ceph_pvc_pv]# cat ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: dev  # same namespace as the Deployment
data:
  key: QVFDS21YUmlaTVYyRWhBQWVqbTBHQnpmaW5SYmpqS3dpN0ZaZmc9PQ==
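Equivalently, the secret can be created in one step without hand-encoding the key (a sketch; it assumes the command runs on a host that has both the ceph CLI and cluster access with kubectl — kubectl base64-encodes the literal itself):

```
kubectl create secret generic ceph-secret \
  --from-literal=key="$(ceph auth get-key client.admin)" \
  -n dev
```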
2. Create the image
Here a pool named ceph_rbd is used. Run the following on a host with the Ceph client installed, or directly on the Ceph master node, to create the image:
[root@k8s-master1 ceph_pvc_pv]# rbd create --size 1024 ceph_rbd/docker_image
[root@k8s-master1 ceph_pvc_pv]# rbd info ceph_rbd/docker_image
rbd image 'docker_image':
        size 1GiB in 256 objects
        order 22 (4MiB objects)
        block_name_prefix: rbd_data.20c766b8b4567
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        flags:
        create_timestamp: Mon May  9 15:35:09 2022
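The rbd info output is internally consistent: order 22 means 2^22-byte (4 MiB) objects, so the 1 GiB image is striped across 1024 / 4 = 256 objects. A quick check of the arithmetic:

```shell
# order 22 => object size 2^22 bytes, shown here in MiB
echo $(( (1 << 22) / 1024 / 1024 ))
# 4
# 1 GiB image (1024 MiB) striped over 4 MiB objects
echo $(( 1024 / 4 ))
# 256
```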
3. Create the PV
1) Write pv.yaml:
[root@k8s-master1 ceph_pvc_pv]# cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
  storageClassName: nginx  # only a PVC with the same storageClassName can bind to this PV
  rbd:
    monitors:
      - 10.0.19.127:6789
      - 10.0.19.129:6789
      - 10.0.19.130:6789
    pool: ceph_rbd
    image: docker_image
    user: admin
    secretRef:
      name: ceph-secret  # name of the secret created above
    fsType: ext4
  persistentVolumeReclaimPolicy: Retain
2) Create the PV and check it:
[root@k8s-master1 ceph_pvc_pv]# kubectl apply -f pv.yaml
persistentvolume/ceph-pv created
[root@k8s-master1 ceph_pvc_pv]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
ceph-pv 1Gi RWO,ROX Retain Available nginx 6s
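For scripting, the PV phase can also be read directly with a jsonpath query (assumes the PV name from above):

```
kubectl get pv ceph-pv -o jsonpath='{.status.phase}'
```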
4. Create the PVC to claim the PV
1) Write pvc.yaml:
[root@k8s-master1 ceph_pvc_pv]# cat pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-nginx
  namespace: dev
spec:
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
  storageClassName: nginx  # only a PV with the same storageClassName can be bound
  resources:
    requests:
      storage: 1Gi
2) Create the PVC and check it:
[root@k8s-master1 ceph_pvc_pv]# kubectl apply -f pvc.yaml
persistentvolumeclaim/ceph-nginx created
[root@k8s-master1 ceph_pvc_pv]# kubectl get pvc -n dev
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
ceph-nginx Bound ceph-pv 1Gi RWO,ROX nginx 47s
5. Mount via the PVC
1) Next, create deployment.yaml to use the PVC:
[root@k8s-master1 ceph_pvc_pv]# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mysql  # label key app, value mysql on the Deployment object itself
  name: lnmp
  namespace: dev  # same namespace as the PVC
spec:
  replicas: 1  # number of replicas
  selector:  # label selector, works together with the Pod template labels below
    matchLabels:  # select Pods carrying the label app: lnmp
      app: lnmp
  template:  # template for the Pods this Deployment creates
    metadata:  # Pod metadata
      labels:  # Pod labels; the selector above matches Pods with app: lnmp
        app: lnmp
    spec:  # what runs in the Pod
      containers:  # containers here are the same concept as Docker containers
      - name: nginx
        image: nginx:1.8  # container from the nginx image; serves on port 80 by default
        ports:
        - containerPort: 80  # expose port 80 of this container
        volumeMounts:  # mount the persistent volume
        - name: nginx-data  # mount name, must match volumes[*].name below
          mountPath: /var/log/nginx  # path inside the container
      volumes:
      - name: nginx-data  # must match the volumeMounts name above
        persistentVolumeClaim:
          claimName: ceph-nginx  # PVC name
          readOnly: false  # false = read-write, true = read-only
2) Apply it to the cluster:
[root@k8s-master1 ceph_pvc_pv]# kubectl apply -f deployment.yaml
deployment.apps/lnmp created
3) Check the Pod status. It stays stuck in ContainerCreating, and kubectl describe shows the error:
[root@k8s-master1 ceph_pvc_pv]# kubectl get pod -n dev
NAME READY STATUS RESTARTS AGE
lnmp-75cfb68758-58zxf 0/1 ContainerCreating 0 76s
[root@k8s-master1 ceph_pvc_pv]# kubectl describe pod lnmp-75cfb68758-58zxf -n dev
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned dev/lnmp-75cfb68758-58zxf to k8s-node1
Normal SuccessfulAttachVolume 100s attachdetach-controller AttachVolume.Attach succeeded for volume "ceph-pv"
Warning FailedMount 31s (x8 over 96s) kubelet, k8s-node1 MountVolume.WaitForAttach failed for volume "ceph-pv" : rbd: map failed exit status 6, rbd output: rbd: sysfs write failed
RBD image feature set mismatch. Try disabling features unsupported by the kernel with "rbd feature disable".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
Fix:
The message "RBD image feature set mismatch. Try disabling features unsupported by the kernel with 'rbd feature disable'." shows that the image has features enabled that the node's kernel RBD client does not support, so they need to be disabled:
rbd feature disable ceph_rbd/docker_image exclusive-lock object-map fast-diff deep-flatten
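To avoid this error in the first place, the image can be created with only the layering feature enabled, which older kernel RBD clients support (a sketch using the pool and image names from this article):

```
# create the image with only the layering feature from the start
rbd create --size 1024 --image-feature layering ceph_rbd/docker_image
```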
4) Check again whether it succeeded:
[root@k8s-master1 ceph_pvc_pv]# kubectl describe pod lnmp-75cfb68758-58zxf -n dev
Events:
Type Reason Age From
Warning FailedMount 4m3s (x2 over 6m21s) kubelet, k8s-node1 Unable to attach or mount volumes: unmounted volumes=[nginx-data], unattached volumes=[nginx-data default-token-9lqp5]: timed out waiting for the condition
Normal Pulled 2m21s kubelet, k8s-node1 Container image "nginx:1.8" already present on machine
Normal Created 2m20s kubelet, k8s-node1 Created container nginx
Normal Started 2m20s kubelet, k8s-node1 Started container nginx
[root@k8s-master1 ceph_pvc_pv]# kubectl get pod -n dev
NAME READY STATUS RESTARTS AGE
lnmp-75cfb68758-58zxf 1/1 Running 0 10m
5) Log in to the node running the Pod and verify that the Ceph device is mounted:
[root@k8s-master1 ceph_pvc_pv]# kubectl get pod -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
lnmp-75cfb68758-58zxf 1/1 Running 0 12m 10.244.0.223 k8s-node1 <none> <none>
[root@k8s-node1 ~]# lsblk -l|grep rbd0
rbd0 252:0 0 1G 0 disk /var/lib/kubelet/pods/bef02a75-cdca-438c-9e83-8984de79cedd/volumes/kubernetes.io~rbd/ceph-pv
[root@k8s-node1 ~]# ls /var/lib/kubelet/pods/bef02a75-cdca-438c-9e83-8984de79cedd/volumes/kubernetes.io~rbd/ceph-pv
access.log error.log lost+found
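Because the PV's reclaim policy is Retain and the volume is backed by the RBD image, data written to it should survive Pod restarts. A sketch of how to verify this (it assumes the Pod name shown above; the name changes when the Deployment recreates the Pod):

```
# write a marker file into the mounted volume
kubectl exec -n dev lnmp-75cfb68758-58zxf -- sh -c 'echo test > /var/log/nginx/marker'
# delete the Pod; the Deployment recreates it and remounts the same RBD image
kubectl delete pod -n dev lnmp-75cfb68758-58zxf
# once the new Pod is Running, the marker file should still be there
kubectl exec -n dev "$(kubectl get pod -n dev -l app=lnmp -o jsonpath='{.items[0].metadata.name}')" -- cat /var/log/nginx/marker
```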