K8s Cluster Practice, Part 9: Ceph
1. Overview
Due to the limited resources of the Orange Pi and Raspberry Pi boards, the practice moves to VMs built on VirtualBox.
Virtual machine environment:

| Host Name | IP | Configuration | Notes |
| --- | --- | --- | --- |
| k8s-c0-master0 | 10.0.3.6 | Ubuntu 22.04.3 LTS, 8 cores / 32G RAM, 200G (sda) + 100G (sdb) | VM |
| k8s-c0-node0 | 10.0.3.7 | Ubuntu 22.04.3 LTS, 4 cores / 8G RAM, 200G (sda) + 100G (sdb) | VM |
| k8s-c0-node1 | 10.0.3.8 | Ubuntu 22.04.3 LTS, 4 cores / 8G RAM, 200G (sda) + 100G (sdb) | VM |
2. Preparation
- Download ubuntu-22.04.2-live-server-amd64.iso and attach it as the boot disc
- Give each VM two NICs: one bridged (for access from the local network) and one on a NAT network (the K8s internal network)
- During installation, select the mirror (important): https://mirrors.aliyun.com/ubuntu
- Initialize the K8s environment
- To avoid containerd image pull failures, configure a proxy for the runtime by editing /lib/systemd/system/containerd.service (a drop-in alternative is sketched after the snippet below)
[Service]
Environment="HTTP_PROXY=http://192.168.0.108:1081"
Environment="HTTPS_PROXY=http://192.168.0.108:1081"
Environment="NO_PROXY=aliyun.com,aliyuncs.com,huaweicloud.com,k8s-master-0,k8s-master-1,k8s-worker-0,localhost,127.0.0.1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
systemctl daemon-reload && systemctl restart containerd
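Editing the unit file directly works, but the change may be overwritten when the containerd package is upgraded. A minimal alternative sketch using a systemd drop-in (the file name http-proxy.conf is arbitrary), with the same proxy values as above:
mkdir -p /etc/systemd/system/containerd.service.d
cat > /etc/systemd/system/containerd.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://192.168.0.108:1081"
Environment="HTTPS_PROXY=http://192.168.0.108:1081"
Environment="NO_PROXY=aliyun.com,aliyuncs.com,huaweicloud.com,k8s-master-0,k8s-master-1,k8s-worker-0,localhost,127.0.0.1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
EOF
systemctl daemon-reload && systemctl restart containerd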
3. Solution and Installation Steps
3.1 Rook
Rook is an open source cloud-native storage orchestrator, providing the platform, framework, and support for Ceph storage to natively integrate with cloud-native environments.
3.2 Installation Requirements
- Raw devices (no partitions or formatted filesystems)
- Raw partitions (no formatted filesystem)
- LVM Logical Volumes (no formatted filesystem)
- Persistent Volumes available from a storage class in block mode
In short: each node has a spare raw disk (the 100G /dev/sdb).
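A quick check that /dev/sdb is still raw on every node (no partitions, no filesystem signatures), so that Rook will pick it up:
# FSTYPE must be empty for sdb
lsblk -f /dev/sdb
# wipefs without options only lists existing signatures; it does not erase anything
wipefs /dev/sdb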
Make sure there are at least 3 schedulable nodes; remove the control-plane taint:
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
3.3 Installation Steps
1. Fetch the rook repository into the installation path, e.g. /k8s_apps/rook (run the clone from inside /k8s_apps):
git clone --single-branch --branch v1.12.6 https://github.com/rook/rook.git
2. Write the installation script, /k8s_apps/scripts/k8s-rook-ceph.sh
#!/bin/bash
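# Deploy the Rook operator: CRDs, common resources, and the operator itself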
kubectl apply -f /k8s_apps/rook/deploy/examples/crds.yaml
kubectl apply -f /k8s_apps/rook/deploy/examples/common.yaml
kubectl apply -f /k8s_apps/rook/deploy/examples/operator.yaml
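# The operator pod should be Running before the cluster is created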
kubectl -n rook-ceph get pod
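# Create the Ceph cluster, the toolbox pod, and the HTTPS dashboard ingress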
kubectl apply -f /k8s_apps/rook/deploy/examples/cluster.yaml
kubectl apply -f /k8s_apps/rook/deploy/examples/toolbox.yaml
kubectl apply -f /k8s_apps/rook/deploy/examples/dashboard-ingress-https.yaml
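# Print the generated dashboard password (user: admin)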
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
# Create the additional storage types: shared filesystem (CephFS), NFS, and object store
kubectl apply -f /k8s_apps/rook/deploy/examples/filesystem.yaml
kubectl apply -f /k8s_apps/rook/deploy/examples/nfs.yaml
kubectl apply -f /k8s_apps/rook/deploy/examples/object.yaml
# Declare the block (RBD) storage class
kubectl apply -f /k8s_apps/rook/deploy/examples/csi/rbd/storageclass.yaml
# Example workloads that use the storage class above
kubectl apply -f /k8s_apps/rook/deploy/examples/mysql.yaml
kubectl apply -f /k8s_apps/rook/deploy/examples/wordpress.yaml
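Before relying on the mysql/wordpress examples at the end of the script, the block StorageClass can be smoke-tested with a standalone claim. A minimal sketch (the claim name test-rbd-pvc is made up for illustration; rook-ceph-block is the class declared by csi/rbd/storageclass.yaml):
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block
EOF
# The claim should reach the Bound state within a few seconds
kubectl get pvc test-rbd-pvc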
3. Write the teardown script, /k8s_apps/scripts/rook-ceph-delete.sh
#!/bin/bash
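# Remove the example workloads and storage resources before deleting the cluster and the operator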
kubectl delete -f /k8s_apps/rook/deploy/examples/wordpress.yaml
kubectl delete -f /k8s_apps/rook/deploy/examples/mysql.yaml
kubectl delete -n rook-ceph cephblockpool replicapool
kubectl delete storageclass rook-ceph-block
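# The next two lines are only needed if the CephFS storage class / kube-registry example was applied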
kubectl delete -f /k8s_apps/rook/deploy/examples/csi/cephfs/kube-registry.yaml
kubectl delete storageclass csi-cephfs
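# The filesystem, NFS and object store created by the install script also need to be removed before the cluster
kubectl delete -f /k8s_apps/rook/deploy/examples/filesystem.yaml
kubectl delete -f /k8s_apps/rook/deploy/examples/nfs.yaml
kubectl delete -f /k8s_apps/rook/deploy/examples/object.yaml
# Mark the cluster for full data cleanup, then delete it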
kubectl -n rook-ceph patch cephcluster rook-ceph --type merge -p '{"spec":{"cleanupPolicy":{"confirmation":"yes-really-destroy-data"}}}'
kubectl -n rook-ceph delete cephcluster rook-ceph
kubectl -n rook-ceph get cephcluster
kubectl delete -f /k8s_apps/rook/deploy/examples/operator.yaml
kubectl delete -f /k8s_apps/rook/deploy/examples/common.yaml
kubectl delete -f /k8s_apps/rook/deploy/examples/crds.yaml
kubectl delete -f /k8s_apps/rook/deploy/examples/toolbox.yaml
kubectl delete -f /k8s_apps/rook/deploy/examples/dashboard-ingress-https.yaml
kubectl delete -f /k8s_apps/rook/deploy/examples/csi/rbd/storageclass.yaml
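If the cleanupPolicy job does not complete, the hosts may still hold Ceph state, and a reinstall will fail because /dev/sdb is no longer a raw device. A hedged sketch of the manual host-side cleanup (run on every node, assuming /dev/sdb is the OSD disk and nothing else uses it):
rm -rf /var/lib/rook
sgdisk --zap-all /dev/sdb
dd if=/dev/zero of=/dev/sdb bs=1M count=100 oflag=direct,dsync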
4. Run the installation script. If everything goes well, the cluster status looks like the output below; if not, run the teardown script, fix the problem, and start over.
Exec into the toolbox pod to check the cluster status:
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
bash-4.4$ ceph -s
  cluster:
    id:     1f97e89d-faba-405a-a200-4eaeb9c11035
    health: HEALTH_WARN
            clock skew detected on mon.c

  services:
    mon: 3 daemons, quorum a,b,c (age 17h)
    mgr: b(active, since 17h), standbys: a
    osd: 3 osds: 3 up (since 17h), 3 in (since 2d)

  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 449 KiB
    usage:   64 MiB used, 300 GiB / 300 GiB avail
    pgs:     1 active+clean
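The HEALTH_WARN above comes from clock skew between the VMs' monitors. Keeping the nodes time-synced usually clears it; a minimal sketch on Ubuntu (assuming the VMs can reach an NTP source):
# Run on every node
apt-get install -y chrony
systemctl enable --now chrony
# Verify the reported offset is small
chronyc tracking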
4. Useful Commands
- Watch pod creation progress in real time
kubectl get pod -n rook-ceph -w
- Watch cluster creation progress in real time
kubectl get cephcluster -n rook-ceph rook-ceph -w
- Describe the cluster in detail
kubectl describe cephcluster -n rook-ceph rook-ceph
# Exec into the Ceph toolbox deployment
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
# Ceph status commands
ceph status
ceph osd status
ceph df
rados df
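A few more toolbox commands that are often useful when chasing warnings like the clock skew above (standard ceph CLI, not specific to this setup):
ceph health detail
ceph osd tree
ceph osd pool ls detail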
5. References
- https://rook.io/docs/rook/v1.8/ceph-block.html
- Rook Ceph Documentation