The problem

After installing rook-ceph, I used the toolbox pod to check the cluster status and ran into the following problem:

(Screenshots: the ceph status check run from the toolbox pod; the full output is reproduced in the Solution section below.)
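
For reference, the $toolbox variable used in the command logs below just holds the name of the Rook toolbox pod. A minimal sketch of how to set it, assuming the default rook-ceph-tools deployment and its app=rook-ceph-tools label:

# Look up the toolbox pod created by the rook-ceph-tools deployment
toolbox=$(kubectl -n rook-ceph get pods -l app=rook-ceph-tools \
  -o jsonpath='{.items[0].metadata.name}')
# Open an interactive shell inside it
kubectl -n rook-ceph exec -it $toolbox -- sh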

Thought process

Judging from the error messages, this looked like clock skew between the servers.
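
To confirm the suspicion before changing anything, the skew can be checked from the toolbox shell, and the node clocks on the hosts. A quick sketch, assuming standard Ceph commands and systemd-based nodes:

# From the toolbox shell: ask the monitors for their measured clock skew
ceph time-sync-status
# On each Kubernetes node: confirm NTP/chrony is actually keeping the clock in sync
timedatectl status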

I searched around online; most of the articles Baidu turned up make the change like this:

(Screenshot: the configuration change suggested by those articles.)

But our cluster is actually managed through rook-ceph, which made me think of the Kubernetes ConfigMap that Rook uses for configuration overrides.

I then checked the official Rook documentation, and it matched what I had in mind: extra Ceph settings go into the rook-config-override ConfigMap in the rook-ceph namespace, and the docs describe how to edit it to apply them to the cluster.
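
The override itself is just a ConfigMap whose config key carries extra ceph.conf content. As a sketch, a declarative equivalent of the interactive edit shown later would look roughly like this (it targets the same rook-config-override object):

# Create or update the override ConfigMap declaratively
cat <<EOF | kubectl -n rook-ceph apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-config-override
  namespace: rook-ceph
data:
  config: |
    [global]
    mon clock drift allowed = 0.5
EOF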

Final result

After the change, the cluster reports HEALTH_OK again (the full command log is in the next section).

Solution

[root@kv-master-00 ~]# kubectl -n rook-ceph exec -it $toolbox sh
sh-4.2# ceph status
  cluster:
    id:     5a0bbe74-ce42-4f49-813d-7c434af65aad
    health: HEALTH_WARN
            clock skew detected on mon.c

  services:
    mon: 3 daemons, quorum a,b,c (age 3m)
    mgr: a(active, since 2m)
    osd: 4 osds: 4 up (since 105s), 4 in (since 105s)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   4.0 GiB used, 72 GiB / 76 GiB avail
    pgs:

[root@kv-master-00 ~]# kubectl -n rook-ceph edit ConfigMap rook-config-override -o yaml
config: |
    [global]
    mon clock drift allowed = 0.5
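
If you prefer not to go through the interactive editor, the same change can be made non-interactively; a sketch using kubectl patch, equivalent to the edit above:

# Merge the [global] override into the config key without opening an editor
kubectl -n rook-ceph patch configmap rook-config-override --type merge \
  -p '{"data":{"config":"[global]\nmon clock drift allowed = 0.5\n"}}'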

[root@kv-master-00 ~]# kubectl -n rook-ceph get ConfigMap rook-config-override -o yaml
apiVersion: v1
data:
  config: |
    [global]
    mon clock drift allowed = 0.5
kind: ConfigMap
metadata:
  creationTimestamp: "2019-10-18T14:08:39Z"
  name: rook-config-override
  namespace: rook-ceph
  ownerReferences:
  - apiVersion: ceph.rook.io/v1
    blockOwnerDeletion: true
    kind: CephCluster
    name: rook-ceph
    uid: d0bd3351-e630-44af-b981-550e8a2a50ec
  resourceVersion: "12831"
  selfLink: /api/v1/namespaces/rook-ceph/configmaps/rook-config-override
  uid: bdf1f1fb-967a-410b-a2bd-b4067ce005d2

[root@kv-master-00 ~]# kubectl -n rook-ceph delete pod $(kubectl -n rook-ceph get pods -o custom-columns=NAME:.metadata.name --no-headers| grep mon)
pod "rook-ceph-mon-a-8565577958-xtznq" deleted
pod "rook-ceph-mon-b-79b696df8d-qdcpw" deleted
pod "rook-ceph-mon-c-5df78f7f96-dr2jn" deleted

[root@kv-master-00 ~]# kubectl -n rook-ceph exec -it $toolbox sh
sh-4.2# ceph status
  cluster:
    id:     5a0bbe74-ce42-4f49-813d-7c434af65aad
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 43s)
    mgr: a(active, since 9m)
    osd: 4 osds: 4 up (since 8m), 4 in (since 8m)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   4.0 GiB used, 72 GiB / 76 GiB avail
    pgs:

Latest approach

Since the rook-ceph installation already creates this ConfigMap, you can make the change directly from the Kubernetes dashboard by putting the following into its config key:

[global]
# Default of 0.05 is too aggressive for my cluster. (seconds)
mon clock drift allowed = 0.1
# K8s image-gc-low-threshold is 80% - not much point warning before that point. (percent)
mon data avail warn = 20
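
As with the kubectl route, the daemons only pick this file up when they start, so after saving the ConfigMap in the dashboard the monitor pods still need to be restarted; one way, assuming the mon deployment names seen in the pod list above:

# Restart the monitor deployments so they regenerate ceph.conf with the new override
kubectl -n rook-ceph rollout restart deploy/rook-ceph-mon-a deploy/rook-ceph-mon-b deploy/rook-ceph-mon-c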

References
https://github.com/rook/rook/blob/master/Documentation/ceph-advanced-configuration.md

https://kubevirt.io/2019/KubeVirt_storage_rook_ceph.html