I. Environment

(1) The official Ceph documentation requires at least one mon and at least two OSDs (although the default Ceph configuration assumes three OSDs), so this experiment uses three virtual machines (ubuntu-14.04) and deploys 3 mons and 3 OSDs.

(2) The IP addresses of the three virtual machines are 192.168.110.157 (hostname: docker1), 192.168.110.147 (hostname: docker2), and 192.168.110.158 (hostname: docker3).

(3) To set up the local private registry, see my earlier blog post.

(4) Each of the three virtual machines has two disks, one of which is dedicated to the OSD. The disk /dev/sdb must be mounted at the /mnt/sdb directory; for the detailed steps see http://blog.csdn.net/xuguokun1986/article/details/53886701
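As a quick reference, a minimal sketch of that mount step, assuming /dev/sdb is formatted as a single ext4 filesystem (the referenced post may use a different layout):

# format the spare disk (this destroys any data on /dev/sdb)
mkfs.ext4 /dev/sdb
# create the mount point and mount the disk
mkdir -p /mnt/sdb
mount /dev/sdb /mnt/sdb
# make the mount persistent across reboots
echo '/dev/sdb /mnt/sdb ext4 defaults 0 0' >> /etc/fstab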

(5) Docker must be installed on all three virtual machines; this experiment uses the latest version. See the official Docker documentation for installation details.
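For reference, one common shortcut on Ubuntu 14.04 is Docker's convenience script (an assumption here; follow the official instructions if you prefer a package-based install):

curl -fsSL https://get.docker.com | sh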

II. Experiment Steps

1. Pull the ceph/daemon image: docker pull ceph/daemon

2. Tag the ceph/daemon image and push it to the local registry with the following commands:

docker tag ceph/daemon docker1:5000/daemon
docker push  docker1:5000/daemon
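To confirm that the image reached the registry, you can query the registry catalog API (this assumes the registry from the earlier post listens on docker1:5000 over plain HTTP):

curl http://docker1:5000/v2/_catalog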

3. Write the mon startup scripts

(1) The mon startup script on the 157 virtual machine:

docker run -d \
     --net=host \
     -v /etc/ceph:/etc/ceph \
     -v /var/lib/ceph/:/var/lib/ceph/ \
     -e MON_IP=192.168.110.157 \
     -e CEPH_PUBLIC_NETWORK=192.168.110.0/24 \
     docker1:5000/daemon mon
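A quick way to confirm the mon container came up (just a sanity check; the container ID will differ in your environment):

docker ps
docker logs $(docker ps -lq)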


(2) After the mon on 157 has started, its generated configuration files must be copied to the corresponding directories on 147 and 158, as follows:

First create the /var/lib/ceph directory on docker2 and docker3, as shown below.
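That is, run on each of the two hosts:

mkdir -p /var/lib/ceph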

Then run the following commands on 157 to copy the configuration files into place:

scp -r /etc/ceph root@docker2:/etc
scp -r /var/lib/ceph/bootstrap* root@docker2:/var/lib/ceph

scp -r /etc/ceph root@docker3:/etc
scp -r /var/lib/ceph/bootstrap* root@docker3:/var/lib/ceph


Note: by default, root is not allowed to log in over SSH with a password, so the scp commands above will fail. Adjust the SSH configuration on the three virtual machines as follows:

In /etc/ssh/sshd_config, comment out the line PermitRootLogin without-password and add PermitRootLogin yes below it, so that the configuration reads:

#PermitRootLogin without-password
PermitRootLogin yes
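The change only takes effect after the SSH service is restarted (and root needs a password set, e.g. with passwd, if it does not already have one):

service ssh restart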



(3) The mon startup script on the 147 virtual machine:
docker run -d \
     --net=host \
     -v /etc/ceph:/etc/ceph \
     -v /var/lib/ceph/:/var/lib/ceph/ \
     -e MON_IP=192.168.110.147 \
     -e CEPH_PUBLIC_NETWORK=192.168.110.0/24 \
     docker1:5000/daemon mon

(4) The mon startup script on the 158 virtual machine:

docker run -d \
     --net=host \
     -v /etc/ceph:/etc/ceph \
     -v /var/lib/ceph/:/var/lib/ceph/ \
     -e MON_IP=192.168.110.158 \
     -e CEPH_PUBLIC_NETWORK=192.168.110.0/24 \
     docker1:5000/daemon mon


(5) Check the mon status on 157:

root@docker1:/home/docker/xu/ceph# docker exec 23f9 ceph -s
    cluster 86960efb-531f-498a-b540-c9306df1639e
     health HEALTH_ERR
            clock skew detected on mon.docker1, mon.docker3
            64 pgs are stuck inactive for more than 300 seconds
            64 pgs stuck inactive
            no osds
            Monitor clock skew detected 
     monmap e3: 3 mons at {docker1=192.168.110.157:6789/0,docker2=192.168.110.147:6789/0,docker3=192.168.110.158:6789/0}
            election epoch 6, quorum 0,1,2 docker2,docker1,docker3
     osdmap e1: 0 osds: 0 up, 0 in
            flags sortbitwise
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating
This shows that the mons on docker1, docker2, and docker3 have all started normally. Here 23f9 is the first four characters of the mon container's ID and will differ from one environment to another.
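The HEALTH_ERR above comes from the monitor clock skew (and from the fact that no OSDs exist yet). Assuming the virtual machines can reach a public NTP server, one simple way to bring the clocks back in line is to sync each host, for example:

apt-get install -y ntpdate
ntpdate ntp.ubuntu.com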

4. Write the OSD startup scripts

(1) The OSD startup script on 157:

docker run -d \
    --net=host \
    --name osd1 \
    -v /etc/ceph:/etc/ceph \
    -v /var/lib/ceph/:/var/lib/ceph/ \
    -v /dev/:/dev/ \
    -v /mnt/sdb:/var/lib/ceph/osd \
    --privileged=true \
    docker1:5000/daemon osd_directory

(2) The OSD startup script on 147:

docker run -d \
    --net=host \
    --name osd1 \
    -v /etc/ceph:/etc/ceph \
    -v /var/lib/ceph/:/var/lib/ceph/ \
    -v /dev/:/dev/ \
    -v /mnt/sdb:/var/lib/ceph/osd \
    --privileged=true \
    docker1:5000/daemon osd_directory

(3) The OSD startup script on 158:

docker run -d \
    --net=host \
    --name osd1 \
    -v /etc/ceph:/etc/ceph \
    -v /var/lib/ceph/:/var/lib/ceph/ \
    -v /dev/:/dev/ \
    -v /mnt/sdb:/var/lib/ceph/osd \
    --privileged=true \
    docker1:5000/daemon osd_directory
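Before checking the cluster status, it is worth making sure the OSD container on each host is actually running and looking at its log (the container is named osd1 on every host, so the same commands work on all three):

docker ps
docker logs osd1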

(4) Check the OSD status (for example, run the following on 157):

root@docker1:/home/docker/xu/ceph# docker exec 23f9 ceph -s
    cluster 86960efb-531f-498a-b540-c9306df1639e
     health HEALTH_ERR
            clock skew detected on mon.docker1, mon.docker3
            5 pgs are stuck inactive for more than 300 seconds
            64 pgs degraded
            5 pgs stuck inactive
            59 pgs stuck unclean
            64 pgs undersized
            Monitor clock skew detected 
     monmap e3: 3 mons at {docker1=192.168.110.157:6789/0,docker2=192.168.110.147:6789/0,docker3=192.168.110.158:6789/0}
            election epoch 6, quorum 0,1,2 docker2,docker1,docker3
     osdmap e11: 3 osds: 2 up, 2 in; 7 remapped pgs
            flags sortbitwise
      pgmap v18: 64 pgs, 1 pools, 0 bytes data, 0 objects
            266 MB used, 20193 MB / 20460 MB avail
                  59 active+undersized+degraded
                   5 creating+activating+undersized+degraded

root@docker1:/home/docker/xu/ceph# docker exec 23f9 ceph osd tree
ID WEIGHT  TYPE NAME        UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 0.02998 root default                                       
-2 0.00999     host docker1                                   
 0 0.00999         osd.0         up  1.00000          1.00000 
-3 0.00999     host docker2                                   
 1 0.00999         osd.1         up  1.00000          1.00000 
-4 0.00999     host docker3                                   
 2 0.00999         osd.2         up  1.00000          1.00000 

The output above shows that the OSDs on all three machines have started normally.
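As an optional extra check, you can write and read back a test object through the mon container (the pool name test and the input file here are arbitrary examples):

docker exec 23f9 ceph osd pool create test 64
docker exec 23f9 rados -p test put hello /etc/hosts
docker exec 23f9 rados -p test ls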

5. End of the experiment
