Dispersed volume

1. Volume info

# gluster volume info  dis-vol 
 
Volume Name: dis-vol
Type: Disperse
Volume ID: 3ea14b87-eec0-40ab-b81a-2fb519356cde
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: master1:/k8s-glusterfs/lv1/brick1
Brick2: master2:/k8s-glusterfs/lv1/brick1
Brick3: node1:/k8s-glusterfs/lv1/brick1
Brick4: node2:/k8s-glusterfs/lv1/brick1
Brick5: node1:/k8s-glusterfs/lv2/brick2
Brick6: node2:/k8s-glusterfs/lv2/brick2
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
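The create command is not shown in the article; given the "1 x (4 + 2) = 6" layout and the brick list above, the volume was presumably created with something like this (a sketch, not the author's exact command; `force` may be needed since some bricks share hosts):

```shell
# Hypothetical reconstruction: one dispersed set of 6 bricks,
# 4 data + 2 redundancy, matching "Number of Bricks: 1 x (4 + 2) = 6"
gluster volume create dis-vol disperse 6 redundancy 2 \
    master1:/k8s-glusterfs/lv1/brick1 \
    master2:/k8s-glusterfs/lv1/brick1 \
    node1:/k8s-glusterfs/lv1/brick1 \
    node2:/k8s-glusterfs/lv1/brick1 \
    node1:/k8s-glusterfs/lv2/brick2 \
    node2:/k8s-glusterfs/lv2/brick2
gluster volume start dis-vol
```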

2. Write tests

1) Create 1000 files; checking shows that all 1000 files are written to every brick.

# time for i in `seq 1 1000`;do touch file_$i;done

real	0m21.623s
user	0m0.675s
sys	0m1.681s

2) Write a ~400 MB file with block sizes of 4K, 64K, 500K, and 1M:

# time  dd if=/dev/zero of=/mnt/glusterfs/io  bs=4k count=100000
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 41.0785 s, 10.0 MB/s

real	0m41.418s
user	0m0.255s
sys	0m6.231s

# time  dd if=/dev/zero of=/mnt/glusterfs/io  bs=64k count=6400
6400+0 records in
6400+0 records out
419430400 bytes (419 MB) copied, 4.93291 s, 85.0 MB/s

real	0m4.986s
user	0m0.011s
sys	0m0.829s

# time  dd if=/dev/zero of=/mnt/glusterfs/io  bs=500k count=800
800+0 records in
800+0 records out
409600000 bytes (410 MB) copied, 2.88555 s, 142 MB/s

real	0m2.980s
user	0m0.007s
sys	0m0.481s

# time  dd if=/dev/zero of=/mnt/glusterfs/io  bs=1M count=400
400+0 records in
400+0 records out
419430400 bytes (419 MB) copied, 2.90042 s, 145 MB/s

real	0m3.457s
user	0m0.002s
sys	0m0.727s
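The four dd runs above can be wrapped in one loop. A minimal sketch; `TARGET` is an assumption of mine and defaults to a throwaway temp file so the script is safe to dry-run anywhere, while `TARGET=/mnt/glusterfs/io` reproduces the runs above:

```shell
#!/bin/sh
# Write-benchmark loop over the block sizes used in the tests above.
# TARGET defaults to a temp file (safe dry run); point it at the
# GlusterFS mount, e.g. TARGET=/mnt/glusterfs/io, for a real test.
TARGET=${TARGET:-$(mktemp)}
for spec in 4k:100000 64k:6400 500k:800 1M:400; do
    bs=${spec%:*}
    count=${spec#*:}
    # GNU dd prints its summary (bytes, seconds, MB/s) on stderr;
    # keep only that "copied" line for a compact report
    dd if=/dev/zero of="$TARGET" bs="$bs" count="$count" 2>&1 | grep copied
done
```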

3. Read tests

Read the ~400 MB file with block sizes of 4K, 64K, 500K, and 1M (running sync && echo 3 > /proc/sys/vm/drop_caches before each run):

# time dd if=/mnt/glusterfs/io of=/dev/null bs=4k count=100000
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 9.39263 s, 43.6 MB/s

real	0m9.478s
user	0m0.234s
sys	0m0.730s

# time dd if=/mnt/glusterfs/io of=/dev/null bs=64k count=6400
6400+0 records in
6400+0 records out
419430400 bytes (419 MB) copied, 9.0409 s, 46.4 MB/s

real	0m9.061s
user	0m0.024s
sys	0m0.395s

# time dd if=/mnt/glusterfs/io of=/dev/null bs=500k count=800
800+0 records in
800+0 records out
409600000 bytes (410 MB) copied, 9.43864 s, 43.4 MB/s

real	0m9.475s
user	0m0.004s
sys	0m0.352s

# time dd if=/mnt/glusterfs/io of=/dev/null bs=1M count=400
400+0 records in
400+0 records out
419430400 bytes (419 MB) copied, 9.53796 s, 44.0 MB/s

real	0m9.583s
user	0m0.002s
sys	0m0.387s
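The MB/s figures dd prints are decimal (bytes / seconds / 10^6), so the reported rates can be checked by hand; e.g. for the bs=4k read above:

```shell
# Recompute dd's throughput figure for the bs=4k read: bytes / seconds / 1e6
awk 'BEGIN { printf "%.1f MB/s\n", 409600000 / 9.39263 / 1e6 }'
# prints "43.6 MB/s", matching dd's own report
```

Note that all four read rates sit near 44 MB/s regardless of block size, unlike the writes.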

Distributed-replicate volume

1. Volume info

# gluster volume info disrep 
 
Volume Name: disrep
Type: Distributed-Replicate
Volume ID: 5eb3ff05-7f61-4b0f-8ce8-9db9e93c56ff
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: master1:/k8s-glusterfs/lv1/brick1
Brick2: master2:/k8s-glusterfs/lv1/brick1
Brick3: node1:/k8s-glusterfs/lv1/brick1
Brick4: node2:/k8s-glusterfs/lv1/brick1
Brick5: node1:/k8s-glusterfs/lv2/brick2
Brick6: node2:/k8s-glusterfs/lv2/brick2
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
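Again the create command is not shown; the "2 x 3 = 6" layout suggests something like the following (a reconstruction under that assumption, with bricks 1-3 and 4-6 forming the two replica sets; `force` may be needed for co-located bricks):

```shell
# Hypothetical reconstruction: two replica-3 subvolumes over 6 bricks,
# matching "Number of Bricks: 2 x 3 = 6"
gluster volume create disrep replica 3 \
    master1:/k8s-glusterfs/lv1/brick1 \
    master2:/k8s-glusterfs/lv1/brick1 \
    node1:/k8s-glusterfs/lv1/brick1 \
    node2:/k8s-glusterfs/lv1/brick1 \
    node1:/k8s-glusterfs/lv2/brick2 \
    node2:/k8s-glusterfs/lv2/brick2
gluster volume start disrep
```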

2. Write tests

1) Create 1000 files; the 1000 files are distributed across the various bricks.

# time for i in `seq 1 1000`;do touch file_$i;done

real	0m27.869s
user	0m0.808s
sys	0m2.323s
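Per-file creation cost can be read straight off these wall-clock times; a rough comparison with the dispersed volume's run above (21.623 s vs 27.869 s, both for 1000 files):

```shell
# Average file-creation (touch) latency, from the two timings in this article:
# seconds * 1000 ms / 1000 files
awk 'BEGIN {
    printf "dispersed:       %.1f ms/file\n", 21.623 * 1000 / 1000
    printf "dist-replicate:  %.1f ms/file\n", 27.869 * 1000 / 1000
}'
# prints 21.6 and 27.9 ms/file respectively
```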

2) Write a ~400 MB file with block sizes of 4K, 64K, 500K, and 1M:

# time  dd if=/dev/zero of=/mnt/glusterfs/io  bs=4k count=100000
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 26.8254 s, 15.3 MB/s

real	0m26.854s
user	0m0.398s
sys	0m5.090s

# time  dd if=/dev/zero of=/mnt/glusterfs/io  bs=64k count=6400
6400+0 records in
6400+0 records out
419430400 bytes (419 MB) copied, 3.79212 s, 111 MB/s

real	0m3.807s
user	0m0.210s
sys	0m0.525s

# time  dd if=/dev/zero of=/mnt/glusterfs/io  bs=500k count=800
800+0 records in
800+0 records out
409600000 bytes (410 MB) copied, 3.53517 s, 116 MB/s

real	0m3.651s
user	0m0.014s
sys	0m0.368s


# time  dd if=/dev/zero of=/mnt/glusterfs/io  bs=1M count=400
400+0 records in
400+0 records out
419430400 bytes (419 MB) copied, 3.74281 s, 112 MB/s

real	0m3.845s
user	0m0.008s
sys	0m0.398s

3. Read tests

Read the ~400 MB file with block sizes of 4K, 64K, 500K, and 1M (running sync && echo 3 > /proc/sys/vm/drop_caches before each run):

During the test it was found that the file io being read was 2 GB in size, and the entire file resides under a single brick on one GlusterFS node.

# time dd if=/mnt/glusterfs/io of=/dev/null bs=4k count=100000
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 2.9424 s, 139 MB/s

real	0m2.957s
user	0m0.252s
sys	0m0.634s

# time dd if=/mnt/glusterfs/io of=/dev/null bs=64k count=6400
6400+0 records in
6400+0 records out
419430400 bytes (419 MB) copied, 2.8611 s, 147 MB/s

real	0m2.887s
user	0m0.025s
sys	0m0.404s

# time dd if=/mnt/glusterfs/io of=/dev/null bs=500k count=800
800+0 records in
800+0 records out
409600000 bytes (410 MB) copied, 2.72999 s, 150 MB/s

real	0m2.753s
user	0m0.000s
sys	0m0.345s

# time dd if=/mnt/glusterfs/io of=/dev/null bs=1M count=400
400+0 records in
400+0 records out
419430400 bytes (419 MB) copied, 2.81897 s, 149 MB/s

real	0m2.847s
user	0m0.000s
sys	0m0.361s

After splitting a 10 GB file into 1 GB chunks placed across the various GlusterFS bricks, test read performance:

# du -sh  /mnt/glusterfs/*
1.0G	/mnt/glusterfs/xaa
1.0G	/mnt/glusterfs/xab
1.0G	/mnt/glusterfs/xac
1.0G	/mnt/glusterfs/xad
1.0G	/mnt/glusterfs/xae
1.0G	/mnt/glusterfs/xaf
1.0G	/mnt/glusterfs/xag
1.0G	/mnt/glusterfs/xah
1.0G	/mnt/glusterfs/xai
784M	/mnt/glusterfs/xaj

# time cat * > /dev/null 

real	2m4.696s
user	0m0.174s
sys	0m8.356s
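Taking du's rounded sizes at face value (9 x 1 GiB + 784 MiB, roughly 9.8 GiB total), the aggregate sequential read rate for this run works out to about 84 MB/s:

```shell
# Approximate aggregate throughput of the "cat * > /dev/null" run above,
# using du's rounded chunk sizes and the 124.696 s wall-clock time
awk 'BEGIN {
    bytes = 9 * 1073741824 + 784 * 1048576
    printf "%.1f MB/s\n", bytes / 124.696 / 1e6
}'
# prints "84.1 MB/s"
```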
