k8s-ceph-statefulsets-storageclass-nfs: Stateful Applications with Dynamic Volumes in Practice
Copyright 2017-05-22 xiaogang(172826370@qq.com)
Most articles on this topic simply copy the official documentation and have little practical value, which misleads a lot of readers, so I put this guide together. NFS is comparatively simple and usually already installed, so the NFS material comes first; the Ceph RBD dynamic volume guide follows, together with several Redis and MySQL master/slave examples.
Storage is a key concern for stateful containers, and Kubernetes provides strong support for managing it. Kubernetes' dynamic volume provisioning creates storage volumes on demand. Before this feature existed, a cluster administrator had to contact the cloud or storage provider to request a new volume and then create a PersistentVolume object to make it visible to Kubernetes. Dynamic provisioning automates both steps, so administrators no longer need to pre-provision storage. Storage is provisioned according to what a StorageClass defines. A StorageClass is an abstraction over the underlying storage and carries storage-related parameters, such as the disk type (standard or SSD).
StorageClass provisioners give Kubernetes access to specific physical or cloud storage back ends. Several provisioners ship out of the box, and more are available in the Kubernetes incubator.
In Kubernetes 1.6, dynamic volume provisioning was promoted to stable (it entered beta in 1.4). This is an important step in automating storage in Kubernetes: administrators control how storage is provisioned, and users can focus on their applications. Beyond these benefits, there are some user-facing changes to be aware of before upgrading to Kubernetes 1.6.
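As a minimal sketch of the user-facing side (the names here are illustrative, not part of this walkthrough): with dynamic provisioning, a user only submits a PersistentVolumeClaim that references a StorageClass, and the matching provisioner creates the PV behind the scenes:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-claim                # hypothetical PVC name
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"   # on 1.6+ spec.storageClassName can be used instead
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 1Gi
Running kubectl create -f on this claim is enough; no PersistentVolume object has to be written by hand.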
Stateful applications
Normally an nginx or other web server (unlike MySQL) does not keep its own data; for a web server, data lives on dedicated persistence nodes, so those web nodes can be scaled up or down freely just by changing the replica count. Many stateful programs, however, are deployed as clusters: the nodes form a group, and each member needs a unique ID (for example a Kafka broker.id or a ZooKeeper myid) that identifies it inside the cluster and is used for intra-cluster communication. Traditionally, administrators deploy such programs onto stable, long-lived machines with persistent storage and static IP addresses, which couples an application instance to a specific piece of infrastructure such as a particular machine or IP address. The goal of StatefulSets in Kubernetes is to break this coupling by giving each application instance an identity that does not depend on the underlying infrastructure (consumers find a particular member via a DNS name instead of a static IP).
StatefulSet
Prerequisites
Prerequisites for using StatefulSets:
Kubernetes cluster version >= 1.5
The DNS cluster add-on installed, version >= 15
Features
Why is a StatefulSet (called PetSet before 1.5) a good fit for stateful programs? Compared with a Deployment it offers:
A stable, unique network identity that can be used to discover the other members of the cluster. For example, if the StatefulSet is named kafka, the first Pod is kafka-0, the second kafka-1, and so on (likewise mysql-0, mysql-1, ...).
Stable persistent storage, backed by Kubernetes PV/PVC or pre-provisioned external storage.
Ordered startup and shutdown: graceful deployment and scaling, so Pod n is only acted on once Pods 0..n-1 are running and ready; and ordered, graceful deletion and termination, from n down through n-1, ..., 1, 0.
"Stable" here means the Pod keeps the same storage, DNS name and hostname across reschedulings; those are bound to the Pod, not to whichever node it happens to land on.
So applications that need stable cluster membership, such as ZooKeeper, etcd or Elasticsearch, are good candidates for StatefulSets. Querying the A records of the headless Service returns the DNS names of the cluster members.
Limitations
StatefulSets also have some limitations:
Pod storage must be provided either by a PersistentVolume provisioner through a StorageClass, or pre-provisioned by an administrator as external storage.
Deleting or scaling down a StatefulSet does not delete its associated volumes; this is deliberate, to keep the data safe.
A StatefulSet currently requires a headless Service to generate the Pods' unique network identities, and you must create that Service yourself.
Upgrading a StatefulSet is a manual process.
Headless Service
To make a Service headless, set spec.clusterIP: None in the Service definition. Unlike a normal Service, a headless Service has no ClusterIP (and therefore no load balancing); instead it gives every member of the cluster a unique DNS name to use as its network identity, and members talk to each other through those names. The domain managed by a headless Service has the form $(service_name).$(k8s_namespace).svc.cluster.local, where "cluster.local" is the cluster domain (the default unless configured otherwise). Each Pod created by the StatefulSet gets a DNS subdomain of the form
$(podname).$(governing_service_domain), where governing_service_domain is determined by the serviceName field of the StatefulSet. For example, if the headless Service for kafka manages kafka.test.svc.cluster.local, the Pods get subdomains such as kafka-1.kafka.test.svc.cluster.local.
Note that all the names mentioned here are cluster-internal names managed by the kube-dns component, and they can be queried with a command such as the following:
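For example, assuming kube-dns is running and the Pod image ships nslookup, the records from the kafka example above could be checked like this:
# A records of all members behind the headless service
kubectl exec -it kafka-0 -- nslookup kafka.test.svc.cluster.local
# the stable name of one particular member
kubectl exec -it kafka-0 -- nslookup kafka-1.kafka.test.svc.cluster.local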
1. nfs-client StorageClass dynamic volumes
Configure the export permissions on the NFS server machine (cat /etc/exports):
/data/nfs-storage/k8s-storage/ssd *(rw,insecure,sync,no_subtree_check,no_root_squash)
Note: nfs-utils must be installed on every k8s node: yum install nfs-utils.x86_64 -y
Pull the nfs-client provisioner image:
docker pull quay.io/external_storage/nfs-client-provisioner:v2.0.0
docker tag quay.io/external_storage/nfs-client-provisioner:v2.0.0 192.168.1.103/k8s_public/nfs-client-provisioner:v2.0.0
docker push 192.168.1.103/k8s_public/nfs-client-provisioner:v2.0.0
Note: RBAC is now widely used. The earlier version of this document omitted RBAC authorization; since the provisioner needs to talk to the kube-apiserver, the RBAC YAML is included here.
[root@master1 nfs-client]# cat serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
[root@master1 nfs-client]# cat clusterrole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
[root@master1 nfs-client]# cat clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
[root@master1 nfs-client]#
Deploy the provisioner. It mounts the NFS export and carves PVs out of it on behalf of the StorageClass.
cat deployment-nfs.yaml
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry-k8s.novalocal/k8s_system/nfs-client-provisioner:v2.0.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.0.29
            - name: NFS_PATH
              value: /data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.0.29
            path: /data   # the NFS export path; adjust to your environment
[root@master3 deploy]# kubectl create -f deployment-nfs.yaml -f clusterrolebinding.yaml -f clusterrole.yaml -f serviceaccount.yaml
kubectl get pod
nfs-client-provisioner-4163627910-fn70d 1/1 Running 0 1m
Deploy the StorageClass (storageclass.yaml):
[root@master3 deploy]# cat nfs-class.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs   # must match the PROVISIONER_NAME env set in the nfs-client-provisioner Deployment (you may choose another name, but keep them consistent)
[root@master3 deploy]# kubectl create -f nfs-class.yaml
[root@master3 deploy]# kubectl get storageclass
NAME TYPE
ceph-web kubernetes.io/rbd
managed-nfs-storage fuseim.pri/ifs
Create a StatefulSet whose volumeClaimTemplates reference the StorageClass:
[root@master3 stateful-set]# cat nginx.yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx1"
  replicas: 2
  volumeClaimTemplates:
    - metadata:
        name: test
        annotations:
          volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"   # reference the StorageClass name here
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 2Gi
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
        - name: nginx1
          image: 192.168.1.103/k8s_public/nginx:latest
          volumeMounts:
            - mountPath: "/mnt"
              name: test
      imagePullSecrets:
        - name: "registrykey"   # secret used to pull from the local private registry
Verify that the PV and PVC were created automatically:
[root@master3 stateful-set]# kubectl get pv |grep web
default-test-web-0-pvc-6b82cdd6-3ed4-11e7-9818-525400c2bc59 2Gi RWO Delete Bound default/test-web-0 1m
default-test-web-1-pvc-6bbec6a0-3ed4-11e7-9818-525400c2bc59 2Gi RWO Delete Bound default/test-web-1 1m
[root@master3 stateful-set]# kubectl get pvc |grep web
test-web-0 Bound default-test-web-0-pvc-6b82cdd6-3ed4-11e7-9818-525400c2bc59 2Gi RWO 1m
test-web-1 Bound default-test-web-1-pvc-6bbec6a0-3ed4-11e7-9818-525400c2bc59 2Gi RWO 1m
[root@master3 stateful-set]# kubectl get storageclass |grep web
ceph-web kubernetes.io/rbd
[root@master3 stateful-set]# kubectl get storageclass
NAME TYPE
ceph-web kubernetes.io/rbd
managed-nfs-storage fuseim.pri/ifs
[root@master3 stateful-set]# kubectl get pod |grep web
web-0 1/1 Running 0 2m
web-1 1/1 Running 0 2m
Scale up the Pods:
[root@master3 stateful-set]# kubectl scale statefulset web --replicas=3
[root@master3 stateful-set]# kubectl get pod |grep web
web-0 1/1 Running 0 10m
web-1 1/1 Running 0 10m
web-2 1/1 Running 0 1m
Scale down to 1 Pod:
kubectl scale statefulset web --replicas=1
[root@master3 stateful-set]# kubectl get pod |grep web
web-0 1/1 Running 0 11m
OK, everything was created and the Pods are healthy.
Exec into web-0 to verify the PVC mount directory:
[root@master3 stateful-set]# kubectl exec -it web-0 /bin/bash
root@web-0:/#
root@web-0:/# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-253:0-654996-18a8b448ce9ebf898e46c4468b33093ed9a5f81794d82a271124bcd1eb27a87c 10G 230M 9.8G 3% /
tmpfs 1.6G 0 1.6G 0% /dev
tmpfs 1.6G 0 1.6G 0% /sys/fs/cgroup
192.168.1.103:/data/nfs-storage/k8s-storage/ssd/default-test-web-0-pvc-6b82cdd6-3ed4-11e7-9818-525400c2bc59 189G 76G 104G 43% /mnt
/dev/mapper/centos-root 37G 9.1G 26G 27% /etc/hosts
shm 64M 0 64M 0% /dev/shm
tmpfs 1.6G 12K 1.6G 1% /run/secrets/kubernetes.io/serviceaccount
root@web-0:/#
Now look at the PVC directories on the NFS server:
root@pxt:/data/nfs-storage/k8s-storage/ssd# ll
total 40
drwxr-xr-x 10 root root 4096 May 22 17:53 ./
drwxr-xr-x 7 root root 4096 May 12 17:26 ../
drwxr-xr-x 3 root root 4096 May 16 16:19 default-data-mysql-0-pvc-3954b59e-3a10-11e7-b646-525400c2bc59/
drwxr-xr-x 3 root root 4096 May 16 16:20 default-data-mysql-1-pvc-396bd26f-3a10-11e7-b646-525400c2bc59/
drwxr-xr-x 3 root root 4096 May 16 16:21 default-data-mysql-2-pvc-39958611-3a10-11e7-b646-525400c2bc59/
drwxr-xr-x 2 root root 4096 May 17 17:49 default-redis-primary-volume-redis-primary-0-pvc-bb19aa13-3ad3-11e7-b646-525400c2bc59/
drwxr-xr-x 2 root root 4096 May 17 17:56 default-redis-secondary-volume-redis-secondary-0-pvc-16c8749d-3ae7-11e7-b646-525400c2bc59/
drwxr-xr-x 2 root root 4096 May 17 17:58 default-redis-secondary-volume-redis-secondary-1-pvc-16da7ba5-3ae7-11e7-b646-525400c2bc59/
drwxr-xr-x 2 root root 4096 May 22 17:53 default-test-web-0-pvc-6b82cdd6-3ed4-11e7-9818-525400c2bc59/
drwxr-xr-x 2 root root 4096 May 22 17:53 default-test-web-1-pvc-6bbec6a0-3ed4-11e7-9818-525400c2bc59/
root@pxt:/data/nfs-storage/k8s-storage/ssd# showmount -e
Export list for pxt.docker.agent103:
/data/nfs_ssd *
/data/nfs-storage/k8s-storage/standard *
/data/nfs-storage/k8s-storage/ssd *
/data/nfs-storage/k8s-storage/redis *
/data/nfs-storage/k8s-storage/nginx *
/data/nfs-storage/k8s-storage/mysql *
root@pxt:/data/nfs-storage/k8s-storage/ssd# cat /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)
#/data/nfs-storage/k8s-storage *(rw,insecure,sync,no_subtree_check,no_root_squash)
/data/nfs-storage/k8s-storage/mysql *(rw,insecure,sync,no_subtree_check,no_root_squash)
/data/nfs-storage/k8s-storage/nginx *(rw,insecure,sync,no_subtree_check,no_root_squash)
/data/nfs-storage/k8s-storage/redis *(rw,insecure,sync,no_subtree_check,no_root_squash)
/data/nfs-storage/k8s-storage/ssd *(rw,insecure,sync,no_subtree_check,no_root_squash)
/data/nfs-storage/k8s-storage/standard *(rw,insecure,sync,no_subtree_check,no_root_squash)
/data/nfs_ssd *(rw,insecure,sync,no_subtree_check,no_root_squash)
2. Deploy a scalable MySQL master/slave cluster: MySQL 5.7, one master with multiple slaves. Three YAML files are needed.
Prepare the images. # original image: gcr.io/google-samples/xtrabackup:1.0 — tag it yourself and push it to the private registry
# original image: mysql:5.7
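A sketch of that image preparation, assuming the 192.168.1.103 private registry used throughout this article:
docker pull mysql:5.7
docker pull gcr.io/google-samples/xtrabackup:1.0
docker tag  mysql:5.7                             192.168.1.103/k8s_public/mysql:5.7
docker tag  gcr.io/google-samples/xtrabackup:1.0  192.168.1.103/k8s_public/xtrabackup:1.0
docker push 192.168.1.103/k8s_public/mysql:5.7
docker push 192.168.1.103/k8s_public/xtrabackup:1.0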
mysql-configmap.yaml mysql-services.yaml mysql-statefulset.yaml
[root@master3 setateful-set-mysql]# cat mysql-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  master.cnf: |
    # Apply this config only on the master.
    [mysqld]
    log-bin
  slave.cnf: |
    # Apply this config only on slaves.
    [mysqld]
    super-read-only
[root@master3 setateful-set-mysql]# cat mysql-services.yaml
# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
    - name: mysql
      port: 3306
  clusterIP: None
  selector:
    app: mysql
---
# Client service for connecting to any MySQL instance for reads.
# For writes, you must instead connect to the master: mysql-0.mysql.
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
  labels:
    app: mysql
spec:
  ports:
    - name: mysql
      port: 3306
  selector:
    app: mysql
[root@master3 setateful-set-mysql]# cat mysql-statefulset.yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: mysql
spec:
serviceName: mysql
replicas: 3
template:
metadata:
labels:
app: mysql
annotations:
pod.beta.kubernetes.io/init-containers: '[
{
"name": "init-mysql",
"image": "192.168.1.103/k8s_public/mysql:5.7",
"command": ["bash", "-c", "
set -ex\n
# Generate mysql server-id from pod ordinal index.\n
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1\n
ordinal=${BASH_REMATCH[1]}\n
echo [mysqld] > /mnt/conf.d/server-id.cnf\n
# Add an offset to avoid reserved server-id=0 value.\n
echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf\n
# Copy appropriate conf.d files from config-map to emptyDir.\n
if [[ $ordinal -eq 0 ]]; then\n
cp /mnt/config-map/master.cnf /mnt/conf.d/\n
else\n
cp /mnt/config-map/slave.cnf /mnt/conf.d/\n
fi\n
"],
"volumeMounts": [
{"name": "conf", "mountPath": "/mnt/conf.d"},
{"name": "config-map", "mountPath": "/mnt/config-map"}
]
},
{
"name": "clone-mysql",
"image": "192.168.1.103/k8s_public/xtrabackup:1.0",
"command": ["bash", "-c", "
set -ex\n
# Skip the clone if data already exists.\n
[[ -d /var/lib/mysql/mysql ]] && exit 0\n
# Skip the clone on master (ordinal index 0).\n
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1\n
ordinal=${BASH_REMATCH[1]}\n
[[ $ordinal -eq 0 ]] && exit 0\n
# Clone data from previous peer.\n
ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql\n
# Prepare the backup.\n
xtrabackup --prepare --target-dir=/var/lib/mysql\n
"],
"volumeMounts": [
{"name": "data", "mountPath": "/var/lib/mysql", "subPath": "mysql"},
{"name": "conf", "mountPath": "/etc/mysql/conf.d"}
]
}
]'
spec:
containers:
- name: mysql
image: 192.168.1.103/k8s_public/mysql:5.7
env:
- name: MYSQL_ALLOW_EMPTY_PASSWORD
value: "1"
ports:
- name: mysql
containerPort: 3306
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
resources:
requests:
cpu: 1
memory: 1Gi
#memory: 500Mi
livenessProbe:
exec:
command: ["mysqladmin", "ping"]
initialDelaySeconds: 30
timeoutSeconds: 5
readinessProbe:
exec:
# Check we can execute queries over TCP (skip-networking is off).
command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
initialDelaySeconds: 5
timeoutSeconds: 1
- name: xtrabackup
image: 192.168.1.103/k8s_public/xtrabackup:1.0
ports:
- name: xtrabackup
containerPort: 3307
command:
- bash
- "-c"
- |
set -ex
cd /var/lib/mysql
# Determine binlog position of cloned data, if any.
if [[ -f xtrabackup_slave_info ]]; then
# XtraBackup already generated a partial "CHANGE MASTER TO" query
# because we're cloning from an existing slave.
mv xtrabackup_slave_info change_master_to.sql.in
# Ignore xtrabackup_binlog_info in this case (it's useless).
rm -f xtrabackup_binlog_info
elif [[ -f xtrabackup_binlog_info ]]; then
# We're cloning directly from master. Parse binlog position.
[[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
rm xtrabackup_binlog_info
echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
fi
# Check if we need to complete a clone by starting replication.
if [[ -f change_master_to.sql.in ]]; then
echo "Waiting for mysqld to be ready (accepting connections)"
until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
echo "Initializing replication from clone position"
# In case of container restart, attempt this at-most-once.
mv change_master_to.sql.in change_master_to.sql.orig
mysql -h 127.0.0.1 <<EOF
$(<change_master_to.sql.orig),
MASTER_HOST='mysql-0.mysql',
MASTER_USER='root',
MASTER_PASSWORD='',
MASTER_CONNECT_RETRY=10;
START SLAVE;
EOF
fi
# Start a server to send backups when requested by peers.
exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
"xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
resources:
requests:
cpu: 100m
memory: 100Mi
#nodeSelector:   # uncomment if you want to pin MySQL to nodes labeled zone=mysql
# zone: mysql
volumes:
- name: conf
emptyDir: {}
- name: config-map
configMap:
name: mysql
volumeClaimTemplates:
- metadata:
name: data
annotations:
#volume.alpha.kubernetes.io/storage-class: "managed-nfs-storage" # note: older releases use the alpha annotation, newer ones use beta
volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 10Gi
[root@master3 setateful-set-mysql]# kubectl create -f mysql-configmap.yaml -f mysql-services.yaml -f mysql-statefulset.yaml
[root@master3 setateful-set-mysql]# kubectl get storageclass,pv,pvc,statefulset,pod,service |grep mysql
pv/default-data-mysql-0-pvc-3954b59e-3a10-11e7-b646-525400c2bc59 10Gi RWO Delete Bound default/data-mysql-0 6d
pv/default-data-mysql-1-pvc-396bd26f-3a10-11e7-b646-525400c2bc59 10Gi RWO Delete Bound default/data-mysql-1 6d
pv/default-data-mysql-2-pvc-39958611-3a10-11e7-b646-525400c2bc59 10Gi RWO Delete Bound default/data-mysql-2 6d
pvc/data-mysql-0 Bound default-data-mysql-0-pvc-3954b59e-3a10-11e7-b646-525400c2bc59 10Gi RWO 6d
pvc/data-mysql-1 Bound default-data-mysql-1-pvc-396bd26f-3a10-11e7-b646-525400c2bc59 10Gi RWO 6d
pvc/data-mysql-2 Bound default-data-mysql-2-pvc-39958611-3a10-11e7-b646-525400c2bc59 10Gi RWO 6d
statefulsets/mysql 3 3 5d
po/mysql-0 2/2 Running 0 5d
po/mysql-1 2/2 Running 0 5d
po/mysql-2 2/2 Running 0 5d
svc/mysql None <none> 3306/TCP 6d # within the same namespace these names resolve: ping mysql-0.mysql ; ping mysql-1.mysql
svc/mysql-read 172.1.11.160 <none> 3306/TCP 6d
OK, all Pods are created. Note that the mysql Service has no ClusterIP: that is what a headless Service looks like.
Note: deleting the StatefulSet (kubectl delete -f on its YAML) does not remove the PVs and PVCs; they remain, which keeps the data safe.
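A quick way to verify that replication works (a sketch, assuming the mysql:5.7 image can be pulled for a throwaway client Pod): write through the master's stable name mysql-0.mysql, then read back through the mysql-read Service:
kubectl run mysql-client --image=mysql:5.7 -it --rm --restart=Never -- \
  mysql -h mysql-0.mysql -e "CREATE DATABASE IF NOT EXISTS test; CREATE TABLE IF NOT EXISTS test.messages (message VARCHAR(250)); INSERT INTO test.messages VALUES ('hello')"
kubectl run mysql-client-read --image=mysql:5.7 -it --rm --restart=Never -- \
  mysql -h mysql-read -e "SELECT * FROM test.messages"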
Scale out the MySQL slaves. After scaling you can see that the corresponding PVs and PVCs were created automatically.
kubectl scale --replicas=5 statefulset mysql
kubectl get pod|grep mysql
po/mysql-0 2/2 Running 0 5d
po/mysql-1 2/2 Running 0 5d
po/mysql-2 2/2 Running 0 5d
po/mysql-3 2/2 Running 0 5m
po/mysql-4 2/2 Running 0 5m
Scale in: kubectl scale --replicas=2 statefulset mysql
kubectl get pod|grep mysql
po/mysql-0 2/2 Running 0 5d
po/mysql-1 2/2 Running 0 5d
Next, let's deploy a Redis master/slave cluster with Sentinel-based automatic failover, backed by StorageClass dynamic volumes.
Prepare a directory for building the image.
Three files are needed:
[root@node1 redis-build-cluster]# ls
docker-entrypoint.sh Dockerfile redis.conf
[root@node1 redis-build-cluster]# cat docker-entrypoint.sh
#!/bin/sh
set -exo pipefail
#shopt -s nullglob
REDIS_CONF=${REDIS_CONF:-"/opt/k8s-redis/redis.conf"}
if [ "${1#-}" != "$1" ] || [ "${1%.conf}" != "$1" ]; then
set -- redis-server "$@"
fi
if [ "$1" = 'redis-server' ] && [ -n "$SLAVEOF" ] && [ -z "$SENTINEL" ]; then
echo "Starting Redis replica"
set -- $@ "$REDIS_CONF" --slaveof "$SLAVEOF" 6379
elif [ "$1" = 'redis-server' ] && [ -n "$SENTINEL" ]; then
echo "Starting Redis sentinel"
while true; do
redis-cli -h $SENTINEL INFO
if [[ "$?" == "0" ]]; then
break
fi
echo "Connecting to master failed. Waiting..."
sleep 10
done
echo "sentinel monitor primary $SENTINEL 6379 2" >> "$REDIS_CONF"
echo "sentinel down-after-milliseconds primary 5000" >> "$REDIS_CONF"
echo "sentinel failover-timeout primary 10000" >> "$REDIS_CONF"
echo "sentinel parallel-syncs primary 1" >> "$REDIS_CONF"
set -- $@ "$REDIS_CONF" --port 26379 --sentinel --protected-mode no
elif [ "$1" = 'redis-server' ]; then
echo "Starting Redis master"
set -- $@ "$REDIS_CONF"
fi
exec "$@"
[root@node1 redis-build-cluster]# cat Dockerfile
FROM redis:3.2-alpine
#RUN mkdir -p /opt/k8s-redis
#ENV "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
COPY ["redis.conf", "/opt/k8s-redis/"]
COPY ["docker-entrypoint.sh", "/usr/local/bin/"]
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
#ENTRYPOINT ["docker-entrypoint.sh"]
[root@node1 redis-build-cluster]# cat redis.conf
protected-mode no
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize no
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile ""
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir ./
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
# Build the new image, check that it exists, and push it to the private registry
[root@node1 redis-build-cluster]# docker build -t "192.168.1.103/k8s_public/redis:3.2-alpine" .
[root@node1 redis-build-cluster]# docker images |grep alpine
192.168.1.103/k8s_public/redis 3.2-alpine 07521028e2f3 7 days ago 19.82 MB
redis 3.2-alpine 77799e3e7aea 11 weeks ago 19.82 MB
[root@node1 redis-build-cluster]# docker push 192.168.1.103/k8s_public/redis:3.2-alpine
# Prepare the manifests needed by the cluster
[root@master3 redis]# ls
primary.yml secondary.yml sentinel.yml
[root@master3 redis]# cat primary.yml
apiVersion: v1
kind: Service
metadata:
name: redis-primary
labels:
app: redis-primary
spec:
ports:
- port: 6379
name: redis-primary
clusterIP: None
selector:
app: redis-primary
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: redis-primary
spec:
serviceName: redis-primary
replicas: 1
template:
metadata:
labels:
app: redis-primary
spec:
terminationGracePeriodSeconds: 10
containers:
- name: redis-primary
image: 192.168.1.103/k8s_public/redis:3.2-alpine
imagePullPolicy: Always
ports:
- containerPort: 6379
name: redis-primary
volumeMounts:
- name: redis-primary-volume
mountPath: /data
imagePullSecrets:
- name: "registrykey"
volumeClaimTemplates:
- metadata:
name: redis-primary-volume
annotations:
#volume.alpha.kubernetes.io/storage-class: "managed-nfs-storage" # note: older releases use the alpha annotation; 1.5+ uses beta
volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
[root@master3 redis]# cat secondary.yml
apiVersion: v1
kind: Service
metadata:
name: redis-secondary
labels:
app: redis-secondary
spec:
ports:
- port: 6379
name: redis-secondary
clusterIP: None
selector:
app: redis-secondary
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: redis-secondary
spec:
serviceName: redis-secondary
replicas: 2
template:
metadata:
labels:
app: redis-secondary
spec:
terminationGracePeriodSeconds: 10
containers:
- name: redis-secondary
image: 192.168.1.103/k8s_public/redis:3.2-alpine
imagePullPolicy: IfNotPresent
env:
- name: SLAVEOF
value: redis-primary-0.redis-primary
ports:
- containerPort: 6379
name: redis-secondary
volumeMounts:
- name: redis-secondary-volume
mountPath: /data
imagePullSecrets:
- name: "registrykey"
volumeClaimTemplates:
- metadata:
name: redis-secondary-volume
annotations:
volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
[root@master3 redis]# cat sentinel.yml
apiVersion: v1
kind: Service
metadata:
name: redis-sentinel
labels:
app: redis-sentinel
spec:
ports:
- port: 26379
name: redis-sentinel
selector:
app: redis-sentinel
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: redis-sentinel
spec:
serviceName: redis-sentinel
replicas: 3
template:
metadata:
labels:
app: redis-sentinel
spec:
terminationGracePeriodSeconds: 10
containers:
- name: redis-sentinel
image: 192.168.1.103/k8s_public/redis:3.2-alpine
imagePullPolicy: Always
env:
- name: SENTINEL
value: redis-primary
ports:
- containerPort: 26379
name: redis-sentinel
imagePullSecrets:
- name: "registrykey"
# Create everything in the redis/ directory
[root@master3 redis]# cd ../ ; kubectl create -f redis/
# Check the Pods and the dynamically created PVs/PVCs
[root@master3 redis]# kubectl get statefulset,pv,pvc,pod --namespace=default|egrep 'redis|NAME'
NAME DESIRED CURRENT AGE
statefulsets/redis-primary 1 1 7d
statefulsets/redis-secondary 2 2 7d
statefulsets/redis-sentinel 3 3 7d
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
pvc/redis-primary-volume-redis-primary-0 Lost default-redis-primary-volume-redis-primary-0-pvc-bb19aa13-3ad3-11e7-b646-525400c2bc59 0 7d
pvc/redis-secondary-volume-redis-secondary-0 Lost default-redis-secondary-volume-redis-secondary-0-pvc-16c8749d-3ae7-11e7-b646-525400c2bc59 0 7d
pvc/redis-secondary-volume-redis-secondary-1 Lost default-redis-secondary-volume-redis-secondary-1-pvc-16da7ba5-3ae7-11e7-b646-525400c2bc59 0 7d
NAME READY STATUS RESTARTS AGE
po/redis-primary-0 1/1 Running 0 7d
po/redis-secondary-0 1/1 Running 0 7d
po/redis-secondary-1 1/1 Running 4 7d
po/redis-sentinel-0 1/1 Running 0 7d
po/redis-sentinel-1 1/1 Running 0 7d
po/redis-sentinel-2 1/1 Running 0 7d
[root@master3 redis]#
# Exec into the Pod and write a key: set a 1
[root@master3 redis]# kubectl exec -it redis-primary-0 /bin/sh
/data # redis-cli
127.0.0.1:6379> set a 1
OK
127.0.0.1:6379> exit
# The PVC volume was provisioned and mounted automatically: 192.168.1.103:/data/nfs-storage/k8s-storage/ssd/default-redis-primary-volume-redis-primary-0-pvc-bb19aa13-3ad3-11e7-b646-525400c2bc59
/data # df -h
Filesystem Size Used Available Use% Mounted on
/dev/mapper/docker-253:0-654996-04040b3142bd911bd7954b3830fbb96500cc655a531dfa04577ded801613d559
10.0G 52.2M 9.9G 1% /
tmpfs 1000.1M 0 1000.1M 0% /dev
tmpfs 1000.1M 0 1000.1M 0% /sys/fs/cgroup
192.168.1.103:/data/nfs-storage/k8s-storage/ssd/default-redis-primary-volume-redis-primary-0-pvc-bb19aa13-3ad3-11e7-b646-525400c2bc59
188.9G 76.2G 103.1G 42% /data
/dev/mapper/centos-root
36.8G 21.0G 13.9G 60% /dev/termination-log
/dev/mapper/centos-root
36.8G 21.0G 13.9G 60% /etc/resolv.conf
/dev/mapper/centos-root
36.8G 21.0G 13.9G 60% /etc/hostname
/dev/mapper/centos-root
36.8G 21.0G 13.9G 60% /etc/hosts
shm 64.0M 0 64.0M 0% /dev/shm
/dev/mapper/centos-root
36.8G 21.0G 13.9G 60% /run/secrets
tmpfs 1000.1M 12.0K 1000.1M 0% /var/run/secrets/kubernetes.io/serviceaccount
tmpfs 1000.1M 0 1000.1M 0% /proc/kcore
tmpfs 1000.1M 0 1000.1M 0% /proc/timer_list
tmpfs 1000.1M 0 1000.1M 0% /proc/timer_stats
tmpfs 1000.1M 0 1000.1M 0% /proc/sched_debug
/data #
# exit the Pod
exit
Exec into a replica to verify the key was replicated. The value of key a is there, so the Redis cluster is working.
[root@master3 redis]# kubectl exec -it redis-secondary-0 /bin/sh
/data # redis-cli
127.0.0.1:6379> get a
"1"
127.0.0.1:6379> info replication
# Replication
role:slave
master_host:redis-primary-0.redis-primary
master_port:6379
master_link_status:up # shows the replica is connected to the master
master_last_io_seconds_ago:7
master_sync_in_progress:0
slave_repl_offset:330
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
127.0.0.1:6379>
That wraps up the NFS dynamic volume deployment. Next: deploying a distributed MySQL Galera cluster on Ceph RBD.
Introduction to Galera Cluster
Galera is a synchronous multi-master clustering solution for MySQL (MariaDB and Percona are also supported).
From the user's point of view, a Galera cluster behaves like a single MySQL database with multiple entry points: clients can read and write through any of its IPs at the same time.
Galera is already widely used, for example in OpenStack, and for moderately sized clusters its stability has been proven in practice.
It is true multi-master: every node can read and write the database simultaneously.
Advantages:
No slave lag, because every node is a master
No lost transactions
Both read and write capacity scale out
Lower client latency
Replication between nodes is synchronous, whereas master/slave replication is asynchronous and different slaves' binlogs can diverge
How it works:
Galera replication is implemented by the Galera library; the wsrep API was developed specifically so that MySQL can talk to it.
A node that joins an existing Galera cluster is called the joiner; it requests data from an existing node, called the donor. State transfer is either an IST (incremental state transfer) or an SST (full state transfer). The supported wsrep_sst_method values are mysqldump, rsync and xtrabackup; xtrabackup holds table locks for the shortest time and transfers fastest, so it is usually the preferred choice.
Whichever method is used, tables are briefly locked. If that matters to you, use a dedicated "reference node": a node that is not exposed to users and runs no SQL of its own.
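If you pick xtrabackup as the SST method, the relevant galera.cnf settings look roughly like this (a sketch only; the manifests below use rsync, and wsrep_sst_auth must match an SST database account you create yourself):
[galera]
wsrep_sst_method = xtrabackup-v2
wsrep_sst_auth   = "sstuser:sstpassword"   # hypothetical SST account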
1. Deploy the Ceph server. Here a separate machine runs the Ceph server in Docker.
Pull the image:
docker pull ceph/demo
Start the ceph/demo container # use this host's IP for MON_IP; CEPH_PUBLIC_NETWORK must cover the subnet the k8s nodes are on
docker run -d --net=host -v /etc/ceph:/etc/ceph --name=ceph -e MON_IP=192.168.1.31 -e CEPH_PUBLIC_NETWORK=192.168.0.0/16 ceph/demo
2. Install the ceph-common package (ceph-common-0.94.5-1.el7.x86_64 here) on every k8s master and node. My machines run CentOS 7; pick the right package for your distribution. I push the install to all nodes from the Ansible control host.
If you don't use Ansible, run yum install ceph-common -y on every node by hand.
ansible -m shell -a 'yum install ceph-common -y ' 'masters nodes'
3. Distribute the keyring from the Ceph server to all nodes.
[root@ceph-31 ~]# ls /etc/ceph/ceph.client.admin.keyring
/etc/ceph/ceph.client.admin.keyring
Copy the /etc/ceph/ceph.client.admin.keyring auth file to every k8s node and master:
[root@ceph-31 ~] scp /etc/ceph/ceph.client.admin.keyring 192.168.1.61:/etc/ceph/
[root@ceph-31 ~] scp /etc/ceph/ceph.client.admin.keyring 192.168.1.62:/etc/ceph/
[root@ceph-31 ~] scp /etc/ceph/ceph.client.admin.keyring 192.168.1.63:/etc/ceph/
4. On a k8s master, generate the base64-encoded Ceph key for Kubernetes to use:
[root@master3] grep key /etc/ceph/ceph.client.admin.keyring |awk '{printf "%s", $NF}'|base64
The resulting key QVFCNXpoOVovdWdrSUJBQVBib2dGMnJnZXU5MGd3VGxTNlVTNlE9PQ== goes into the key field of the Secret below; use it when generating the following YAML files.
[root@master3 mariadb-cluster] mkdir -p mariadb-cluster ;cd mariadb-cluster
# Create the namespace
[root@master3 mariadb-cluster]# cat galera-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: galera
[root@master3 mariadb-cluster]# cat secret-ceph.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: galera
type: "kubernetes.io/rbd"
data:
  key: QVFCNXpoOVovdWdrSUJBQVBib2dGMnJnZXU5MGd3VGxTNlVTNlE9PQ==
5. Create the following YAML files in order.
# Create the StorageClass for dynamic volumes
[root@master3 mariadb-cluster]# cat ceph-class.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: ceph-web
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.1.31:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: rbd   # the default rbd pool; for production create a dedicated pool for isolation (see the note after this manifest)
  userId: admin
  userSecretName: ceph-secret
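As noted in the comment above, production clusters should use a dedicated pool rather than the default rbd pool; a sketch, run on the Ceph node (the pool name and PG count are illustrative):
ceph osd pool create kube 64 64
# then set  pool: kube  in the StorageClass above
Note also that the kubernetes.io/rbd provisioner looks up adminSecretName in adminSecretNamespace (kube-system above) and userSecretName in the namespace of each PVC (galera here), so make sure a copy of the ceph-secret Secret exists wherever it is needed.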
# Create the MySQL configuration ConfigMap
[root@master3 mariadb-cluster]# cat mysql-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config-vol
  namespace: galera
  labels:
    app: mysql
data:
  mariadb.cnf: |
    [client]
    default-character-set = utf8
    [mysqld]
    character-set-server = utf8
    collation-server = utf8_general_ci
    # InnoDB optimizations
    innodb_log_file_size = 64M
  galera.cnf: |
    [galera]
    user = mysql
    bind-address = 0.0.0.0
    # Optimizations
    innodb_flush_log_at_trx_commit = 0
    sync_binlog = 0
    expire_logs_days = 7
    # Required settings
    default_storage_engine = InnoDB
    binlog_format = ROW
    innodb_autoinc_lock_mode = 2
    query_cache_size = 0
    query_cache_type = 0
    # MariaDB Galera settings
    #wsrep_debug=ON
    wsrep_on=ON
    wsrep_provider=/usr/lib/galera/libgalera_smm.so
    wsrep_sst_method=rsync
    # Cluster settings (automatically updated)
    wsrep_cluster_address=gcomm://
    wsrep_cluster_name=galera
    wsrep_node_address=127.0.0.1
# Create the MySQL root user and password secrets. # Note: use echo -n when base64-encoding so the trailing newline is stripped, otherwise authentication will fail
echo -n idea77 |base64
aWRlYTc3
echo -n root |base64
cm9vdA==
[root@master3 mariadb-cluster]# cat mysql-secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secrets
  namespace: galera
  labels:
    app: mysql
data:
  # Root password: idea77 (generated with: echo -n idea77 | base64)
  root-password: aWRlYTc3
  # Root user: root
  root-user: cm9vdA==
Create the Docker private-registry secret for this namespace:
[root@master3 mariadb-cluster]# cat system-registry-secret.yaml
apiVersion: v1
data:
  .dockercfg: eyIxOTIuMTY4LjEuMTAzIjp7InVzZXJuYW1lIjoidGVzdDEiLCJwYXNzd29yZCI6IkExMjM0NTY3ODliIiwiZW1haWwiOiJ0ZXN0QHFxLmNvbSIsImF1dGgiOiJkR1Z6ZERFNlFURXlNelExTmpjNE9XST0ifX0=
kind: Secret
metadata:
  name: registrykey
  namespace: galera
type: kubernetes.io/dockercfg
# Create the main StatefulSet manifest. The original images are noted below; pull them yourself and push them to your registry:
# "image": "youdowell/k8s-galera-init:latest"  (original init-container image)
# image: mariadb:10.1  (original MariaDB image)
[root@master3 mariadb-cluster]# cat galera-mariadb.yaml
# MariaDB 10.1 Galera Cluster
#
apiVersion: v1
kind: Service
metadata:
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
name: mysql
namespace: galera
labels:
app: mysql
tier: data
spec:
ports:
- port: 3306
name: mysql
clusterIP: None
selector:
app: mysql
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: mysql
namespace: galera
spec:
serviceName: "mysql"
replicas: 3
template:
metadata:
labels:
app: mysql
tier: data
annotations:
pod.beta.kubernetes.io/init-containers: '[
{
"name": "galera-init",
"image": "192.168.1.103/k8s_public/k8s-galera-init:latest",
"args": ["-service=mysql"],
"env": [
{
"name": "POD_NAMESPACE",
"valueFrom": {
"fieldRef": { "apiVersion": "v1", "fieldPath": "metadata.namespace" }
}
},
{
"name": "SAFE_TO_BOOTSTRAP",
"value": "1"
},
{
"name": "DEBUG",
"value": "1"
}
],
"volumeMounts": [
{
"name": "config",
"mountPath": "/etc/mysql/conf.d"
},
{
"name": "data",
"mountPath": "/var/lib/mysql"
}
]
}
]'
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mysql
image: 192.168.1.103/k8s_public/mariadb:10.1
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3306
name: mysql
- containerPort: 4444
name: sst
- containerPort: 4567
name: replication
- containerPort: 4568
name: ist
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secrets
key: root-password
- name: MYSQL_ROOT_USER
valueFrom:
secretKeyRef:
name: mysql-secrets
key: root-user
- name: MYSQL_INITDB_SKIP_TZINFO
value: "yes"
livenessProbe:
exec:
command: ["sh", "-c", "mysql -u\"${MYSQL_ROOT_USER:-root}\" -p\"${MYSQL_ROOT_PASSWORD}\" -e 'show databases;'"]
initialDelaySeconds: 60
timeoutSeconds: 5
readinessProbe:
exec:
command: ["sh", "-c", "mysql -u\"${MYSQL_ROOT_USER:-root}\" -p\"${MYSQL_ROOT_PASSWORD}\" -e 'show databases;'"]
initialDelaySeconds: 20
timeoutSeconds: 5
volumeMounts:
- name: config
mountPath: /etc/mysql/conf.d
- name: data
mountPath: /var/lib/mysql
volumes:
- name: config
configMap:
name: mysql-config-vol
imagePullSecrets:
- name: "registrykey"
volumeClaimTemplates:
- metadata:
name: data
annotations:
volume.beta.kubernetes.io/storage-class: "ceph-web" # reference the ceph StorageClass created above
#volume.beta.kubernetes.io/storage-class: "managed-nfs-storage" # either the nfs or the ceph StorageClass can be referenced here
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 3Gi
# OK, all the YAML files are ready; create them:
cd ../
kubectl create -f mariadb-cluster/
# Check whether all Pods have started; healthy output looks like this:
[root@master3 mariadb-cluster]# kubectl get statefulset,pod,pvc,svc --namespace=galera
NAME DESIRED CURRENT AGE
statefulsets/mysql 3 3 1h
NAME READY STATUS RESTARTS AGE
po/mysql-0 1/1 Running 0 1h
po/mysql-1 1/1 Running 1 1h
po/mysql-2 1/1 Running 1 1h
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
pvc/data-mysql-0 Bound pvc-ed3c0f24-3f8c-11e7-9818-525400c2bc59 3Gi RWO 1h
pvc/data-mysql-1 Bound pvc-ed52acfa-3f8c-11e7-9818-525400c2bc59 3Gi RWO 1h
pvc/data-mysql-2 Bound pvc-ed6debec-3f8c-11e7-9818-525400c2bc59 3Gi RWO 1h
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/mysql None <none> 3306/TCP 1h
[root@master3 mariadb-cluster]#
Check on the Ceph side that the dynamic volumes were created; run on any node:
[root@master3 mariadb-cluster]# rbd list
ceph-image
kubernetes-dynamic-pvc-0b0a7037-3eea-11e7-b964-525400c2bc59
kubernetes-dynamic-pvc-0b18303f-3eea-11e7-b964-525400c2bc59
kubernetes-dynamic-pvc-0b2f4fed-3eea-11e7-b964-525400c2bc59
kubernetes-dynamic-pvc-17ee7e77-3e25-11e7-aeeb-525400ab16e6
kubernetes-dynamic-pvc-18ebd85d-3e25-11e7-aeeb-525400ab16e6
kubernetes-dynamic-pvc-245310bb-3f83-11e7-b964-525400c2bc59
kubernetes-dynamic-pvc-2477ce68-3f83-11e7-b964-525400c2bc59
kubernetes-dynamic-pvc-24c3709c-3f83-11e7-b964-525400c2bc59
kubernetes-dynamic-pvc-3bd28b65-3ec3-11e7-b964-525400c2bc59
kubernetes-dynamic-pvc-ed6027b2-3f8c-11e7-b964-525400c2bc59
kubernetes-dynamic-pvc-ed823789-3f8c-11e7-b964-525400c2bc59
kubernetes-dynamic-pvc-eda2a871-3f8c-11e7-b964-525400c2bc59
kubernetes-dynamic-pvc-fa9b79e0-3ec7-11e7-b964-525400c2bc59
kubernetes-dynamic-pvc-fed5d892-3ee8-11e7-b964-525400c2bc59
kubernetes-dynamic-pvc-fee7783d-3ee8-11e7-b964-525400c2bc59
kubernetes-dynamic-pvc-fef9e95a-3ee8-11e7-b964-525400c2bc59
# dynamic volume creation complete
[root@master3 mariadb-cluster]#
Exec into a Pod to check the RBD mount. Here the MySQL data directory is mounted on the RBD device, as expected.
[root@master3 mariadb-cluster]# kubectl exec -it --namespace=galera mysql-0 /bin/bash
root@mysql-0:/# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-253:0-654996-f8b8d50768c57fd7d294967f704861afb7b7b6a915edd520e4619a31266f36c1 10G 432M 9.6G 5% /
tmpfs 1001M 0 1001M 0% /dev
tmpfs 1001M 0 1001M 0% /sys/fs/cgroup
/dev/mapper/centos-root 37G 18G 18G 50% /etc/hosts
shm 64M 0 64M 0% /dev/shm
/dev/rbd0 2.9G 279M 2.5G 10% /var/lib/mysql
tmpfs 1001M 12K 1001M 1% /run/secrets/kubernetes.io/serviceaccount
root@mysql-0:/#
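Finally, a quick sanity check that all three members actually joined the Galera cluster (a sketch, run from inside any of the mysql Pods; a healthy cluster reports wsrep_cluster_size = 3):
# from inside the pod (kubectl exec -it --namespace=galera mysql-0 /bin/bash):
mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "SHOW STATUS LIKE 'wsrep_cluster_size';"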
That concludes the Ceph dynamic volume workflow. Everything else works the same as with NFS: just reference the StorageClass you created in the Pod's volumeClaimTemplates section.