1. Environment setup

Complete the K8S installation first. In this example we have one master: 10.41.10.61, three nodes: 10.41.10.71-73, and one registry (image repository): 10.41.10.81.
On every K8S node, edit /etc/docker/daemon.json and add:
"insecure-registries": ["10.41.10.81:5000"]
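A complete daemon.json wrapping that entry might look like the sketch below. If the file already contains other keys, merge the entry into it instead of overwriting, and note that Docker must be restarted afterwards:

```shell
# Minimal daemon.json allowing plain-HTTP access to the private registry.
# Merge the key into your existing file if one already exists.
cat <<eof >/etc/docker/daemon.json
{
  "insecure-registries": ["10.41.10.81:5000"]
}
eof
# Restart Docker so the setting takes effect
systemctl restart docker
```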

2. Pull the Percona XtraDB Cluster image on the registry host

docker pull percona/percona-xtradb-cluster
docker tag percona/percona-xtradb-cluster 10.41.10.81:5000/pxc
docker push 10.41.10.81:5000/pxc # push to the registry
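To confirm the push landed, the registry's HTTP API can be queried (this assumes the host runs the standard Docker Registry v2 API on port 5000):

```shell
# List repositories known to the registry; "pxc" should appear
curl -s http://10.41.10.81:5000/v2/_catalog
# List the tags pushed under the pxc repository
curl -s http://10.41.10.81:5000/v2/pxc/tags/list
```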

3. Deploy three Services on the master

cat <<eof >pxc-master-service.yaml # Service for the master node
apiVersion: v1
kind: Service
metadata:
  name: pxc-master-svc
spec:
  selector:
    node: pxc-master
  ports:
  - name: mysql
    port: 3306
  - name: snapshot-trans
    port: 4444
  - name: repli-trans
    port: 4567
  - name: increment-state-trans
    port: 4568
eof
cat <<eof >pxc-slave-service.yaml # Service for the slave nodes
apiVersion: v1
kind: Service
metadata:
  name: pxc-slave-svc
spec:
  selector:
    node: pxc-slave
  ports:
  - name: mysql
    port: 3306
  - name: snapshot-trans
    port: 4444
  - name: repli-trans
    port: 4567
  - name: increment-state-trans
    port: 4568
eof
cat <<eof >pxc-service.yaml # Service for the PXC cluster as a whole
apiVersion: v1
kind: Service
metadata:
  name: pxc-svc
spec:
  selector:
    unit: pxc-cluster
  ports:
  - name: mysql
    port: 3306
    targetPort: 3306
  type: NodePort
  externalIPs:
  - 10.41.10.60
eof
# Apply all three Services
kubectl apply -f pxc-master-service.yaml
kubectl apply -f pxc-slave-service.yaml
kubectl apply -f pxc-service.yaml 
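After applying, the cluster-internal IP of pxc-master-svc can be read back directly; this is the address the slave Deployment below will need. A jsonpath query keeps the output to just the IP:

```shell
# Print only the ClusterIP of the master Service
kubectl get svc pxc-master-svc -o jsonpath='{.spec.clusterIP}'
```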

4. Deploy the first (master) node's Deployment on K8S

cat <<eof >pxc-master-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pxc-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pxc-master
  template:
    metadata:
      labels:
        app: pxc-master
        node: pxc-master
        unit: pxc-cluster
    spec:
      containers:
      - name: pxc-master
        image: 10.41.10.81:5000/pxc
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "root"
        - name: CLUSTER_NAME
          value: "PXC"
        - name: XTRABACKUP_PASSWORD
          value: "root"
        #volumeMounts: # data persistence
        #- mountPath: "/var/lib/mysql"
        #  name: data
        #- mountPath: "/backup"
        #  name: backup
        ports:
        - containerPort: 3306
        - containerPort: 4567
        - containerPort: 4568
        - containerPort: 4444
      #volumes: # data persistence
      #- name: data
      #  persistentVolumeClaim:
      #    claimName: pxc-master-pvc
      #- name: backup
      #  persistentVolumeClaim:
      #    claimName: pxc-backup-pvc
eof
kubectl apply -f pxc-master-deployment.yaml # apply the master Deployment
kubectl get svc # check the cluster-internal address of the master's Service; other MySQL nodes use this address to join the cluster
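Rather than guessing at the wait time, the rollout can be watched explicitly with standard kubectl subcommands:

```shell
# Block until the master Deployment's pod reports ready
kubectl rollout status deployment/pxc-master
# Tail the container log to confirm mysqld finished initializing
kubectl logs -l app=pxc-master --tail=20
```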

With this address in hand, add another MySQL node.

5. Add a MySQL node to the cluster. Before this step, wait until the master is running normally, roughly 2 minutes.

cat <<eof >pxc-slave-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pxc-slave
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pxc-slave
  template:
    metadata:
      labels:
        app: pxc-slave
        node: pxc-slave
        unit: pxc-cluster
    spec:
      containers:
      - name: pxc-slave
        image: 10.41.10.81:5000/pxc
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "root"
        - name: CLUSTER_NAME
          value: "PXC"
        - name: XTRABACKUP_PASSWORD
          value: "root"
        - name: CLUSTER_JOIN
          value: 10.97.110.147 # the cluster-internal address of the master Service obtained above
        #volumeMounts: # data persistence
        #- mountPath: "/var/lib/mysql"
        #  name: data
        #- mountPath: "/backup"
        #  name: backup
        ports:
        - containerPort: 3306
        - containerPort: 4567
        - containerPort: 4568
        - containerPort: 4444
      #volumes: # data persistence
      #- name: data
      #  persistentVolumeClaim:
      #    claimName: pxc-slave-pvc
      #- name: backup
      #  persistentVolumeClaim:
      #    claimName: pxc-backup-pvc
eof
kubectl apply -f pxc-slave-deployment.yaml
kubectl get pods -o wide
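Hard-coding the ClusterIP in CLUSTER_JOIN is brittle, since the IP changes if the Service is recreated. Because pxc-master-svc lives in the same namespace, cluster DNS resolves its name to the same address, so a sketch of the env entry could instead be:

```yaml
        # Hypothetical alternative for the CLUSTER_JOIN entry in pxc-slave-deployment.yaml
        - name: CLUSTER_JOIN
          value: pxc-master-svc # cluster DNS resolves the Service name to its ClusterIP
```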

After roughly 3 minutes, the MySQL cluster is up and running.

6. Verification

Log in to each of the two nodes and check the wsrep_cluster status values:

kubectl exec -it pxc-master-56d6757b7f-clwf4 -- /bin/bash
mysql -uroot -proot
show status like 'wsrep_cluster%';
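Since pxc-svc exposes port 3306 on the external IP 10.41.10.60, the same check can also be run from outside the cluster, assuming a mysql client is installed on the workstation:

```shell
# Query cluster status through the external Service IP
mysql -h 10.41.10.60 -P 3306 -uroot -proot -e "show status like 'wsrep_cluster_size';"
```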

Both nodes should report wsrep_cluster_size = 2 and wsrep_cluster_status = Primary, meaning the cluster is healthy. Done!

For data persistence, see my other article.
