What is a Redis Cluster?

Redis Cluster is a set of Redis instances designed to scale a database by partitioning it, which also makes it more resilient.
Every member of the cluster, whether a primary or a replica, manages a subset of the hash slots. If a master becomes unreachable, one of its replicas is promoted to master. In a minimal Redis Cluster made up of three master nodes, each with a single replica (to allow minimal failover), each master is assigned a hash slot range between 0 and 16,383. Node A would hold hash slots 0 to 5000, node B 5001 to 10000, and node C 10001 to 16383.
Communication inside the cluster is carried over an internal bus, using a gossip protocol to propagate information about the cluster or to discover new nodes.
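Key placement follows directly from the slot ranges above: a key's slot is CRC16(key) mod 16384, where the CRC16 variant is XMODEM, and an optional {hash tag} restricts hashing to the tagged substring so related keys can be forced onto the same slot. A minimal sketch in Python:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM variant), the checksum Redis Cluster uses for key slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Return the hash slot (0-16383) a key maps to, honoring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # non-empty tag: hash only the tag
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

print(key_slot("foo"))  # → 12182, the same value CLUSTER KEYSLOT foo reports
```

With the slot ranges from the example above, key `foo` (slot 12182) would land on node C.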

Deploying a Redis Cluster on Kubernetes is challenging, because each Redis instance relies on a configuration file that keeps track of the other cluster instances and their roles. For this we need a combination of Kubernetes StatefulSets and PersistentVolumes.
Download the example manifests from GitHub:
git clone https://github.com/llmgo/redis-sts.git

Deploying the NFS server itself is omitted here.

Create the working directories on the NFS server:
mkdir -p /data/pv{1..6}
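The six directories also need to be exported so the kubelets can mount them. A sketch of the NFS server side (the subnet and export options are assumptions; adjust to your environment):

```shell
# Append export rules for the six PV directories, then re-export
cat >> /etc/exports <<'EOF'
/data/pv1 192.168.253.0/24(rw,sync,no_root_squash)
/data/pv2 192.168.253.0/24(rw,sync,no_root_squash)
/data/pv3 192.168.253.0/24(rw,sync,no_root_squash)
/data/pv4 192.168.253.0/24(rw,sync,no_root_squash)
/data/pv5 192.168.253.0/24(rw,sync,no_root_squash)
/data/pv6 192.168.253.0/24(rw,sync,no_root_squash)
EOF
exportfs -r   # reload exports without restarting the NFS service
```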

Create the PersistentVolumes

apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv1
spec:
  capacity:
    storage: 5Gi
  accessModes: 
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: "redis-cluster"
  nfs:
    path: /data/pv1
    server: 192.168.253.50
---  
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv2
spec:
  capacity:
    storage: 5Gi
  accessModes: 
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: "redis-cluster"
  nfs:
    path: /data/pv2
    server: 192.168.253.50
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv3
spec:
  capacity:
    storage: 5Gi
  accessModes: 
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: "redis-cluster"
  nfs:
    path: /data/pv3
    server: 192.168.253.50
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv4
spec:
  capacity:
    storage: 5Gi
  accessModes: 
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: "redis-cluster"
  nfs:
    path: /data/pv4
    server: 192.168.253.50 
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv5
spec:
  capacity:
    storage: 5Gi
  accessModes: 
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: "redis-cluster"
  nfs:
    path: /data/pv5
    server: 192.168.253.50 
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv6
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: "redis-cluster"
  nfs:
    path: /data/pv6
    server: 192.168.253.50
[root@master redis-1]# kubectl apply -f redis-pv.yml
persistentvolume/redis-pv1 created
persistentvolume/redis-pv2 created
persistentvolume/redis-pv3 created
persistentvolume/redis-pv4 created
persistentvolume/redis-pv5 created
persistentvolume/redis-pv6 created
[root@master redis-1]# kubectl get pv
NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
redis-pv1   5Gi        RWO            Recycle          Available           redis-cluster            3s
redis-pv2   5Gi        RWO            Recycle          Available           redis-cluster            3s
redis-pv3   5Gi        RWO            Recycle          Available           redis-cluster            3s
redis-pv4   5Gi        RWO            Recycle          Available           redis-cluster            3s
redis-pv5   5Gi        RWO            Recycle          Available           redis-cluster            3s
redis-pv6   5Gi        RWO            Recycle          Available           redis-cluster            3s

Create the StatefulSet

apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cluster
data:
  update-node.sh: |
    #!/bin/sh
    REDIS_NODES="/data/nodes.conf"
    sed -i -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${REDIS_NODES}
    exec "$@"
  redis.conf: |+
    cluster-enabled yes
    cluster-require-full-coverage no
    cluster-node-timeout 15000
    cluster-config-file /data/nodes.conf
    cluster-migration-barrier 1
    appendonly yes
    protected-mode no
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
spec:
  serviceName: redis-cluster
  replicas: 6
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
      - name: redis
        image: redis:5.0.5-alpine
        ports:
        - containerPort: 6379
          name: client
        - containerPort: 16379
          name: gossip
        command: ["/conf/update-node.sh", "redis-server", "/conf/redis.conf"]
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: conf
          mountPath: /conf
          readOnly: false
        - name: data
          mountPath: /data
          readOnly: false
      volumes:
      - name: conf
        configMap:
          name: redis-cluster
          defaultMode: 0755
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi
      storageClassName: redis-cluster
[root@master redis-1]# kubectl apply -f redis-sts.yml 
configmap/redis-cluster created
statefulset.apps/redis-cluster created
[root@master redis-1]# kubectl get po
NAME                    READY   STATUS      RESTARTS   AGE
redis-cluster-0         1/1     Running     0          36s
redis-cluster-1         1/1     Running     0          33s
redis-cluster-2         1/1     Running     0          30s
redis-cluster-3         1/1     Running     0          28s
redis-cluster-4         1/1     Running     0          26s
redis-cluster-5         1/1     Running     0          24s
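The update-node.sh wrapper in the ConfigMap exists because a pod gets a new IP on every restart, while the persisted nodes.conf still records the old one. Before starting redis-server, the sed command rewrites only the IP on the line flagged `myself` to the pod's current IP. The same substitution can be sketched in Python (the sample nodes.conf line and the new IP are made up for illustration):

```python
import re

def update_myself_ip(nodes_conf: str, pod_ip: str) -> str:
    """Mimic update-node.sh: replace the first IPv4 address on lines containing 'myself'."""
    out = []
    for line in nodes_conf.splitlines():
        if "myself" in line:
            # Like sed without /g, re.sub with count=1 touches only the first match
            line = re.sub(r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}", pod_ip, line, count=1)
        out.append(line)
    return "\n".join(out)

# Hypothetical nodes.conf entry recorded before the pod restarted
old = "57195767b1e4e6a39573d38fc75e2f2dc9720aab 10.244.1.147:6379@16379 myself,master - 0 0 1 connected 0-5460"
print(update_myself_ip(old, "10.244.1.200"))
```

Lines describing the other nodes are left untouched; the cluster will learn their current addresses over the gossip bus.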

Create the Service

apiVersion: v1
kind: Service
metadata:
  name: redis-cluster
spec:
  type: ClusterIP
  clusterIP: 10.96.0.106
  ports:
  - port: 6379
    targetPort: 6379
    name: client
  - port: 16379
    targetPort: 16379
    name: gossip
  selector:
    app: redis-cluster
[root@master redis-1]# kubectl apply -f redis-svc.yml 
service/redis-cluster created
[root@master redis-1]# kubectl get svc
NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)              AGE
redis-cluster   ClusterIP   10.96.0.106   <none>        6379/TCP,16379/TCP   9s

Initialize the Redis Cluster

Look up the pod IPs:
kubectl get pods -l app=redis-cluster -o wide | awk 'NR>1 {print $6}'
kubectl exec -it redis-cluster-0 -- redis-cli --cluster create --cluster-replicas 1 10.244.1.147:6379 10.244.2.149:6379 10.244.2.150:6379 10.244.1.148:6379 10.244.2.151:6379 10.244.1.149:6379
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 10.244.2.151:6379 to 10.244.1.147:6379
Adding replica 10.244.1.149:6379 to 10.244.2.149:6379
Adding replica 10.244.1.148:6379 to 10.244.2.150:6379
M: 57195767b1e4e6a39573d38fc75e2f2dc9720aab 10.244.1.147:6379
   slots:[0-5460] (5461 slots) master
M: 9f97c198737a6160273f7ef97184f2e2ba9c2f46 10.244.2.149:6379
   slots:[5461-10922] (5462 slots) master
M: 44cf86b03995b0e596d99ae8388196e2ef627021 10.244.2.150:6379
   slots:[10923-16383] (5461 slots) master
S: d3ba8bc42e32c62ff9bace895c99914f8940ea26 10.244.1.148:6379
   replicates 44cf86b03995b0e596d99ae8388196e2ef627021
S: a3538f75037789f0c8cc6f6a8b6bc1593797ab8d 10.244.2.151:6379
   replicates 57195767b1e4e6a39573d38fc75e2f2dc9720aab
S: fc9116929df00450430fb59699016e0f009b28f2 10.244.1.149:6379
   replicates 9f97c198737a6160273f7ef97184f2e2ba9c2f46
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
........
>>> Performing Cluster Check (using node 10.244.1.147:6379)
M: 57195767b1e4e6a39573d38fc75e2f2dc9720aab 10.244.1.147:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: fc9116929df00450430fb59699016e0f009b28f2 10.244.1.149:6379
   slots: (0 slots) slave
   replicates 9f97c198737a6160273f7ef97184f2e2ba9c2f46
S: a3538f75037789f0c8cc6f6a8b6bc1593797ab8d 10.244.2.151:6379
   slots: (0 slots) slave
   replicates 57195767b1e4e6a39573d38fc75e2f2dc9720aab
M: 44cf86b03995b0e596d99ae8388196e2ef627021 10.244.2.150:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: d3ba8bc42e32c62ff9bace895c99914f8940ea26 10.244.1.148:6379
   slots: (0 slots) slave
   replicates 44cf86b03995b0e596d99ae8388196e2ef627021
M: 9f97c198737a6160273f7ef97184f2e2ba9c2f46 10.244.2.149:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
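Instead of copy-pasting the six pod IPs into the create command, the list can be assembled with a jsonpath query (a sketch; it assumes the pods carry the `app=redis-cluster` label set by the StatefulSet above):

```shell
# Collect each pod's IP as "ip:6379" and feed the list straight to redis-cli
kubectl exec -it redis-cluster-0 -- redis-cli --cluster create --cluster-replicas 1 \
  $(kubectl get pods -l app=redis-cluster -o jsonpath='{range .items[*]}{.status.podIP}:6379 {end}')
```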

Verify the cluster deployment

[root@master redis-1]# kubectl exec -it redis-cluster-0 -- redis-cli cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:85
cluster_stats_messages_pong_sent:93
cluster_stats_messages_sent:178
cluster_stats_messages_ping_received:88
cluster_stats_messages_pong_received:85
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:178
[root@master redis-1]# for x in $(seq 0 5); do echo "redis-cluster-$x"; kubectl exec redis-cluster-$x -- redis-cli role; echo; done
redis-cluster-0
master
126
10.244.2.151
6379
126

redis-cluster-1
master
126
10.244.1.149
6379
126

redis-cluster-2
master
126
10.244.1.148
6379
126

redis-cluster-3
slave
10.244.2.150
6379
connected
126

redis-cluster-4
slave
10.244.1.147
6379
connected
126

redis-cluster-5
slave
10.244.2.149
6379
connected
126
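As a final sanity check, failover can be exercised directly (a sketch against the live cluster): write a key through the cluster-aware client, delete a master pod, and confirm the data is still served after its replica takes over and the StatefulSet recreates the pod:

```shell
# -c enables cluster mode so redis-cli follows MOVED redirections
kubectl exec -it redis-cluster-0 -- redis-cli -c set hello world
kubectl delete pod redis-cluster-0        # take a master down; its replica is promoted
kubectl exec -it redis-cluster-1 -- redis-cli -c get hello   # key survives the failover
```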