k8s practice: running a Redis cluster on a StatefulSet
This post builds a custom Redis image, runs a standalone Redis instance backed by a PV/PVC, then deploys a six-node Redis cluster with a StatefulSet, and verifies data persistence, replication, and slave-to-master failover.
Build the Redis image
Dockerfile:
FROM harbor-server.linux.io/base-images/ubuntu:20.04
ENV REDIS_VERSION="6.2.7"
LABEL author="admin@163.com"
ADD redis-$REDIS_VERSION.tar.gz /usr/local/src
RUN ln -sv /usr/local/src/redis-$REDIS_VERSION /usr/local/redis && \
    cd /usr/local/redis/ && \
    make && \
    cp src/redis-cli src/redis-server /usr/sbin/ && \
    mkdir -pv /data/redis-data
ADD redis.conf /usr/local/redis/redis.conf
ADD run_redis.sh /entrypoint.sh
EXPOSE 6379
ENTRYPOINT ["/entrypoint.sh"]
redis.conf:
bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile ""
databases 16
always-show-logo yes
save 900 1
save 5 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error no
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /data/redis-data
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
requirepass 123456
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
slave-lazy-flush no
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble no
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
run_redis.sh:
#!/bin/bash
# Start redis-server; it goes to the background because redis.conf sets "daemonize yes"
/usr/sbin/redis-server /usr/local/redis/redis.conf
# Keep a foreground process as PID 1 so the container does not exit
tail -f /etc/hosts
Build and push the image:
cat build_image_command.sh
#!/bin/bash
TAG=$1
nerdctl build -t harbor-server.linux.io/n70/redis:${TAG} .
sleep 3
nerdctl push harbor-server.linux.io/n70/redis:${TAG}
sh build_image_command.sh 6.2.7
Deploy Redis (standalone)
Create the PV and PVC
PV manifest:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-data
spec:
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.122.1
    path: /data/k8s/n70/redis/redis-data
PVC manifest:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data
spec:
  accessModes: ["ReadWriteOnce"]
  volumeName: redis-data
  resources:
    requests:
      storage: 5Gi
    limits:
      storage: 5Gi
Check the PV/PVC status:
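For example, with kubectl (object names as defined in the manifests above; requires access to the cluster):

```shell
# Both should report STATUS=Bound once the PVC binds to the PV
kubectl get pv redis-data
kubectl get pvc redis-data
```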
Run Redis
Deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-standalone
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-standalone
  template:
    metadata:
      labels:
        app: redis-standalone
    spec:
      containers:
      - name: redis
        image: harbor-server.linux.io/n70/redis:6.2.7
        imagePullPolicy: Always
        ports:
        - name: redis-port
          containerPort: 6379
        volumeMounts:
        - name: redis-data
          mountPath: /data/redis-data/
      volumes:
      - name: redis-data
        persistentVolumeClaim:
          claimName: redis-data
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  type: NodePort
  selector:
    app: redis-standalone
  ports:
  - name: redis-port
    port: 6379
    targetPort: 6379
    nodePort: 30001
Check the Pod and Service
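A quick check with kubectl (a sketch; run against your cluster):

```shell
# The Pod should be Running and the Service should map 6379 to NodePort 30001
kubectl get pods -l app=redis-standalone
kubectl get svc redis-service
```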
Verification
Read/write test
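With the NodePort Service above, the test can be run from outside the cluster. The node IP 192.168.122.10 below is a placeholder; 123456 is the requirepass from redis.conf:

```shell
# Write a key, then read it back through the NodePort
redis-cli -h 192.168.122.10 -p 30001 -a 123456 set key1 value1
redis-cli -h 192.168.122.10 -p 30001 -a 123456 get key1
```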
Delete the Pod to recreate Redis and verify that the data is recovered
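One way to exercise the recovery path (node IP is the same placeholder as above):

```shell
# Deleting the Pod makes the Deployment recreate it; the data lives on the NFS-backed PV
kubectl delete pod -l app=redis-standalone
kubectl wait --for=condition=Ready pod -l app=redis-standalone --timeout=120s
# The previously written key should still be readable
redis-cli -h 192.168.122.10 -p 30001 -a 123456 get key1
```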
Deploy Redis (cluster)
Prepare the Redis configuration
Prepare redis.conf; the cluster configuration differs from the standalone one:
bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile ""
databases 16
always-show-logo yes
save 900 1
save 5 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error no
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /data/redis-data
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
requirepass 123456
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
slave-lazy-flush no
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble no
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
cluster-enabled yes
cluster-config-file /data/redis-data/nodes.conf
cluster-node-timeout 5000
Save redis.conf as a ConfigMap:
kubectl create configmap redis-conf --from-file=./redis.conf
Create the PVs
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-data-pv0
spec:
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 5Gi
  nfs:
    server: 192.168.122.1
    path: /data/k8s/n70/redis/redis0
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-data-pv1
spec:
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 5Gi
  nfs:
    server: 192.168.122.1
    path: /data/k8s/n70/redis/redis1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-data-pv2
spec:
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 5Gi
  nfs:
    server: 192.168.122.1
    path: /data/k8s/n70/redis/redis2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-data-pv3
spec:
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 5Gi
  nfs:
    server: 192.168.122.1
    path: /data/k8s/n70/redis/redis3
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-data-pv4
spec:
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 5Gi
  nfs:
    server: 192.168.122.1
    path: /data/k8s/n70/redis/redis4
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-data-pv5
spec:
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 5Gi
  nfs:
    server: 192.168.122.1
    path: /data/k8s/n70/redis/redis5
Check the PV status
Deploy the Redis cluster
Use a StatefulSet to create six Redis Pods (redis-0 through redis-5); the first three will become masters and the last three slaves. Manifest:
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  clusterIP: None
  selector:
    app: redis
    appCluster: redis-cluster
  ports:
  - name: redis
    port: 6379
    targetPort: 6379
# This headless Service gives each Pod a stable DNS record for in-cluster client access
---
apiVersion: v1
kind: Service
metadata:
  name: redis-access
spec:
  selector:
    app: redis
    appCluster: redis-cluster
  ports:
  - name: redis
    port: 6379
    targetPort: 6379
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  replicas: 6
  selector:
    matchLabels:
      app: redis
      appCluster: redis-cluster
  serviceName: redis
  template:
    metadata:
      labels:
        app: redis
        appCluster: redis-cluster
    spec:
      terminationGracePeriodSeconds: 20
      containers:
      - name: redis
        image: harbor-server.linux.io/n70/redis:6.2.7
        imagePullPolicy: Always
        ports:
        - name: redis
          containerPort: 6379
        - name: cluster
          containerPort: 16379   # cluster bus port = client port + 10000
        volumeMounts:
        - name: redis-conf
          mountPath: /usr/local/redis/
        - name: data
          mountPath: /data/redis-data
      volumes:
      - name: redis-conf
        configMap:
          name: redis-conf
          items:
          - key: redis.conf
            path: redis.conf
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi
        limits:
          storage: 5Gi
Check the Pod and PVC status:
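A sketch of the check (PVC names follow the StatefulSet convention `<template>-<pod>`):

```shell
# Expect redis-0 through redis-5 Running, and PVCs data-redis-0 ... data-redis-5 all Bound
kubectl get pods -l app=redis
kubectl get pvc
```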
Create the Redis cluster: go into any Redis Pod and run the cluster-creation command
kubectl exec -it pods/redis-0 -- bash
apt -y install dnsutils  # provides the dig command
redis-cli --cluster create --cluster-replicas 1 \
`dig +short redis-0.redis.default.svc.cluster.local`:6379 \
`dig +short redis-1.redis.default.svc.cluster.local`:6379 \
`dig +short redis-2.redis.default.svc.cluster.local`:6379 \
`dig +short redis-3.redis.default.svc.cluster.local`:6379 \
`dig +short redis-4.redis.default.svc.cluster.local`:6379 \
`dig +short redis-5.redis.default.svc.cluster.local`:6379 \
-a 123456
Verification
Check the cluster state
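For example, from outside the Pods (password from redis.conf):

```shell
# Expect cluster_state:ok and 16384 slots assigned
kubectl exec -it pods/redis-0 -- redis-cli -a 123456 cluster info
# Expect three masters and three slaves
kubectl exec -it pods/redis-0 -- redis-cli -a 123456 cluster nodes
```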
Read/write test: write data on redis-0
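In cluster mode, pass -c to redis-cli so the write is redirected to whichever node owns the key's slot:

```shell
kubectl exec -it pods/redis-0 -- redis-cli -c -a 123456 set key1 value1
kubectl exec -it pods/redis-0 -- redis-cli -c -a 123456 get key1
```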
Go into redis-4 (the slave of redis-0) and check whether the data has replicated
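Which Pod replicates which master depends on how the slots were assigned in your run; a sketch for checking, assuming redis-4 is the slave of redis-0:

```shell
# role:slave and master_host should point at redis-0's IP
kubectl exec -it pods/redis-4 -- redis-cli -a 123456 info replication
# KEYS lists keys held locally, so the replicated key should appear here
kubectl exec -it pods/redis-4 -- redis-cli -a 123456 keys '*'
```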
Failover test: kill the redis process in redis-0 and verify that the slave is promoted to master
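One way to run the failover test. Because the entrypoint keeps `tail` as PID 1, killing the daemonized redis-server does not restart the Pod, so the cluster must fail over (assumes pidof is present in the base image):

```shell
# Kill the redis-server process inside redis-0
kubectl exec -it pods/redis-0 -- sh -c 'kill $(pidof redis-server)'
# After cluster-node-timeout (5000 ms) expires, the former slave should show up as master
kubectl exec -it pods/redis-4 -- redis-cli -a 123456 cluster nodes
```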