Machine requirements

1. A network connection is required, because the whole installation is Docker-based and cannot proceed without internet access.
2. The machines should run CentOS 7 or later, with at least 32 GB of memory.
Installation configuration: (screenshot)
Redis configuration: (screenshot)

Create directories

Create the following directories on each server in the cluster (85/86/87):

sudo rm -rf /home/data/redis/ && sudo mkdir -p /home/data/redis/{7001,7002,7003,7004,7005,7006}/{data,conf} && chmod 777 -R /home/data/

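To confirm the layout, you can list the directories that were just created on each server (a quick sanity check, assuming the command above completed without errors):

ls -ld /home/data/redis/700*/{data,conf}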

Create the network

docker network create --driver overlay mynetwork

The network created here uses the overlay driver. The most commonly used drivers are bridge and overlay; since this deployment runs on a Swarm cluster and all services must share one network, overlay is required.
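Before moving on, it may be worth verifying that the overlay network exists and has the expected driver and scope; a minimal check on the manager:

docker network ls --filter name=mynetwork
docker network inspect mynetwork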

Write the compose.yml template file

The configuration files can be obtained from: https://gitee.com/starsky20/docker-compose-application.git

Log in to the manager server (85), go to the /home/data/redis directory and create a redis-stack.yml file.
Its content is as follows:

version: "3.8"
services:
  redis7001:
    image: redis:alpine
    container_name: redis7001
    #set the hostname
    hostname: redis7001
    restart: always
    #privileged: true
    #mount directories, equivalent to docker run -v <host_dir>:<container_dir>
    volumes:
      - /home/data/redis/7001/data:/data
      - /home/data/redis/7001/conf:/conf
    #command executed at container start, equivalent to docker run [image:tag] [command]; connect with: redis-cli -h <host-ip> -p <port>
    command: redis-server --appendonly yes --cluster-enabled yes --cluster-config-file /conf/nodes.conf --cluster-announce-ip 192.168.3.85 --cluster-announce-port 7001 --cluster-announce-bus-port 17001
    ports:
      - "7001:6379"
      - "17001:16379"
    #environment variables, equivalent to docker run -e <key>=<value>
    environment:
      - TZ=Asia/Shanghai
    networks:
      - mynetwork
    deploy:
      placement:
        constraints:
          - node.hostname == manager
          - node.role == manager
  redis7002:
    image: redis:alpine
    container_name: redis7002
    #set the hostname
    hostname: redis7002
    restart: always
    #privileged: true
    #mount directories, equivalent to docker run -v <host_dir>:<container_dir>
    volumes:
      - /home/data/redis/7002/data:/data
      - /home/data/redis/7002/conf:/conf
    #command executed at container start, equivalent to docker run [image:tag] [command]; connect with: redis-cli -h <host-ip> -p <port>
    command: redis-server --appendonly yes --cluster-enabled yes --cluster-config-file /conf/nodes.conf --cluster-announce-ip 192.168.3.85 --cluster-announce-port 7002 --cluster-announce-bus-port 17002
    ports:
      - "7002:6379"
      - "17002:16379"
    #environment variables, equivalent to docker run -e <key>=<value>
    environment:
      - TZ=Asia/Shanghai
    networks:
      - mynetwork
    deploy:
      placement:
        constraints:
          - node.hostname == manager
          - node.role == manager

  redis7003:
    image: redis:alpine
    container_name: redis7003
    #set the hostname
    hostname: redis7003
    restart: always
    volumes:
      - /home/data/redis/7003/data:/data
      - /home/data/redis/7003/conf:/conf
    command: redis-server --appendonly yes --cluster-enabled yes --cluster-config-file /conf/nodes.conf --cluster-announce-ip 192.168.3.86 --cluster-announce-port 7003 --cluster-announce-bus-port 17003
    ports:
      - "7003:6379"
      - "17003:16379"
    environment:
      - TZ=Asia/Shanghai
    networks:
      - mynetwork
    deploy:
      placement:
        constraints:
          - node.hostname == worker1
  redis7004:
    image: redis:alpine
    container_name: redis7004
    #set the hostname
    hostname: redis7004
    restart: always
    volumes:
      - /home/data/redis/7004/data:/data
      - /home/data/redis/7004/conf:/conf
    command: redis-server --appendonly yes --cluster-enabled yes --cluster-config-file /conf/nodes.conf --cluster-announce-ip 192.168.3.86 --cluster-announce-port 7004 --cluster-announce-bus-port 17004
    ports:
      - "7004:6379"
      - "17004:16379"
    environment:
      - TZ=Asia/Shanghai
    networks:
      - mynetwork
    deploy:
      placement:
        constraints:
          - node.hostname == worker1

  redis7005:
    image: redis:alpine
    container_name: redis7005
    #set the hostname
    hostname: redis7005
    restart: always
    volumes:
      - /home/data/redis/7005/data:/data
      - /home/data/redis/7005/conf:/conf
    command: redis-server --appendonly yes --cluster-enabled yes --cluster-config-file /conf/nodes.conf --cluster-announce-ip 192.168.3.87 --cluster-announce-port 7005 --cluster-announce-bus-port 17005
    ports:
      - "7005:6379"
      - "17005:16379"
    environment:
      - TZ=Asia/Shanghai
    networks:
      - mynetwork
    deploy:
      placement:
        constraints:
          - node.hostname == worker2

  redis7006:
    image: redis:alpine
    container_name: redis7006
    #set the hostname
    hostname: redis7006
    restart: always
    volumes:
      - /home/data/redis/7006/data:/data
      - /home/data/redis/7006/conf:/conf
    command: redis-server --appendonly yes --cluster-enabled yes --cluster-config-file /conf/nodes.conf --cluster-announce-ip 192.168.3.87 --cluster-announce-port 7006 --cluster-announce-bus-port 17006
    ports:
      - "7006:6379"
      - "17006:16379"
    environment:
      - TZ=Asia/Shanghai
    networks:
      - mynetwork
    deploy:
      placement:
        constraints:
          - node.hostname == worker2

#declare the network
networks:
  #name of the network used by the services
  mynetwork:
    #network driver, bridge or overlay; the default is bridge
    driver: overlay
    #false - the network is created automatically and named <directory>_<network> (default); true - use an externally created network, which must be created manually
    external: true

#volumes: declare the named volumes available to the services
volumes:
  mysqldata:
    #false - the volume is created automatically and named <directory>_<volume> (default); true - use an externally created volume, which must be created manually
    external: false

Configuration notes:
Six nodes are used here, 3 masters and 3 replicas, with two instances placed on each of manager/worker1/worker2.
Note on mounting: if you do not want to bind-mount host directories, you can mount named volumes instead; named volumes are created automatically, so they do not have to be prepared in advance.
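Because the deploy.placement.constraints above match on node.hostname and node.role, it also helps to confirm before deploying that the Swarm really contains nodes named manager, worker1 and worker2 (the hostnames assumed throughout this article):

sudo docker node ls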

Start the services

sudo docker stack deploy -c redis-stack.yml redis

[root@manager redis]# ll
总用量 8
drwxrwxrwx. 4 root root   30 1月  8 13:39 7001
drwxrwxrwx. 4 root root   30 1月  8 13:39 7002
drwxrwxrwx. 4 root root   30 1月  8 13:39 7003
drwxrwxrwx. 4 root root   30 1月  8 13:39 7004
drwxrwxrwx. 4 root root   30 1月  8 13:39 7005
drwxrwxrwx. 4 root root   30 1月  8 13:39 7006
-rw-r--r--. 1 root root 5338 1月  8 13:40 redis-stack.yml
[root@manager redis]# 
1. Start the services:
[root@manager redis]# sudo docker stack deploy -c redis-stack.yml redis
Ignoring unsupported options: restart

Ignoring deprecated options:

container_name: Setting the container name is not supported.

Creating service redis_redis7001
Creating service redis_redis7002
Creating service redis_redis7003
Creating service redis_redis7004
Creating service redis_redis7005
Creating service redis_redis7006
[root@manager redis]#
2. List the deployed stacks:
[root@manager redis]# sudo docker stack ls
NAME      SERVICES   ORCHESTRATOR
redis     6          Swarm
[root@manager redis]# 
3. View the tasks in the stack:
[root@manager redis]# sudo docker stack ps redis
ID             NAME                IMAGE          NODE      DESIRED STATE   CURRENT STATE                ERROR     PORTS
k95uuc1qt175   redis_redis7001.1   redis:alpine   manager   Running         Running about a minute ago             
m6ysk4mgjqj6   redis_redis7002.1   redis:alpine   manager   Running         Running about a minute ago             
v7z0qsfx8jj5   redis_redis7003.1   redis:alpine   worker1   Running         Running about a minute ago             
z8pvn7807p7w   redis_redis7004.1   redis:alpine   worker1   Running         Running about a minute ago             
1gnyq8149sej   redis_redis7005.1   redis:alpine   worker2   Running         Running 56 seconds ago                 
bo5zy55mm193   redis_redis7006.1   redis:alpine   worker2   Running         Running 37 seconds ago                 
[root@manager redis]# 

As shown above, Redis is deployed across three nodes: manager, worker1 and worker2.
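Optionally, you can also check each service's replica count, and read a task's logs if it fails to start (service names follow the <stack>_<service> pattern shown above):

sudo docker service ls
sudo docker service logs redis_redis7001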

Test and verify

Use a Redis client to connect to the Redis instances on each server; all of them have started successfully.
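If no graphical client is at hand, the same check can be done from the command line; a minimal sketch, assuming redis-cli is installed on the host (otherwise run it inside one of the containers):

redis-cli -h 192.168.3.85 -p 7001 ping
redis-cli -h 192.168.3.86 -p 7003 ping
redis-cli -h 192.168.3.87 -p 7005 ping

Each node should reply with PONG.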

Create the cluster

Log in to the manager node and run the following command to create the cluster:

redis-cli -h 192.168.3.85 -p 7001 --cluster create 192.168.3.85:7001 192.168.3.85:7002 192.168.3.86:7003 192.168.3.86:7004 192.168.3.87:7005 192.168.3.87:7006 --cluster-replicas 1 --cluster-yes

[root@manager redis]# sudo docker ps 
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS      NAMES
0ed4a740f2c1   redis:alpine   "docker-entrypoint.s…"   3 minutes ago   Up 3 minutes   6379/tcp   redis_redis7002.1.m6ysk4mgjqj6v35hmoozyc3zw
b53bcbfa348c   redis:alpine   "docker-entrypoint.s…"   3 minutes ago   Up 3 minutes   6379/tcp   redis_redis7001.1.k95uuc1qt175j9qa32tithu80
[root@manager redis]# 
[root@manager redis]# sudo docker exec -it b53bcbfa348c /bin/sh
/data # redis-cli -h 192.168.3.85 -p 7001 --cluster create 192.168.3.85:7001 192.168.3.85:7002 192.168.3.86:7003 192.168.3.86:7004 192.168.3.87:7005 192.168.3.87:7006 --cluster-replicas 1 --cluster-yes
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.3.86:7004 to 192.168.3.85:7001
Adding replica 192.168.3.87:7006 to 192.168.3.86:7003
Adding replica 192.168.3.85:7002 to 192.168.3.87:7005
M: a16a7725d6deda33ad24bec991ba691ec96b4232 192.168.3.85:7001
   slots:[0-5460] (5461 slots) master
S: fad79b0856386ed1f423a77476ad8d187eda0b5e 192.168.3.85:7002
   replicates 2ddc82f2c3d668e37ef7ed5098608d294fdb32a7
M: cfef07421934544692096ffa59e0a066b022a869 192.168.3.86:7003
   slots:[5461-10922] (5462 slots) master
S: 9f64666e3b567dbc10baf498b484e9dc62df8b89 192.168.3.86:7004
   replicates a16a7725d6deda33ad24bec991ba691ec96b4232
M: 2ddc82f2c3d668e37ef7ed5098608d294fdb32a7 192.168.3.87:7005
   slots:[10923-16383] (5461 slots) master
S: 7e2121a860cbb3b4deaefb33d2021e415d1f61bd 192.168.3.87:7006
   replicates cfef07421934544692096ffa59e0a066b022a869
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 192.168.3.85:7001)
M: a16a7725d6deda33ad24bec991ba691ec96b4232 192.168.3.85:7001
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: fad79b0856386ed1f423a77476ad8d187eda0b5e 192.168.3.85:7002
   slots: (0 slots) slave
   replicates 2ddc82f2c3d668e37ef7ed5098608d294fdb32a7
S: 7e2121a860cbb3b4deaefb33d2021e415d1f61bd 192.168.3.87:7006
   slots: (0 slots) slave
   replicates cfef07421934544692096ffa59e0a066b022a869
M: cfef07421934544692096ffa59e0a066b022a869 192.168.3.86:7003
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 9f64666e3b567dbc10baf498b484e9dc62df8b89 192.168.3.86:7004
   slots: (0 slots) slave
   replicates a16a7725d6deda33ad24bec991ba691ec96b4232
M: 2ddc82f2c3d668e37ef7ed5098608d294fdb32a7 192.168.3.87:7005
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
/data # redis-cli 
127.0.0.1:6379> cluster nodes
fad79b0856386ed1f423a77476ad8d187eda0b5e 192.168.3.85:7002@17002 slave 2ddc82f2c3d668e37ef7ed5098608d294fdb32a7 0 1641620749000 5 connected
7e2121a860cbb3b4deaefb33d2021e415d1f61bd 192.168.3.87:7006@17006 slave cfef07421934544692096ffa59e0a066b022a869 0 1641620750000 3 connected
cfef07421934544692096ffa59e0a066b022a869 192.168.3.86:7003@17003 master - 0 1641620751325 3 connected 5461-10922
9f64666e3b567dbc10baf498b484e9dc62df8b89 192.168.3.86:7004@17004 slave a16a7725d6deda33ad24bec991ba691ec96b4232 0 1641620750315 1 connected
2ddc82f2c3d668e37ef7ed5098608d294fdb32a7 192.168.3.87:7005@17005 master - 0 1641620749306 5 connected 10923-16383
a16a7725d6deda33ad24bec991ba691ec96b4232 192.168.3.85:7001@17001 myself,master - 0 1641620749000 1 connected 0-5460
127.0.0.1:6379> 
After the cluster is created, Redis has 3 masters and 3 replicas, with two instances running on each of the manager, worker1 and worker2 nodes.
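The cluster state can also be checked from outside the container against any node; a quick check, again assuming redis-cli is available on the host:

redis-cli -c -h 192.168.3.85 -p 7001 cluster info
redis-cli -c -h 192.168.3.85 -p 7001 cluster nodes

cluster_state should be ok and cluster_slots_assigned should be 16384.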

Cluster test and verification

Log in to Redis on the manager node (192.168.3.85:7001@17001 myself,master):
redis-cli -c
set name 'lisi'
get name
Log in to Redis on the worker1 node (192.168.3.86:7003@17003 master):
Log in to Redis on the worker1 node (192.168.3.86:7004@17004 slave):
Log in to Redis on the worker2 node (192.168.3.87:7006@17006 slave):
This confirms the Redis cluster works: you can connect to any node to read and write, and if one Redis instance goes down the cluster remains highly available.
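To actually exercise the failover, one option is to scale one master's service down to zero and watch its replica take over; a rough sketch of such a test (run on the manager; 7004 is the replica of 7001 in the output above, and the promotion only happens after the cluster node timeout elapses):

sudo docker service scale redis_redis7001=0
redis-cli -c -h 192.168.3.86 -p 7004 cluster nodes
sudo docker service scale redis_redis7001=1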

Troubleshooting

no suitable node (scheduling constraints not satisfied on 3 nodes)

Cause: the hostname specified in the compose file was wrong, so no node matched the constraint. node.hostname must be set to the actual hostname of the corresponding machine.
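To see which hostnames Swarm actually knows about, and compare them with the constraint values in the yml, a quick diagnostic on the manager is:

sudo docker node ls --format '{{.Hostname}} {{.Status}} {{.ManagerStatus}}'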

Error response from daemon: rpc error: code = Unknown desc = The swarm does not have a leader.

Running docker node ls reports the following error:
Error response from daemon: rpc error: code = Unknown desc = The swarm does not have a leader. It's possible that too few managers are online. Make sure more than half of the managers are online.

This happens when more than half of the manager nodes have gone down; the Swarm has to be re-initialized.
Fix:
docker swarm init --force-new-cluster
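After the manager has been recovered with --force-new-cluster, worker nodes that dropped out may need to rejoin; a sketch of the usual follow-up (assuming the manager listens on the default Swarm port 2377, and the real token is taken from your own output):

sudo docker swarm join-token worker
sudo docker swarm join --token <token-from-previous-command> 192.168.3.85:2377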

OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: "bash": executable file not found in $PATH: unknown

Error when trying to enter the Redis container:
[root@manager redis]# sudo docker exec -it be046f2c3dd7 bash
OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: "bash": executable file not found in $PATH: unknown
[root@manager redis]#
This is because the alpine-based Redis image does not ship with bash; use /bin/sh instead.

[ERR] Node 192.168.3.87:7003 is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0

[root@manager redis]# sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
07abfe1e7f6a redis:alpine "docker-entrypoint.s…" 44 seconds ago Up 42 seconds 6379/tcp redis_redis001.1.015t1ozijvl6mib5iqmu17iz2
[root@manager redis]#
[root@manager redis]# sudo docker exec -it 07abfe1e7f6a /bin/sh
/data #
/data # redis-cli -h 192.168.3.85 -p 7001 --cluster create 192.168.3.85:7001 192.168.3.86:7002 192.168.3.87:7003 --cluster-replicas 1 --cluster-yes
[ERR] Node 192.168.3.87:7003 is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0.
/data #
This happens because a previous attempt to create the cluster did not succeed, leaving behind each node's cluster configuration file and database backup files.
1. Delete every Redis node's backup file, database file and cluster configuration file. For example, with nodes 7001~7003, the appendonly.aof, dump.rdb and nodes.conf files of every node must be deleted (a cleanup sketch follows this list).
2. Log in to every Redis node with redis-cli -c -h <ip> -p <port> and run:
flushdb
cluster reset
3. Restart all Redis services and run the cluster-create command again; it should now succeed.
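For the layout used in this article (data files under /home/data/redis/<port>/data and nodes.conf under /home/data/redis/<port>/conf), the cleanup in step 1 can be scripted on each host; a sketch, to be adapted to your own paths and ports:

for port in 7001 7002 7003 7004 7005 7006; do
  sudo rm -f /home/data/redis/${port}/data/appendonly.aof /home/data/redis/${port}/data/dump.rdb /home/data/redis/${port}/conf/nodes.conf
done
sudo docker stack rm redis && sudo docker stack deploy -c redis-stack.yml redis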

(error) MOVED 5798 192.168.3.86:7003

127.0.0.1:6379>
127.0.0.1:6379> set name 'lisi'
(error) MOVED 5798 192.168.3.86:7003
127.0.0.1:6379> get name
(error) MOVED 5798 192.168.3.86:7003
127.0.0.1:6379>
Fix: connect with the -c flag to enable cluster mode, so that MOVED redirects are followed automatically, for example:
redis-cli -c -h 192.168.3.85 -p 7001
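With -c the client follows the MOVED redirect automatically, so the same operation that failed above now works; a quick check, assuming no password is configured:

redis-cli -c -h 192.168.3.85 -p 7001 set name lisi
redis-cli -c -h 192.168.3.85 -p 7001 get name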

Using the Redis cluster from Java

Source code: https://gitee.com/starsky20/springboot-redis-cluster.git

Create the project


Import the Redis dependencies in pom.xml

Create a Spring Boot project with IDEA.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.5.1</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.example</groupId>
    <artifactId>springdocker</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>springdocker</name>
    <description>Demo project for Spring Boot</description>

    <properties>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <!-- use the Jedis client; the Lettuce client must be excluded -->
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-redis</artifactId>
            <exclusions>
                <exclusion>
                    <groupId>io.lettuce</groupId>
                    <artifactId>lettuce-core</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>redis.clients</groupId>
            <artifactId>jedis</artifactId>
            <version>3.8.0</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>


    <build>
        <finalName>${project.artifactId}-${project.version}</finalName>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>

            <plugin>
                <groupId>com.spotify</groupId>
                <artifactId>docker-maven-plugin</artifactId>
                <version>1.1.1</version>
                <executions>
                    <execution>
                        <id>build-image</id>
                        <phase>package</phase>
                        <goals>
                            <goal>build</goal>
                        </goals>
                    </execution>
                </executions>
                <configuration>
                    <!--Docker host used to build the image-->
                    <dockerHost>http://192.168.0.85:2375</dockerHost>
                    <!--image name, the project name is used here -->
                    <imageName>192.168.0.85:8085/${project.artifactId}:${project.version}</imageName>
                    <!--address of the Nexus3 hosted repository-->
                    <registryUrl>192.168.0.85:8085</registryUrl>
                    <!--tag, the project version is used here-->
                    <imageTags>
                        <!-- image tags; more than one tag can be configured -->
                        <imageTag>${project.version}</imageTag>
                    </imageTags>
                    <!--whether to force-overwrite an existing image-->
                    <forceTags>true</forceTags>
                    <!--option 1: specify the directory containing the Dockerfile; the image is built from it and pushed to the Nexus private registry-->
                    <dockerDirectory>src/main/docker</dockerDirectory>
                    <!-- option 2: build the image from the inline configuration below -->
                    <!--<baseImage>java</baseImage>
                    <entryPoint>["java", "-jar", "/${project.build.finalName}.jar"]</entryPoint>-->
                    <resources>
                        <resource>
                            <targetPath>/</targetPath>
                            <directory>${project.build.directory}</directory>
                            <include>${project.build.finalName}.jar</include>
                        </resource>
                    </resources>
                    <!-- run mvn clean package docker:build to build the project and the docker image -->
                    <!-- serverId must match the server.id configured in Maven's settings.xml and is used when pushing the image;
                    push with: mvn clean compile package docker:build -DpushImage
                    -->
                    <serverId>docker-proxy</serverId>
                </configuration>
            </plugin>

        </plugins>
    </build>

    <repositories>
        <repository>
            <id>spring-snapshots</id>
            <name>Spring Snapshots</name>
            <url>https://repo.spring.io/snapshot</url>
            <snapshots>
                <enabled>true</enabled>
            </snapshots>
        </repository>
        <repository>
            <id>spring-milestones</id>
            <name>Spring Milestones</name>
            <url>https://repo.spring.io/milestone</url>
        </repository>
    </repositories>
    <pluginRepositories>
        <pluginRepository>
            <id>spring-snapshots</id>
            <name>Spring Snapshots</name>
            <url>https://repo.spring.io/snapshot</url>
            <snapshots>
                <enabled>true</enabled>
            </snapshots>
        </pluginRepository>
        <pluginRepository>
            <id>spring-milestones</id>
            <name>Spring Milestones</name>
            <url>https://repo.spring.io/milestone</url>
        </pluginRepository>
    </pluginRepositories>

</project>

Add the Redis cluster configuration to application.yml

server:
  port: 9090
#yaml configuration
spring:
  application:
    name: /demo
  redis:
    database: 0
    cluster:
      #cluster nodes
      nodes: 192.168.3.85:7001,192.168.3.85:7002,192.168.3.86:7003,192.168.3.86:7004,192.168.3.87:7005,192.168.3.87:7006
    #password
    #password: xxxx
    #connection timeout (milliseconds)
    timeout: 5000
    jedis:
      pool:
        #maximum number of connections in the pool
        max-active: 10
        #maximum number of idle connections in the pool
        max-idle: 8
        #maximum wait time when the pool is exhausted (-1 means wait indefinitely)
        max-wait: -1
        #minimum number of idle connections in the pool
        min-idle: 0


Write the Redis cluster configuration class

package com.example.springdemo.config;

import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cache.CacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisClusterConfiguration;
import org.springframework.data.redis.connection.RedisNode;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.RedisSerializationContext;
import org.springframework.data.redis.serializer.StringRedisSerializer;
import redis.clients.jedis.JedisPoolConfig;

import java.io.Serializable;
import java.util.HashSet;
import java.util.Set;
@Configuration
public class RedisClusterConfig {
    @Value("${spring.redis.cluster.nodes}")
    private String host;
    @Value("${spring.redis.password}")
    private String password;
    @Value("${spring.redis.timeout}")
    private int connectionTimeout;
    @Value("${spring.redis.jedis.pool.max-active}")
    private int maxTotal;
    @Value("${spring.redis.jedis.pool.min-idle}")
    private int minIdle;
    @Value("${spring.redis.jedis.pool.max-idle}")
    private int maxIdle;
    @Value("${spring.redis.jedis.pool.max-wait}")
    private int maxWaitMillis;

    @Bean
    public RedisClusterConfiguration redisClusterConfiguration() {
        RedisClusterConfiguration configuration = new RedisClusterConfiguration();
        String[] hosts = host.split(",");
        Set<RedisNode> nodeList = new HashSet<RedisNode>();
        for (String hostAndPort : hosts) {
            String[] hostOrPort = hostAndPort.split(":");
            nodeList.add(new RedisNode(hostOrPort[0], Integer.parseInt(hostOrPort[1])));
        }
        configuration.setClusterNodes(nodeList);
//		configuration.setMaxRedirects();

        return configuration;
    }

    @Bean
    public JedisPoolConfig jedisPoolConfig() {
        JedisPoolConfig poolConfig = new JedisPoolConfig();
        poolConfig.setMaxIdle(this.maxIdle);
        poolConfig.setMinIdle(this.minIdle);
        poolConfig.setTestOnCreate(true);
        poolConfig.setTestOnBorrow(true);
        poolConfig.setTestOnReturn(true);
        poolConfig.setTestWhileIdle(true);
        return poolConfig;
    }

    @Bean("myJedisConnectionFactory")
    public JedisConnectionFactory jedisConnectionFactory(RedisClusterConfiguration redisClusterConfiguration,
                                                         JedisPoolConfig jedisPoolConfig) {
        JedisConnectionFactory jedisConnectionFactory = new JedisConnectionFactory(
                redisClusterConfiguration, jedisPoolConfig);
        jedisConnectionFactory.setPassword(password);
        return jedisConnectionFactory;
    }

    @Bean
    RedisTemplate<String, Serializable> redisTemplate(@Qualifier("myJedisConnectionFactory") JedisConnectionFactory jedisConnectionFactory) {
        RedisTemplate<String, Serializable> redisTemplate = new RedisTemplate<>();
        redisTemplate.setConnectionFactory(jedisConnectionFactory);
        Jackson2JsonRedisSerializer jackson2JsonRedisSerializer = new Jackson2JsonRedisSerializer(Object.class);
        // serialize values with Jackson2JsonRedisSerializer
        redisTemplate.setValueSerializer(jackson2JsonRedisSerializer);
        // serialize keys with StringRedisSerializer
        redisTemplate.setKeySerializer(new StringRedisSerializer());
        redisTemplate.setHashKeySerializer(new StringRedisSerializer());
        redisTemplate.afterPropertiesSet();
        return redisTemplate;
    }


    @Bean
    public CacheManager cacheManager(@Qualifier("myJedisConnectionFactory") JedisConnectionFactory jedisConnectionFactory) {
        RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig();
        RedisCacheConfiguration redisCacheConfiguration = config
                .serializeKeysWith(RedisSerializationContext.SerializationPair.fromSerializer(new StringRedisSerializer()))
                .serializeValuesWith(RedisSerializationContext.SerializationPair.fromSerializer(new GenericJackson2JsonRedisSerializer()));
        return RedisCacheManager.builder(jedisConnectionFactory)
                .cacheDefaults(redisCacheConfiguration).build();
    }

}

Write the controller

package com.example.springdemo.web;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class RedisController {

    @Autowired
    private RedisTemplate redisTemplate;

    @RequestMapping("/set")
    public String set(String name) {
        redisTemplate.opsForValue().set("name", name);
        return name;
    }

    @RequestMapping("/get")
    public Object get(String key) {
        Object obj = redisTemplate.opsForValue().get(key);
        return obj;
    }
}
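Once the application is running (on port 9090, as configured in application.yml), the two endpoints can also be exercised from the command line; a simple usage example:

curl "http://localhost:9090/set?name=lisi"
curl "http://localhost:9090/get?key=name"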

Test

package com.example.springdemo.web;
import com.example.springdemo.SpirngDemoApplication;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest(classes = SpirngDemoApplication.class)
public class RedisTest {
    @Autowired
    private RedisTemplate redisTemplate;

    @Test
    public void set() {
        String key = "hello";
        String name = "helloworld";
        redisTemplate.opsForValue().set(key, name);

        Object obj = redisTemplate.opsForValue().get(key);
        System.out.println("obj: " + obj);
    }
}

Start the application

"D:\Program Files\Java\jdk1.8.0_92\bin\java.exe" -agentlib:jdwp=transport=dt_socket,address=127.0.0.1:53403,suspend=y,server=n -XX:TieredStopAtLevel=1 -noverify -Dspring.output.ansi.enabled=always -Dcom.sun.management.jmxremote -Dspring.jmx.enabled=true -Dspring.liveBeansView.mbeanDomain -Dspring.application.admin.enabled=true -javaagent:C:\Users\Administrator\.IntelliJIdea2019.3\system\captureAgent\debugger-agent.jar -Dfile.encoding=UTF-8 -classpath "D:\Program Files\Java\jdk1.8.0_92\jre\lib\charsets.jar;D:\Program Files\Java\jdk1.8.0_92\jre\lib\deploy.jar;D:\Program Files\Java\jdk1.8.0_92\jre\lib\ext\access-bridge-64.jar;D:\Program Files\Java\jdk1.8.0_92\jre\lib\ext\cldrdata.jar;D:\Program Files\Java\jdk1.8.0_92\jre\lib\ext\dnsns.jar;D:\Program Files\Java\jdk1.8.0_92\jre\lib\ext\jaccess.jar;D:\Program Files\Java\jdk1.8.0_92\jre\lib\ext\jfxrt.jar;D:\Program Files\Java\jdk1.8.0_92\jre\lib\ext\localedata.jar;D:\Program Files\Java\jdk1.8.0_92\jre\lib\ext\nashorn.jar;D:\Program Files\Java\jdk1.8.0_92\jre\lib\ext\sunec.jar;D:\Program Files\Java\jdk1.8.0_92\jre\lib\ext\sunjce_provider.jar;D:\Program Files\Java\jdk1.8.0_92\jre\lib\ext\sunmscapi.jar;D:\Program Files\Java\jdk1.8.0_92\jre\lib\ext\sunpkcs11.jar;D:\Program Files\Java\jdk1.8.0_92\jre\lib\ext\zipfs.jar;D:\Program Files\Java\jdk1.8.0_92\jre\lib\javaws.jar;D:\Program Files\Java\jdk1.8.0_92\jre\lib\jce.jar;D:\Program Files\Java\jdk1.8.0_92\jre\lib\jfr.jar;D:\Program Files\Java\jdk1.8.0_92\jre\lib\jfxswt.jar;D:\Program Files\Java\jdk1.8.0_92\jre\lib\jsse.jar;D:\Program Files\Java\jdk1.8.0_92\jre\lib\management-agent.jar;D:\Program Files\Java\jdk1.8.0_92\jre\lib\plugin.jar;D:\Program Files\Java\jdk1.8.0_92\jre\lib\resources.jar;D:\Program Files\Java\jdk1.8.0_92\jre\lib\rt.jar;E:\WorkSpace\daison\spirng-demo\target\classes;D:\maven-repository\org\springframework\boot\spring-boot-starter-web\2.5.1\spring-boot-starter-web-2.5.1.jar;D:\maven-repository\org\springframework\boot\spring-boot-starter\2.5.1\spring-boot-starter-2.5.1.jar;D:\maven-repository\org\springframework\boot\spring-boot\2.5.1\spring-boot-2.5.1.jar;D:\maven-repository\org\springframework\boot\spring-boot-autoconfigure\2.5.1\spring-boot-autoconfigure-2.5.1.jar;D:\maven-repository\org\springframework\boot\spring-boot-starter-logging\2.5.1\spring-boot-starter-logging-2.5.1.jar;D:\maven-repository\ch\qos\logback\logback-classic\1.2.3\logback-classic-1.2.3.jar;D:\maven-repository\ch\qos\logback\logback-core\1.2.3\logback-core-1.2.3.jar;D:\maven-repository\org\apache\logging\log4j\log4j-to-slf4j\2.14.1\log4j-to-slf4j-2.14.1.jar;D:\maven-repository\org\apache\logging\log4j\log4j-api\2.14.1\log4j-api-2.14.1.jar;D:\maven-repository\org\slf4j\jul-to-slf4j\1.7.30\jul-to-slf4j-1.7.30.jar;D:\maven-repository\jakarta\annotation\jakarta.annotation-api\1.3.5\jakarta.annotation-api-1.3.5.jar;D:\maven-repository\org\yaml\snakeyaml\1.28\snakeyaml-1.28.jar;D:\maven-repository\org\springframework\boot\spring-boot-starter-json\2.5.1\spring-boot-starter-json-2.5.1.jar;D:\maven-repository\com\fasterxml\jackson\core\jackson-databind\2.12.3\jackson-databind-2.12.3.jar;D:\maven-repository\com\fasterxml\jackson\core\jackson-annotations\2.12.3\jackson-annotations-2.12.3.jar;D:\maven-repository\com\fasterxml\jackson\core\jackson-core\2.12.3\jackson-core-2.12.3.jar;D:\maven-repository\com\fasterxml\jackson\datatype\jackson-datatype-jdk8\2.12.3\jackson-datatype-jdk8-2.12.3.jar;D:\maven-repository\com\fasterxml\jackson\datatype\jackson-datatype-jsr310\2.12.3\jackson-datatype-jsr310-2.12.3.jar;D:\maven-repositor
y\com\fasterxml\jackson\module\jackson-module-parameter-names\2.12.3\jackson-module-parameter-names-2.12.3.jar;D:\maven-repository\org\springframework\boot\spring-boot-starter-tomcat\2.5.1\spring-boot-starter-tomcat-2.5.1.jar;D:\maven-repository\org\apache\tomcat\embed\tomcat-embed-core\9.0.46\tomcat-embed-core-9.0.46.jar;D:\maven-repository\org\apache\tomcat\embed\tomcat-embed-el\9.0.46\tomcat-embed-el-9.0.46.jar;D:\maven-repository\org\apache\tomcat\embed\tomcat-embed-websocket\9.0.46\tomcat-embed-websocket-9.0.46.jar;D:\maven-repository\org\springframework\spring-web\5.3.8\spring-web-5.3.8.jar;D:\maven-repository\org\springframework\spring-beans\5.3.8\spring-beans-5.3.8.jar;D:\maven-repository\org\springframework\spring-webmvc\5.3.8\spring-webmvc-5.3.8.jar;D:\maven-repository\org\springframework\spring-aop\5.3.8\spring-aop-5.3.8.jar;D:\maven-repository\org\springframework\spring-context\5.3.8\spring-context-5.3.8.jar;D:\maven-repository\org\springframework\spring-expression\5.3.8\spring-expression-5.3.8.jar;D:\maven-repository\org\springframework\spring-core\5.3.8\spring-core-5.3.8.jar;D:\maven-repository\org\springframework\spring-jcl\5.3.8\spring-jcl-5.3.8.jar;D:\maven-repository\org\springframework\boot\spring-boot-starter-data-redis\2.5.1\spring-boot-starter-data-redis-2.5.1.jar;D:\maven-repository\org\springframework\data\spring-data-redis\2.5.1\spring-data-redis-2.5.1.jar;D:\maven-repository\org\springframework\data\spring-data-keyvalue\2.5.1\spring-data-keyvalue-2.5.1.jar;D:\maven-repository\org\springframework\data\spring-data-commons\2.5.1\spring-data-commons-2.5.1.jar;D:\maven-repository\org\springframework\spring-tx\5.3.8\spring-tx-5.3.8.jar;D:\maven-repository\org\springframework\spring-oxm\5.3.8\spring-oxm-5.3.8.jar;D:\maven-repository\org\springframework\spring-context-support\5.3.8\spring-context-support-5.3.8.jar;D:\maven-repository\redis\clients\jedis\3.8.0\jedis-3.8.0.jar;D:\maven-repository\org\slf4j\slf4j-api\1.7.30\slf4j-api-1.7.30.jar;D:\maven-repository\org\apache\commons\commons-pool2\2.9.0\commons-pool2-2.9.0.jar;D:\Program Files\JetBrains\IntelliJ IDEA 2019.3.3\lib\idea_rt.jar" com.example.springdemo.SpirngDemoApplication
Connected to the target VM, address: '127.0.0.1:53403', transport: 'socket'

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::                (v2.5.1)

2022-01-08 21:11:37.235  INFO 8256 --- [           main] c.e.springdemo.SpirngDemoApplication     : Starting SpirngDemoApplication using Java 1.8.0_192 on WIN-20210127PQJ with PID 8256 (E:\WorkSpace\daison\spirng-demo\target\classes started by Administrator in E:\WorkSpace\daison\spirng-demo)
2022-01-08 21:11:37.242  INFO 8256 --- [           main] c.e.springdemo.SpirngDemoApplication     : No active profile set, falling back to default profiles: default
2022-01-08 21:11:38.553  INFO 8256 --- [           main] .s.d.r.c.RepositoryConfigurationDelegate : Multiple Spring Data modules found, entering strict repository configuration mode!
2022-01-08 21:11:38.558  INFO 8256 --- [           main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data Redis repositories in DEFAULT mode.
2022-01-08 21:11:38.603  INFO 8256 --- [           main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 13 ms. Found 0 Redis repository interfaces.
2022-01-08 21:11:39.480  INFO 8256 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port(s): 9090 (http)
2022-01-08 21:11:39.502  INFO 8256 --- [           main] o.apache.catalina.core.StandardService   : Starting service [Tomcat]
2022-01-08 21:11:39.502  INFO 8256 --- [           main] org.apache.catalina.core.StandardEngine  : Starting Servlet engine: [Apache Tomcat/9.0.46]
2022-01-08 21:11:39.671  INFO 8256 --- [           main] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2022-01-08 21:11:39.672  INFO 8256 --- [           main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 2274 ms
2022-01-08 21:11:41.140  INFO 8256 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 9090 (http) with context path ''
2022-01-08 21:11:41.161  INFO 8256 --- [           main] c.e.springdemo.SpirngDemoApplication     : Started SpirngDemoApplication in 5.15 seconds (JVM running for 7.757)

1. Set a value by calling the endpoint in the browser:
http://localhost:9090/set?name=redis hello world

2. Get the value by calling the endpoint in the browser:
http://localhost:9090/get?key=name
