1-) Introduction
In the previous tutorial, I showed how Resilience4j can be used to rate-limit requests. In this tutorial, we will look at how the Bucket4j library can be used with Spring Boot to rate-limit requests based on the client IP address. Each IP address will be allowed to make at most N requests per time unit.
We will cover three different ways to rate-limit requests:
- Local rate-limiting via Caffeine
- Distributed rate-limiting via Redis
- Distributed rate-limiting via Hazelcast
2-) Adding Dependencies
You can choose one of the three options and skip the others.
2.1-) Caffeine Dependencies:
Add the following dependencies to the pom.xml file:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
<dependency>
    <groupId>com.giffing.bucket4j.spring.boot.starter</groupId>
    <artifactId>bucket4j-spring-boot-starter</artifactId>
    <version>0.7.0</version>
</dependency>
<dependency>
    <groupId>com.github.ben-manes.caffeine</groupId>
    <artifactId>jcache</artifactId>
</dependency>
2.2-) Redis Dependencies:
Add the following dependencies to the pom.xml file:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
<dependency>
    <groupId>com.giffing.bucket4j.spring.boot.starter</groupId>
    <artifactId>bucket4j-spring-boot-starter</artifactId>
    <version>0.7.0</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
2.3-) Hazelcast Dependencies:
Add the following dependencies to the pom.xml file:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
<dependency>
    <groupId>com.giffing.bucket4j.spring.boot.starter</groupId>
    <artifactId>bucket4j-spring-boot-starter</artifactId>
    <version>0.7.0</version>
</dependency>
<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast-spring</artifactId>
</dependency>
3-) Defining the Controller and Service:
Let’s add a sample REST controller as follows:
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

/**
 * A sample controller to test Bucket4j rate limiter.
 *
 * @author Okan ARDIC
 */
@RestController
public class SampleController {

    @GetMapping("/first")
    public String getMessageA() {
        return "first";
    }

    @GetMapping("/second")
    public String getMessageB() {
        return "second";
    }
}
Now let’s define a service class that determines the client IP whenever a new request is made:
import org.springframework.stereotype.Service;

import javax.servlet.http.HttpServletRequest;

@Service
public class SecurityService {

    /**
     * Returns the client IP from which the request was made. The {@code X-Forwarded-For} header is
     * also checked in case the service is located behind a load balancer.
     *
     * @return client IP.
     */
    public String getClientIP(HttpServletRequest request) {
        String xForwardedHeader = request.getHeader("X-Forwarded-For");
        if (xForwardedHeader == null) {
            return request.getRemoteAddr();
        }
        return xForwardedHeader.split(",")[0];
    }
}
The X-Forwarded-For header is important here to determine the actual client IP when requests come through a proxy server (e.g. a load balancer). From the documentation:
The X-Forwarded-For (XFF) request header is a de-facto standard header for identifying the originating IP address of a client connecting to a web server through a proxy server. When a client connects directly to a server, the client’s IP address is sent to the server (and is often written to server access logs). But if a client connection passes through any forward or reverse proxies, the server only sees the final proxy’s IP address, which is often of little use.
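Since the header can carry a comma-separated chain of addresses (the client first, then each proxy that forwarded the request), the service keeps only the first entry. A standalone sketch of that parsing logic, using a hypothetical helper that mirrors SecurityService (a trim() is added here because proxies typically insert a space after each comma):

```java
public class XffParsingExample {

    /**
     * Mirrors the SecurityService logic: fall back to the socket address when no
     * X-Forwarded-For header is present, otherwise take the first (client) entry.
     */
    static String clientIp(String xForwardedFor, String remoteAddr) {
        if (xForwardedFor == null) {
            return remoteAddr;
        }
        return xForwardedFor.split(",")[0].trim();
    }

    public static void main(String[] args) {
        // Chain of client + two proxies: only the first entry identifies the client
        System.out.println(clientIp("203.0.113.5, 70.41.3.18, 150.172.238.178", "10.0.0.1")); // 203.0.113.5
        // Direct connection, no proxy in between
        System.out.println(clientIp(null, "10.0.0.1")); // 10.0.0.1
    }
}
```

Keep in mind that clients can spoof this header, so in production you should only trust X-Forwarded-For values set by your own load balancer.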
4-) Configuring Bucket4j
We will configure Bucket4j to define separate rate-limits for different request paths.
Define Bucket4j configuration in application.yml as follows:
bucket4j:
  enabled: true
  filters:
    - cache-name: buckets
      url: /first(/|\?)?.*
      rate-limits:
        - expression: "@securityService.getClientIP(#this)"
          bandwidths:
            - capacity: 5
              time: 10
              unit: seconds
    - cache-name: buckets
      url: /second(/|\?)?.*
      rate-limits:
        - expression: "@securityService.getClientIP(#this)"
          bandwidths:
            - capacity: 2
              time: 5
              unit: seconds
You can define as many paths as you want to rate-limit under the filters section. Here we have two endpoints (/first and /second), each defined in the url parameter as a regular expression. The expression @securityService.getClientIP(#this), written in Spring Expression Language (SpEL), calls our SecurityService.getClientIP() method to determine the client IP address. In short, requests are counted based on the URL and the result of that expression. If you don’t want to rate-limit requests based on a particular expression, you can simply remove that attribute. You can check this link for more configuration options.
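To make the bandwidths settings concrete: each filter configures a token bucket, where capacity tokens are available per window, each matching request consumes one token, and tokens are refilled over time. The class below is a minimal plain-Java sketch of that idea (not Bucket4j’s actual implementation), using the capacity: 5 / time: 10 / unit: seconds values of the /first filter:

```java
import java.util.concurrent.TimeUnit;

/**
 * A minimal token-bucket sketch illustrating what "capacity: 5, time: 10,
 * unit: seconds" means: at most 5 tokens are held, tokens refill continuously
 * at 5 tokens per 10 seconds, and a request without a token is rejected.
 */
public class TokenBucketSketch {

    private final long capacity;
    private final double refillTokensPerNano;
    private double tokens;
    private long lastRefillNanos;

    public TokenBucketSketch(long capacity, long period, TimeUnit unit) {
        this.capacity = capacity;
        this.refillTokensPerNano = (double) capacity / unit.toNanos(period);
        this.tokens = capacity;              // bucket starts full
        this.lastRefillNanos = System.nanoTime();
    }

    /** Tries to consume one token; returns false when the bucket is empty. */
    public synchronized boolean tryConsume() {
        long now = System.nanoTime();
        // Refill proportionally to elapsed time, never exceeding capacity
        tokens = Math.min(capacity, tokens + (now - lastRefillNanos) * refillTokensPerNano);
        lastRefillNanos = now;
        if (tokens >= 1) {
            tokens -= 1;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // Same bandwidth as the /first filter: 5 requests per 10 seconds
        TokenBucketSketch bucket = new TokenBucketSketch(5, 10, TimeUnit.SECONDS);
        for (int i = 1; i <= 6; i++) {
            System.out.println("request " + i + " allowed=" + bucket.tryConsume());
        }
        // requests 1-5 are allowed, request 6 is rejected
    }
}
```

Bucket4j applies the same principle per bucket key, which here is the combination of the url pattern and the client IP returned by the SpEL expression.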
An important note: caching must be enabled via the @EnableCaching annotation on any configuration class. For example, you can add it to the application class as follows:
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.EnableCaching;

@SpringBootApplication
@EnableCaching
public class SpringbootBucket4jExampleApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringbootBucket4jExampleApplication.class, args);
    }
}
4.1-) Local Rate-Limiting via Caffeine
If you don’t need distributed rate-limiting features, you can easily configure Caffeine as the caching provider and apply rate-limiting locally.
Add the following configuration to application.yml:
spring:
  cache:
    caffeine:
      spec: maximumSize=1000000,expireAfterAccess=600s
    cache-names:
      - buckets
4.2-) Distributed Rate-Limiting via Redis:
If you have multiple replicas of the same application, Redis can be used as a shared in-memory cache for rate-limiting, so that all instances consume tokens from the same bucket. For example, say you have 3 instances of a public service behind a load balancer and you have configured the rate-limiter to allow 20 req/sec. Even when concurrent requests are distributed across all 3 instances in round-robin fashion, only 20 requests in total will be allowed per second and the rest will be rejected.
You need a running Redis instance to test this functionality. If you don’t have one installed, you can start a single Redis instance via Docker using the following command:
docker run --name redis -d -p 6379:6379 redis
The above command starts a new Redis instance in the background (detached mode via the -d flag) and publishes port 6379 to the host machine, so you can connect to the Redis instance via localhost:6379 on the same machine.
Add the following configuration to application.yml:
spring:
  cache:
    cache-names:
      - buckets
  redis:
    host: localhost
    port: 6379
Set the spring.redis.host and spring.redis.port properties to connect to a Redis instance. For all available Redis properties, please check this link.
4.3-) Distributed Rate-Limiting via Hazelcast:
To configure Hazelcast for rate-limiting, add the following configuration to application.yml:
spring:
  cache:
    jcache:
      provider: com.hazelcast.cache.impl.HazelcastServerCachingProvider
      config: classpath:hazelcast.xml
    cache-names:
      - buckets
Set the spring.cache.jcache.config property to point to a Hazelcast configuration file. A sample hazelcast.xml file looks like the following:
<?xml version="1.0" encoding="UTF-8"?>
<hazelcast xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.hazelcast.com/schema/config
                               http://www.hazelcast.com/schema/config/hazelcast-config-5.1.xsd">
    <cluster-name>rate-limiting-cluster</cluster-name>
    <network>
        <port port-count="100" auto-increment="true">5701</port>
        <join>
            <tcp-ip enabled="true">
                <member>127.0.0.1</member>
                <!-- Add list of IP addresses / host names that need to join the cluster -->
            </tcp-ip>
        </join>
    </network>
    <map name="buckets">
        <time-to-live-seconds>120</time-to-live-seconds>
        <in-memory-format>BINARY</in-memory-format>
        <metadata-policy>CREATE_ON_UPDATE</metadata-policy>
        <statistics-enabled>true</statistics-enabled>
    </map>
    <cache name="buckets">
    </cache>
</hazelcast>
5-) Running the Application:
Now we are all set. Run the application and send requests to the /first and /second endpoints to see the rate limits in action.
6-) Unit Testing:
A sample unit test can be written with the @RepeatedTest annotation, checking the HTTP status codes to verify that everything works as expected:
@RepeatedTest(3)
public void whenGetFirstThenFirstCall200AndThen429(RepetitionInfo repetitionInfo) throws Exception {
    ResultMatcher result = repetitionInfo.getCurrentRepetition() == 1
            ? status().isOk()
            : status().isTooManyRequests();
    mockMvc.perform(MockMvcRequestBuilders.get("/first"))
            .andExpect(result);
}
In the above method, the test is repeated 3 times. The first repetition is expected to return a 200 (OK) response, and the remaining two repetitions are expected to return 429 (Too Many Requests). Note that this expectation matches a bucket capacity of 1 request per window, so the test profile should configure a smaller capacity than the 5 requests per 10 seconds used for /first above.
7-) Conclusion:
Bucket4j is a rate-limiting library that integrates easily with Spring. It also supports rate-limiting requests based on specific conditions, such as limiting by client IP address as shown here.
You can download the source code from GitHub.
References:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-For
https://docs.spring.io/spring-boot/docs/2.0.x/reference/html/boot-features-caching.html
https://docs.spring.io/spring-boot/docs/current/reference/html/application-properties.html#application-properties.data.spring.redis.client-name
https://github.com/MarcGiffing/bucket4j-spring-boot-starter