Background
Over Christmas we were hit in production by tens of thousands of requests per second. Combining the WAF output with the relevant logs, we found that several of our mini-program APIs were being abused heavily.
These APIs sit behind Varnish on the front end, so in theory they should return in milliseconds and put no real pressure on production, least of all on the home page.
We took close to a hundred real user requests and replayed them, and found that the response time of these requests was far higher than what Varnish normally returns to the front end.
Digging further, we found the problem: these were all GET requests, and the parameter values after the question mark (?) were not product, channel, or module IDs that exist anywhere on our site. They were randomly generated and changed on every request. Some values were literally card-game slang like 老K, S, 皮蛋, 丁勾, 嘻嘻, 哈哈. Have you ever seen a sku_id of 哈哈 or helloworld? Neither had we.
Cross-checking with the WAF, we found that each IP sending these random, non-existent module and product IDs made only a single request, but there were a huge number of them: a five-digit count of IPs all firing once at the same moment. The picture was clear. This is classic black-market or crawler traffic deliberately bypassing Varnish on the front end, then bypassing Redis as well, and hitting the DB directly, which is why the home page was putting so much pressure on the database.
The proposed improvement
Black-market operations, data-harvesting companies, and crawlers control huge pools of IPs. They do not need high per-IP frequency to scrape or attack a site: with six or even seven digits' worth of IPs each hitting you once every few seconds, your site will fall over. A million IPs each requesting once every five seconds is already 200,000 requests per second.
So look at it from the business logic. Product data is keyed by a product_id; even a generous catalogue has, say, 100,000 SKUs. If a request arrives carrying a sku_id that does not exist in the system at all, why serve it at all?
That reasoning led us to the following anti-penetration solution.
The code in my previous post, "SpringBoot + Redis Bloom filter: the right way to block malicious traffic from penetrating the cache", depended on Redis itself loading a Bloom filter module.
This time we stayed cloud-native: we use the Bloom filter algorithm from Google's Guava utility classes and persist the bits into Redis via setBit. Used on its own, Guava keeps the Bloom filter in application memory, so its contents are wiped on every restart; by combining Guava's algorithm with Redis as the storage medium we avoid both that problem and the need to install a Bloom filter plugin into Redis as in the previous post.
After all, installing a plugin into a production Redis is a big deal. And this was an emergency: we had 20 minutes to get something live to intercept these malicious requests, or the mini-program's home page would not hold. So we once again reached for this "clever" trick.
The project code follows.
The full code as used in production (my open-source version here is newer and stronger than the production code) ^_^
application_local.yml
server:
  port: 9080
  tomcat:
    max-http-post-size: -1
    max-http-header-size: 10240000
spring:
  application:
    name: redis-demo
  servlet:
    multipart:
      max-file-size: 10MB
      max-request-size: 10MB
  redis:
    password: 111111
    sentinel:
      nodes: localhost:27001,localhost:27002,localhost:27003
      master: master1
    database: 0
    switchFlag: 1
    lettuce:
      pool:
        max-active: 50
        max-wait: 10000
        max-idle: 10
        min-idle: 5
        shutdown-timeout: 2000
        timeBetweenEvictionRunsMillis: 5000
    timeout: 5000
pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.mk.demo</groupId>
        <artifactId>springboot-demo</artifactId>
        <version>0.0.1</version>
    </parent>
    <artifactId>redis-demo</artifactId>
    <name>redis-demo</name>
    <packaging>jar</packaging>
    <dependencies>
        <dependency>
            <groupId>com.auth0</groupId>
            <artifactId>java-jwt</artifactId>
        </dependency>
        <dependency>
            <groupId>cn.hutool</groupId>
            <artifactId>hutool-crypto</artifactId>
        </dependency>
        <!-- redis must -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-redis</artifactId>
            <exclusions>
                <exclusion>
                    <groupId>org.springframework.boot</groupId>
                    <artifactId>spring-boot-starter-logging</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.slf4j</groupId>
                    <artifactId>slf4j-log4j12</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <!-- jedis must -->
        <dependency>
            <groupId>redis.clients</groupId>
            <artifactId>jedis</artifactId>
        </dependency>
        <!-- redisson must start -->
        <dependency>
            <groupId>org.redisson</groupId>
            <artifactId>redisson-spring-boot-starter</artifactId>
            <version>3.13.6</version>
            <exclusions>
                <exclusion>
                    <groupId>org.redisson</groupId>
                    <artifactId>redisson-spring-data-23</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-lang3</artifactId>
        </dependency>
        <!-- redisson must end -->
        <dependency>
            <groupId>org.redisson</groupId>
            <artifactId>redisson-spring-data-21</artifactId>
            <version>3.13.1</version>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-thymeleaf</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
            <exclusions>
                <exclusion>
                    <groupId>org.springframework.boot</groupId>
                    <artifactId>spring-boot-starter-logging</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.slf4j</groupId>
                    <artifactId>slf4j-log4j12</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-jdbc</artifactId>
            <exclusions>
                <exclusion>
                    <groupId>org.springframework.boot</groupId>
                    <artifactId>spring-boot-starter-logging</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>druid</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-configuration-processor</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-log4j2</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
            <exclusions>
                <exclusion>
                    <groupId>org.springframework.boot</groupId>
                    <artifactId>spring-boot-starter-logging</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.slf4j</groupId>
                    <artifactId>slf4j-log4j12</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.aspectj</groupId>
            <artifactId>aspectjweaver</artifactId>
        </dependency>
        <dependency>
            <groupId>com.lmax</groupId>
            <artifactId>disruptor</artifactId>
        </dependency>
        <dependency>
            <groupId>com.google.guava</groupId>
            <artifactId>guava</artifactId>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-databind</artifactId>
        </dependency>
    </dependencies>
</project>
Here is the key information from the parent POM referenced above.
Don't copy versions off the net; most of what is out there is wrong. The Spring Boot, Guava, Redis, Redisson, and Jackson versions here are strictly matched to one another. For the log4j entries, follow my earlier blog post and move to 2.17.1 (currently Apache Log4j 2.17.1 is the safe version).
<properties>
    <java.version>1.8</java.version>
    <jacoco.version>0.8.3</jacoco.version>
    <aldi-sharding.version>0.0.1</aldi-sharding.version>
    <!-- <spring-boot.version>2.4.2</spring-boot.version> -->
    <spring-boot.version>2.3.1.RELEASE</spring-boot.version>
    <zookeeper.version>3.4.13</zookeeper.version>
    <spring-cloud.version>Greenwich.SR5</spring-cloud.version>
    <dubbo.version>2.7.3</dubbo.version>
    <curator-framework.version>4.0.1</curator-framework.version>
    <curator-recipes.version>2.8.0</curator-recipes.version>
    <druid.version>1.2.6</druid.version>
    <guava.version>27.0.1-jre</guava.version>
    <fastjson.version>1.2.59</fastjson.version>
    <dubbo-registry-nacos.version>2.7.3</dubbo-registry-nacos.version>
    <nacos-client.version>1.1.4</nacos-client.version>
    <mysql-connector-java.version>5.1.46</mysql-connector-java.version>
    <disruptor.version>3.4.2</disruptor.version>
    <aspectj.version>1.8.13</aspectj.version>
    <spring.data.redis>1.8.14-RELEASE</spring.data.redis>
    <seata.version>1.0.0</seata.version>
    <netty.version>4.1.42.Final</netty.version>
    <nacos.spring.version>0.1.4</nacos.spring.version>
    <lombok.version>1.16.22</lombok.version>
    <javax.servlet.version>3.1.0</javax.servlet.version>
    <mybatis.version>2.1.0</mybatis.version>
    <pagehelper-mybatis.version>1.2.3</pagehelper-mybatis.version>
    <spring.kafka.version>1.3.10.RELEASE</spring.kafka.version>
    <kafka.client.version>1.0.2</kafka.client.version>
    <shardingsphere.jdbc.version>4.0.0</shardingsphere.jdbc.version>
    <xmemcached.version>2.4.6</xmemcached.version>
    <swagger.version>2.9.2</swagger.version>
    <swagger.bootstrap.ui.version>1.9.6</swagger.bootstrap.ui.version>
    <swagger.model.version>1.5.23</swagger.model.version>
    <swagger-annotations.version>1.5.22</swagger-annotations.version>
    <swagger-models.version>1.5.22</swagger-models.version>
    <swagger-bootstrap-ui.version>1.9.5</swagger-bootstrap-ui.version>
    <sky-sharding-jdbc.version>0.0.1</sky-sharding-jdbc.version>
    <cxf.version>3.1.6</cxf.version>
    <jackson-databind.version>2.11.1</jackson-databind.version>
    <gson.version>2.8.6</gson.version>
    <groovy.version>2.5.8</groovy.version>
    <logback-ext-spring.version>0.1.4</logback-ext-spring.version>
    <jcl-over-slf4j.version>1.7.25</jcl-over-slf4j.version>
    <spock-spring.version>2.0-M2-groovy-2.5</spock-spring.version>
    <xxljob.version>2.2.0</xxljob.version>
    <java-jwt.version>3.10.0</java-jwt.version>
    <commons-lang.version>2.6</commons-lang.version>
    <hutool-crypto.version>5.0.0</hutool-crypto.version>
    <maven.compiler.source>${java.version}</maven.compiler.source>
    <maven.compiler.target>${java.version}</maven.compiler.target>
    <compiler.plugin.version>3.8.1</compiler.plugin.version>
    <war.plugin.version>3.2.3</war.plugin.version>
    <jar.plugin.version>3.1.1</jar.plugin.version>
    <quartz.version>2.2.3</quartz.version>
    <h2.version>1.4.197</h2.version>
    <zkclient.version>3.4.14</zkclient.version>
    <httpcore.version>4.4.10</httpcore.version>
    <httpclient.version>4.5.6</httpclient.version>
    <mockito-core.version>3.0.0</mockito-core.version>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    <oseq-aldi.version>2.0.22-RELEASE</oseq-aldi.version>
    <poi.version>4.1.0</poi.version>
    <poi-ooxml.version>4.1.0</poi-ooxml.version>
    <poi-ooxml-schemas.version>4.1.0</poi-ooxml-schemas.version>
    <dom4j.version>1.6.1</dom4j.version>
    <xmlbeans.version>3.1.0</xmlbeans.version>
    <nacos-discovery.version>2.2.5.RELEASE</nacos-discovery.version>
    <spring-cloud-alibaba.version>2.2.1.RELEASE</spring-cloud-alibaba.version>
    <redission.version>3.16.1</redission.version>
</properties>
Next, the auto-wired classes.
BloomFilterHelper
package org.mk.demo.redisdemo.bloomfilter;

import com.google.common.base.Preconditions;
import com.google.common.hash.Funnel;
import com.google.common.hash.Hashing;

public class BloomFilterHelper<T> {
    private int numHashFunctions;
    private int bitSize;
    private Funnel<T> funnel;

    public BloomFilterHelper(Funnel<T> funnel, int expectedInsertions, double fpp) {
        Preconditions.checkArgument(funnel != null, "funnel must not be null");
        this.funnel = funnel;
        // compute the length of the bit array
        bitSize = optimalNumOfBits(expectedInsertions, fpp);
        // compute how many hash functions to apply
        numHashFunctions = optimalNumOfHashFunctions(expectedInsertions, bitSize);
    }

    public int[] murmurHashOffset(T value) {
        int[] offset = new int[numHashFunctions];
        long hash64 = Hashing.murmur3_128().hashObject(value, funnel).asLong();
        int hash1 = (int) hash64;
        int hash2 = (int) (hash64 >>> 32);
        for (int i = 1; i <= numHashFunctions; i++) {
            int nextHash = hash1 + i * hash2;
            if (nextHash < 0) {
                nextHash = ~nextHash;
            }
            offset[i - 1] = nextHash % bitSize;
        }
        return offset;
    }

    /**
     * Compute the length of the bit array.
     */
    private int optimalNumOfBits(long n, double p) {
        if (p == 0) {
            // fall back to the smallest representable probability
            p = Double.MIN_VALUE;
        }
        return (int) (-n * Math.log(p) / (Math.log(2) * Math.log(2)));
    }

    /**
     * Compute the number of hash functions.
     */
    private int optimalNumOfHashFunctions(long n, long m) {
        return Math.max(1, (int) Math.round((double) m / n * Math.log(2)));
    }
}
RedisBloomFilter, the auto-wired service
package org.mk.demo.redisdemo.bloomfilter;

import com.google.common.base.Preconditions;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Service;

@Service
public class RedisBloomFilter {
    private Logger logger = LoggerFactory.getLogger(this.getClass());

    @Autowired
    private RedisTemplate redisTemplate;

    /**
     * Add a value according to the given Bloom filter.
     */
    public <T> void addByBloomFilter(BloomFilterHelper<T> bloomFilterHelper, String key, T value) {
        Preconditions.checkArgument(bloomFilterHelper != null, "bloomFilterHelper must not be null");
        int[] offset = bloomFilterHelper.murmurHashOffset(value);
        for (int i : offset) {
            logger.info(">>>>>>add into bloom filter->key : " + key + " " + "value : " + i);
            redisTemplate.opsForValue().setBit(key, i, true);
        }
    }

    /**
     * Check whether a value exists according to the given Bloom filter.
     */
    public <T> boolean includeByBloomFilter(BloomFilterHelper<T> bloomFilterHelper, String key, T value) {
        Preconditions.checkArgument(bloomFilterHelper != null, "bloomFilterHelper must not be null");
        int[] offset = bloomFilterHelper.murmurHashOffset(value);
        for (int i : offset) {
            logger.info(">>>>>>check key from bloomfilter : " + key + " " + "value : " + i);
            if (!redisTemplate.opsForValue().getBit(key, i)) {
                return false;
            }
        }
        return true;
    }
}
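To see what addByBloomFilter and includeByBloomFilter do without standing up Spring and Redis, here is a minimal self-contained sketch (my own illustration, not the production code): a `java.util.BitSet` plays the role of the Redis bitmap, and a simple double-hashing scheme stands in for Guava's murmur3_128. The mechanics are the same: adding sets every offset bit; a value "exists" only if every one of its offset bits is set.

```java
import java.util.BitSet;

public class BloomSketch {
    private final BitSet bits;          // stands in for the Redis bitmap key
    private final int bitSize;
    private final int numHashFunctions;

    public BloomSketch(int bitSize, int numHashFunctions) {
        this.bits = new BitSet(bitSize);
        this.bitSize = bitSize;
        this.numHashFunctions = numHashFunctions;
    }

    // Kirsch–Mitzenmacher double hashing: derive k positions from two base hashes.
    // The second hash here is a made-up mix, just for illustration.
    private int[] offsets(String value) {
        int hash1 = value.hashCode();
        int hash2 = Integer.rotateLeft(hash1, 16) ^ 0x9E3779B9;
        int[] offset = new int[numHashFunctions];
        for (int i = 1; i <= numHashFunctions; i++) {
            int nextHash = hash1 + i * hash2;
            if (nextHash < 0) {
                nextHash = ~nextHash;
            }
            offset[i - 1] = nextHash % bitSize;
        }
        return offset;
    }

    public void add(String value) {            // analogous to addByBloomFilter
        for (int i : offsets(value)) {
            bits.set(i);                       // setBit(key, i, true)
        }
    }

    public boolean mightContain(String value) { // analogous to includeByBloomFilter
        for (int i : offsets(value)) {
            if (!bits.get(i)) {
                return false;                  // definitely absent
            }
        }
        return true;                           // present, or a rare false positive
    }
}
```

Note the asymmetry this buys you: `mightContain == false` is a hard guarantee of absence, which is exactly the property that lets us reject garbage sku_ids before they reach Redis or the DB.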
RedisSentinelConfig, the core Redis configuration class
Inside it I declare an initBloomFilterHelper method that returns a BloomFilterHelper<String>.
package org.mk.demo.redisdemo.config;

import com.fasterxml.jackson.annotation.JsonAutoDetect;
import com.fasterxml.jackson.annotation.PropertyAccessor;
import com.fasterxml.jackson.databind.ObjectMapper;
import redis.clients.jedis.HostAndPort;
import org.apache.commons.lang3.StringUtils;
import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
import org.mk.demo.redisdemo.bloomfilter.BloomFilterHelper;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.*;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.connection.lettuce.LettucePoolingClientConfiguration;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;
import org.springframework.stereotype.Component;

import java.time.Duration;
import java.util.LinkedHashSet;
import java.util.Set;

import com.google.common.base.Charsets;
import com.google.common.hash.Funnel;

@Configuration
@EnableCaching
@Component
public class RedisSentinelConfig {
    private Logger logger = LoggerFactory.getLogger(this.getClass());

    @Value("${spring.redis.nodes:localhost:7001}")
    private String nodes;
    @Value("${spring.redis.max-redirects:3}")
    private Integer maxRedirects;
    @Value("${spring.redis.password}")
    private String password;
    @Value("${spring.redis.database:0}")
    private Integer database;
    @Value("${spring.redis.timeout}")
    private int timeout;
    @Value("${spring.redis.sentinel.nodes}")
    private String sentinel;
    @Value("${spring.redis.lettuce.pool.max-active:8}")
    private Integer maxActive;
    @Value("${spring.redis.lettuce.pool.max-idle:8}")
    private Integer maxIdle;
    @Value("${spring.redis.lettuce.pool.max-wait:-1}")
    private Long maxWait;
    @Value("${spring.redis.lettuce.pool.min-idle:0}")
    private Integer minIdle;
    @Value("${spring.redis.sentinel.master}")
    private String master;
    @Value("${spring.redis.switchFlag}")
    private String switchFlag;
    @Value("${spring.redis.lettuce.pool.shutdown-timeout}")
    private Integer shutdown;
    @Value("${spring.redis.lettuce.pool.timeBetweenEvictionRunsMillis}")
    private long timeBetweenEvictionRunsMillis;

    public String getSwitchFlag() {
        return switchFlag;
    }

    /**
     * Connection pool configuration.
     */
    @Bean
    public LettucePoolingClientConfiguration getPoolConfig() {
        GenericObjectPoolConfig config = new GenericObjectPoolConfig();
        config.setMaxTotal(maxActive);
        config.setMaxWaitMillis(maxWait);
        config.setMaxIdle(maxIdle);
        config.setMinIdle(minIdle);
        config.setTimeBetweenEvictionRunsMillis(timeBetweenEvictionRunsMillis);
        return LettucePoolingClientConfiguration.builder().poolConfig(config)
                .commandTimeout(Duration.ofMillis(timeout)).shutdownTimeout(Duration.ofMillis(shutdown)).build();
    }

    /**
     * Build the connection factory: sentinel, standalone, or cluster mode.
     */
    @Bean
    @ConditionalOnMissingBean
    public LettuceConnectionFactory lettuceConnectionFactory() {
        LettuceConnectionFactory factory = null;
        String[] split = nodes.split(",");
        Set<HostAndPort> nodeSet = new LinkedHashSet<>();
        for (int i = 0; i < split.length; i++) {
            try {
                String[] hostPort = split[i].split(":");
                nodeSet.add(new HostAndPort(hostPort[0], Integer.parseInt(hostPort[1])));
            } catch (Exception e) {
                logger.error(">>>>>>configuration error, please check: " + e.getMessage(), e);
                throw new RuntimeException(String.format("configuration error, please check node=[%s]", split[i]));
            }
        }
        // sentinel mode
        if (!StringUtils.isEmpty(sentinel)) {
            logger.info(">>>>>>Redis use SentinelConfiguration");
            RedisSentinelConfiguration redisSentinelConfiguration = new RedisSentinelConfiguration();
            String[] sentinelArray = sentinel.split(",");
            for (String s : sentinelArray) {
                try {
                    String[] hostPort = s.split(":");
                    redisSentinelConfiguration.addSentinel(new RedisNode(hostPort[0], Integer.parseInt(hostPort[1])));
                } catch (Exception e) {
                    logger.error(">>>>>>configuration error, please check: " + e.getMessage(), e);
                    throw new RuntimeException(String.format("configuration error, please check sentinel node=[%s]", s));
                }
            }
            redisSentinelConfiguration.setMaster(master);
            redisSentinelConfiguration.setPassword(password);
            factory = new LettuceConnectionFactory(redisSentinelConfiguration, getPoolConfig());
        }
        // otherwise: standalone for a single node, cluster for several
        else {
            if (nodeSet.size() < 2) {
                logger.info(">>>>>>Redis use RedisStandaloneConfiguration");
                for (HostAndPort n : nodeSet) {
                    RedisStandaloneConfiguration redisStandaloneConfiguration = new RedisStandaloneConfiguration();
                    if (!StringUtils.isEmpty(password)) {
                        redisStandaloneConfiguration.setPassword(RedisPassword.of(password));
                    }
                    redisStandaloneConfiguration.setPort(n.getPort());
                    redisStandaloneConfiguration.setHostName(n.getHost());
                    factory = new LettuceConnectionFactory(redisStandaloneConfiguration, getPoolConfig());
                }
            } else {
                logger.info(">>>>>>Redis use RedisClusterConfiguration");
                RedisClusterConfiguration redisClusterConfiguration = new RedisClusterConfiguration();
                nodeSet.forEach(n -> redisClusterConfiguration.addClusterNode(new RedisNode(n.getHost(), n.getPort())));
                if (!StringUtils.isEmpty(password)) {
                    redisClusterConfiguration.setPassword(RedisPassword.of(password));
                }
                redisClusterConfiguration.setMaxRedirects(maxRedirects);
                factory = new LettuceConnectionFactory(redisClusterConfiguration, getPoolConfig());
            }
        }
        return factory;
    }

    @Bean
    public RedisTemplate<String, Object> redisTemplate(LettuceConnectionFactory lettuceConnectionFactory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(lettuceConnectionFactory);
        Jackson2JsonRedisSerializer jacksonSerial = new Jackson2JsonRedisSerializer<>(Object.class);
        ObjectMapper om = new ObjectMapper();
        // serialize all fields (field, getter, and setter access of any visibility, including private)
        om.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
        jacksonSerial.setObjectMapper(om);
        StringRedisSerializer stringSerial = new StringRedisSerializer();
        template.setKeySerializer(stringSerial);
        // template.setValueSerializer(stringSerial);
        template.setValueSerializer(jacksonSerial);
        template.setHashKeySerializer(stringSerial);
        template.setHashValueSerializer(jacksonSerial);
        template.afterPropertiesSet();
        return template;
    }

    // initialize the Bloom filter helper and register it in the Spring container
    @Bean
    public BloomFilterHelper<String> initBloomFilterHelper() {
        return new BloomFilterHelper<>(
                (Funnel<String>) (from, into) -> into.putString(from, Charsets.UTF_8).putString(from, Charsets.UTF_8),
                100000000, 0.001);
    }
}
Using the code
Remember: a Bloom filter only supports appending. You cannot remove a single entry; to "delete", you drop the whole filter and feed the data in again.
Here are two examples of how we run this in production.
Example 1: protect all "Swordsman"-tier members before an activity starts
Say the promotion starts on New Year's Day, January 1, 2022; before that date, all Swordsman-tier members need to come under protection.
So we write a job that loads tens of millions of members into the Bloom filter in one shot. When the promotion goes live at 9:00 on January 1, any request carrying a member token (the ut value) that does not exist in the system is rejected right at the Bloom filter layer.
Example 2: protect every sku_id in the catalogue
Say the mini program or front-end app has 16 product categories totalling roughly 120,000 SKUs, and these SKUs go on and off the shelf two or three times a day. We build a job that runs every 5 minutes: it pulls every SKU in "on-shelf" state from the database and feeds it into the Bloom filter, first deleting the filter's key. To be more precise about timing we do it asynchronously over MQ: once all the shelf changes are done and the operator clicks [Apply], an MQ message tells a Service to delete the old Bloom filter key and re-feed the full set of sku_ids; the whole process finishes within seconds.
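The rebuild step above can be sketched as plain Java (my own illustration, not the production job): a `BitSet` stands in for the Redis key and a hard-coded list stands in for the "on-shelf" SKU query, but the pattern is the same as in production: build a fresh filter, feed it the full SKU set, and swap it in, since entries can never be removed from an existing filter.

```java
import java.util.Arrays;
import java.util.BitSet;
import java.util.List;

public class SkuBloomRebuildJob {
    static final int BIT_SIZE = 1 << 22;
    static final int NUM_HASH = 10;
    static volatile BitSet skuFilter = new BitSet(BIT_SIZE); // stands in for the "bloom_sku" Redis key

    // same double-hashing shape as the helper; the second hash is a made-up mix
    static int[] offsets(String value) {
        int h1 = value.hashCode();
        int h2 = Integer.rotateLeft(h1, 16) ^ 0x9E3779B9;
        int[] out = new int[NUM_HASH];
        for (int i = 1; i <= NUM_HASH; i++) {
            int next = h1 + i * h2;
            if (next < 0) next = ~next;
            out[i - 1] = next % BIT_SIZE;
        }
        return out;
    }

    /** Delete the old filter, then feed every on-shelf SKU: the only safe way to "remove" entries. */
    static void rebuild(List<String> onShelfSkus) {
        BitSet fresh = new BitSet(BIT_SIZE);         // "delete the key first"
        for (String sku : onShelfSkus) {
            for (int i : offsets(sku)) fresh.set(i); // re-feed the full set
        }
        skuFilter = fresh;                           // swap in the new filter
    }

    static boolean mightContain(String sku) {
        for (int i : offsets(sku)) {
            if (!skuFilter.get(i)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // In production this would run on a 5-minute schedule (Spring @Scheduled or a
        // ScheduledExecutorService), or be triggered by the MQ message after [Apply],
        // with onShelfSkus coming from the database query.
        rebuild(Arrays.asList("sku_1", "sku_2"));
        System.out.println(mightContain("sku_1"));
    }
}
```

The swap is what makes off-shelf SKUs disappear: anything not re-fed into the fresh filter is rejected on the next check.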
Now let's walk through the business code for Example 1. We built a Service that loads all user tokens when the application starts.
UTBloomFilterInit
package org.mk.demo.redisdemo.bloomfilter;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.core.annotation.Order;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Component;

@Component
@Order(1) // when several classes implement ApplicationRunner, this annotation fixes the execution order
public class UTBloomFilterInit implements ApplicationRunner {
    private final static String BLOOM_UT = "bloom_ut";
    private Logger logger = LoggerFactory.getLogger(this.getClass());

    @Autowired
    private BloomFilterHelper bloomFilterHelper;
    @Autowired
    private RedisBloomFilter redisBloomFilter;
    @Autowired
    private RedisTemplate redisTemplate;

    @Override
    public void run(ApplicationArguments args) throws Exception {
        try {
            if (redisTemplate.hasKey(BLOOM_UT)) {
                logger.info(">>>>>>bloom filter key->" + BLOOM_UT + " existed, delete it first then init");
                redisTemplate.delete(BLOOM_UT);
            }
            for (int i = 0; i < 10000; i++) {
                StringBuffer ut = new StringBuffer();
                ut.append("ut_");
                ut.append(i);
                redisBloomFilter.addByBloomFilter(bloomFilterHelper, BLOOM_UT, ut.toString());
            }
            logger.info(">>>>>>init ut into redis bloom successfully");
        } catch (Exception e) {
            logger.error(">>>>>>init ut into redis bloom failed:" + e.getMessage(), e);
        }
    }
}
On Spring Boot startup this Service simulates loading the full user population into the Bloom filter.
RedisBloomController
package org.mk.demo.redisdemo.controller;

import org.mk.demo.redisdemo.bean.UserBean;
import org.mk.demo.redisdemo.bloomfilter.BloomFilterHelper;
import org.mk.demo.redisdemo.bloomfilter.RedisBloomFilter;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("demo")
public class RedisBloomController {
    private final static String BLOOM_UT = "bloom_ut";
    private Logger logger = LoggerFactory.getLogger(this.getClass());

    @Autowired
    private BloomFilterHelper bloomFilterHelper;
    @Autowired
    private RedisBloomFilter redisBloomFilter;

    @PostMapping(value = "/redis/checkBloom", produces = "application/json")
    @ResponseBody
    public String check(@RequestBody UserBean user) {
        try {
            boolean b = redisBloomFilter.includeByBloomFilter(bloomFilterHelper, BLOOM_UT, user.getUt());
            if (b) {
                return "existed";
            } else {
                return "not existed";
            }
        } catch (Exception e) {
            logger.error(">>>>>>check bloom error: " + e.getMessage(), e);
            return "check error";
        }
    }
}
Testing the interception
Requesting with a ut that exists in the Bloom filter, we get "existed" back.
Requesting with a ut that does not exist in the Bloom filter, we get "not existed" back.
Either check responds in milliseconds under 3,000 concurrent requests.
Summary
In production we can in fact load hundreds of millions of entries into the Bloom filter, and a Bloom filter is far smaller than storing hashes or MD5 digests while producing almost no collisions.
The filter's size is determined by the two values passed to BloomFilterHelper: expectedInsertions and fpp (the false-positive probability).
With expectedInsertions = 100,000,000 and fpp = 0.001, that reads as: across 100 million entries, the error (false-positive) rate is one in a thousand, while a Bloom filter "false" is a 100% guarantee the value does not exist. The higher the precision, the more Redis storage is required, and it is allocated once up front: it is not the case that 10,000 entries cost 200 KB today and 100,000 entries grow it to 12 MB tomorrow. Let's see how much Redis memory 100 million entries at that precision actually occupy.
100 million entries cost less than 200 MB, a trivially small case for a production Redis.
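That footprint follows directly from the same two formulas used in BloomFilterHelper, so you can estimate the Redis memory up front before allocating the filter. A small stand-alone calculation (same math, widened to long to be safe):

```java
public class BloomSizing {
    // m = -n * ln(p) / (ln 2)^2 : bits needed for n entries at false-positive rate p
    static long optimalBits(long n, double p) {
        return (long) (-n * Math.log(p) / (Math.log(2) * Math.log(2)));
    }

    // k = m/n * ln 2 : number of hash functions
    static int optimalHashes(long n, long m) {
        return Math.max(1, (int) Math.round((double) m / n * Math.log(2)));
    }

    public static void main(String[] args) {
        long n = 100_000_000L;  // 100 million entries, as in initBloomFilterHelper
        double p = 0.001;       // 0.1% false-positive rate
        long bits = optimalBits(n, p);
        System.out.println(bits / 8 / 1024 / 1024 + " MB");              // prints "171 MB"
        System.out.println(optimalHashes(n, bits) + " hash functions");  // prints "10 hash functions"
    }
}
```

So 100 million entries at a 0.1% false-positive rate need about 1.44 billion bits, roughly 171 MB, which matches the "not even 200 MB" figure above; each lookup costs 10 getBit calls.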