Redis Lettuce P99 Latency Increase Caused by the Default ForkJoinPool

2024-01-01 10:38


Background:

After the recommendation system upgraded to the RedisCluster4 SDK, performance regressed compared with the previous Jedis client used against Redis 2.8; concretely, the P99 latency of the affected API endpoints went up.

Root cause:

The project uses parallelStream for parallel execution, and it shares a single ForkJoinPool (the common pool) with the CompletableFuture tasks that Lettuce uses to deliver results asynchronously.
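To see the sharing directly, here is a minimal sketch (not from the original project) that prints which threads run CompletableFuture callbacks and parallelStream work when no explicit executor is supplied; on a JDK 8+ JVM with more than one core, both typically report names like ForkJoinPool.commonPool-worker-N:

import java.util.Arrays;
import java.util.concurrent.CompletableFuture;

public class SharedPoolDemo {
    public static void main(String[] args) {
        // runAsync without an Executor argument defaults to ForkJoinPool.commonPool()
        CompletableFuture.runAsync(() ->
                System.out.println("CompletableFuture runs on: " + Thread.currentThread().getName()))
                .join();

        // parallelStream schedules its per-element work on the same common pool
        // (some elements may also run on the calling thread)
        Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8).parallelStream().forEach(i ->
                System.out.println("parallelStream runs on: " + Thread.currentThread().getName()));
    }
}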

Solution:

Remove the dependency on parallelStream and run that work on a dedicated thread pool instead.

Inspecting the thread dumps showed that parallelStream spawned a large number of ForkJoin threads, so we suspected resource contention with the Lettuce future threads. After removing parallelStream, the P99 improved noticeably.
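As a rough sketch of the fix (the pool size, task body, and logger are illustrative, not the production code), the CPU-bound batch work that used to run through parallelStream can be submitted to its own ExecutorService, leaving the common pool free for the Lettuce futures:

ExecutorService cpuPool = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
try {
    List<Future<?>> futures = new ArrayList<>();
    for (int j = 0; j < 10000; j++) {
        final int item = j;
        // the per-item work that used to be the body of parallelStream().forEach(...)
        futures.add(cpuPool.submit(() -> logger.info("dedicated pool log :{}", item)));
    }
    for (Future<?> f : futures) {
        f.get();  // wait for the whole batch; no common-pool threads are involved
    }
} catch (Exception ex) {
    logger.error("dedicated pool error, {}", ex.getMessage(), ex);
}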

[Figure: P99 of the API endpoint]

[Figure: P99 of the REDIS4Client hmget]

[Figure: P99 of the REDIS4Client firstResponse]

http://matrix.snowballfinance.com/d/RGsiCO7Zz/recommend-recall?orgId=1&from=1616569005047&to=1616583596889

In addition, CPU usage and the total thread count no longer show large fluctuations.

 

 

Analysis:

The execution thread pool behind parallelStream

For a forEach stream, set a breakpoint in the ForEachOps::compute method, or simply break on the output statement inside forEach, and you end up in the ForkJoinWorkerThread class:

public class ForkJoinWorkerThread extends Thread {
    final ForkJoinPool pool;                // the pool this thread works in
    final ForkJoinPool.WorkQueue workQueue; // work-stealing mechanics
    public void run() {
        ....
        pool.runWorker(workQueue);
        ....
    }
}

The execution thread pool behind CompletableFuture

private static final Executor asyncPool = useCommonPool ?
    ForkJoinPool.commonPool() : new ThreadPerTaskExecutor();

// what is useCommonPool?
private static final boolean useCommonPool =
    (ForkJoinPool.getCommonPoolParallelism() > 1);

public static int getCommonPoolParallelism() {
    return commonParallelism;
}

 

 

private static ForkJoinPool makeCommonPool() {
    int parallelism = -1;  // the parallelism defaults to -1
    ForkJoinWorkerThreadFactory factory = null;
    ......
    if (parallelism < 0 &&
        (parallelism = Runtime.getRuntime().availableProcessors() - 1) <= 0)  // so the number of worker threads = CPU cores - 1
        parallelism = 1;
    if (parallelism > MAX_CAP)
        parallelism = MAX_CAP;
    return new ForkJoinPool(parallelism, factory, handler, LIFO_QUEUE,
                            "ForkJoinPool.commonPool-worker-");  // the worker thread name prefix
}
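For reference, the parallelism of the common pool can be raised (it remains a shared pool either way) with the JVM system property java.util.concurrent.ForkJoinPool.common.parallelism; the value 16 below is only an example, and it must take effect before anything touches the common pool:

// Set very early (e.g., the first line of main), before the common pool is initialized;
// passing -Djava.util.concurrent.ForkJoinPool.common.parallelism=16 on the command line works as well.
System.setProperty("java.util.concurrent.ForkJoinPool.common.parallelism", "16");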

Lettuce, for its part, returns results through LettuceFutures.awaitOrCancel(RedisFuture<T> cmd, long timeout, TimeUnit unit). The await of RedisFuture is implemented by the AsyncCommand class on top of CompletableFuture, so it ends up sharing the same ForkJoinPool as the parallelStream above.

/**
 * Wait until futures are complete or the supplied timeout is reached. Commands are canceled if the timeout is reached but
 * the command is not finished.
 *
 * @param cmd Command to wait for
 * @param timeout Maximum time to wait for futures to complete
 * @param unit Unit of time for the timeout
 * @param <T> Result type
 *
 * @return Result of the command.
 */
public static <T> T awaitOrCancel(RedisFuture<T> cmd, long timeout, TimeUnit unit) {

    try {
        if (!cmd.await(timeout, unit)) {
            cmd.cancel(true);
            throw ExceptionFactory.createTimeoutException(Duration.ofNanos(unit.toNanos(timeout)));
        }
        return cmd.get();
    } catch (RuntimeException e) {
        throw e;
    } catch (ExecutionException e) {

        if (e.getCause() instanceof RedisCommandExecutionException) {
            throw ExceptionFactory.createExecutionException(e.getCause().getMessage(), e.getCause());
        }

        if (e.getCause() instanceof RedisCommandTimeoutException) {
            throw new RedisCommandTimeoutException(e.getCause());
        }

        throw new RedisException(e.getCause());
    } catch (InterruptedException e) {

        Thread.currentThread().interrupt();
        throw new RedisCommandInterruptedException(e);
    } catch (Exception e) {
        throw ExceptionFactory.createExecutionException(null, e);
    }
}
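For context, a typical synchronous-over-async call path looks roughly like the sketch below (assuming the Lettuce 5.x API; the address, key, fields, and timeout are made up). The calling thread blocks inside the underlying CompletableFuture until the netty event loop completes the command, or the command is cancelled on timeout:

import io.lettuce.core.KeyValue;
import io.lettuce.core.LettuceFutures;
import io.lettuce.core.RedisFuture;
import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;
import io.lettuce.core.cluster.api.async.RedisAdvancedClusterAsyncCommands;
import java.util.List;
import java.util.concurrent.TimeUnit;

public class LettuceAwaitDemo {
    public static void main(String[] args) {
        RedisClusterClient client = RedisClusterClient.create("redis://192.168.64.169:8056");
        try (StatefulRedisClusterConnection<String, String> conn = client.connect()) {
            RedisAdvancedClusterAsyncCommands<String, String> async = conn.async();
            // hmget is dispatched asynchronously; the RedisFuture is backed by a CompletableFuture
            RedisFuture<List<KeyValue<String, String>>> future = async.hmget("user:1", "name", "age");
            // awaitOrCancel blocks the caller until the result arrives, or cancels the command on timeout
            List<KeyValue<String, String>> values = LettuceFutures.awaitOrCancel(future, 200, TimeUnit.MILLISECONDS);
            values.forEach(kv -> System.out.println(kv.getKey() + " = " + kv.getValueOrElse(null)));
        } finally {
            client.shutdown();
        }
    }
}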

Isolating the thread-pool resources means that fast Redis-bound work no longer has to compete for time slices with slow tasks queued in the same pool.
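The same isolation idea applies to application-level CompletableFuture usage: passing an explicit Executor keeps that work off the common pool entirely. A minimal sketch, where the pool size and loadFromRemote() are illustrative placeholders:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DedicatedExecutorDemo {
    private static final ExecutorService ioPool = Executors.newFixedThreadPool(16);

    // placeholder for a real remote call
    static String loadFromRemote() {
        return "value";
    }

    public static void main(String[] args) {
        // both the supplier and the callback run on ioPool instead of ForkJoinPool.commonPool()
        CompletableFuture
                .supplyAsync(DedicatedExecutorDemo::loadFromRemote, ioPool)
                .thenAcceptAsync(v -> System.out.println("loaded: " + v), ioPool)
                .join();
        ioPool.shutdown();
    }
}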

One question to leave for later: what is the difference between concurrency and parallelism?

Test code:

1. Using parallelStream

RedisCluster redisCluster = RedisClusterImpl.create("192.168.64.169:8056,192.168.64.169:8053", 4);
Thread thread1 = new Thread(() -> {
    int i = 0;
    while (true) {
        try {
            redisCluster.setex("k" + i, 10000, "v" + i);
            Long start = System.currentTimeMillis();
            logger.info("RedisCluster4 info key:{}, value:{}", "k" + i, redisCluster.get("k" + i));
            Long costTime = System.currentTimeMillis() - start;
            if (costTime > 10) {
                logger.info("RedisCluster4 slowlog :{}", costTime);
            }
            i++;
        } catch (Exception ex) {
            logger.error("RedisCluster4 error, {}", ex.getMessage(), ex);
        }
    }
});
thread1.start();

Thread thread2 = new Thread(() -> {
    while (true) {
        try {
            List<Integer> list = new ArrayList<>();
            for (int j = 0; j < 10000; j++) {
                list.add(j);
            }
            list.parallelStream().forEach(f -> {
                logger.info("parallelStream log :{}", f);
                for (int j = 0; j < 10000; j++) {
                }
            });
        } catch (Exception ex) {
            logger.error("RedisCluster4 error, {}", ex.getMessage(), ex);
        }
    }
});

thread2.start();

Monitoring log output: average P99 ≈ 100 ms

2021-03-25 11:38:33.976|192.168.18.128|sep|UNKNOWN|app|TIMER|REDIS4.get||{"count":7509,"delta":7509,"min":0.22,"max":183.88,"mean":20.53,"stddev":27.56,"median":20.53,"p50":5.44,"p75":38.77,"p95":59.57,"p98":104.12,"p99":150.59,"p999":177.77,"mean_rate":739.0,"m1":660.86,"m5":647.65,"m15":645.36,"ratio":7.33,"rate_unit":"events/second","duration_unit":"milliseconds"}
2021-03-25 11:38:33.979|192.168.18.128|sep|UNKNOWN|app|TIMER|REDIS4.setex||{"count":275,"delta":275,"min":0.27,"max":215.22,"mean":19.2,"stddev":30.82,"median":19.2,"p50":3.88,"p75":38.08,"p95":56.55,"p98":107.48,"p99":176.4,"p999":215.22,"mean_rate":27.3,"m1":21.87,"m5":21.02,"m15":20.87,"ratio":9.19,"rate_unit":"events/second","duration_unit":"milliseconds"}

2. Using a dedicated pool of threads, but configured the same way as the common ForkJoinPool

RedisCluster redisCluster = RedisClusterImpl.create("192.168.64.169:8056,192.168.64.169:8053", 4);
Thread thread1 = new Thread(() -> {
    int i = 0;
    while (true) {
        try {
            redisCluster.setex("k" + i, 10000, "v" + i);
            Long start = System.currentTimeMillis();
            logger.info("RedisCluster4 info key:{}, value:{}", "k" + i, redisCluster.get("k" + i));
            Long costTime = System.currentTimeMillis() - start;
            if (costTime > 10) {
                logger.info("RedisCluster4 slowlog :{}", costTime);
            }
            i++;
        } catch (Exception ex) {
            logger.error("RedisCluster4 error, {}", ex.getMessage(), ex);
        }
    }
});
thread1.start();

ForkJoinPool forkJoinPool = new ForkJoinPool(Runtime.getRuntime().availableProcessors() - 1, ForkJoinPool.defaultForkJoinWorkerThreadFactory, null, true);
forkJoinPool.submit(new Runnable() {
    @Override
    public void run() {
        while (true) {
            try {
                for (int j = 0; j < 10000; j++) {
                    logger.info("parallelStream log :{}", j);
                    redisCluster.get("k" + j);
                }
            } catch (Exception ex) {
                logger.error("RedisCluster4 error, {}", ex.getMessage(), ex);
            }
        }
    }
});

Monitoring log output: average P99 ≈ 36 ms

2021-03-25 11:43:58.565|192.168.18.128|sep|UNKNOWN|app|TIMER|REDIS4.get||{"count":22924,"delta":4670,"min":0.2,"max":91.88,"mean":3.45,"stddev":9.19,"median":3.45,"p50":0.8,"p75":1.24,"p95":34.76,"p98":35.57,"p99":36.17,"p999":40.72,"mean_rate":456.99,"m1":445.99,"m5":433.02,"m15":430.13,"ratio":10.5,"rate_unit":"events/second","duration_unit":"milliseconds"}
2021-03-25 11:43:58.575|192.168.18.128|sep|UNKNOWN|app|TIMER|REDIS4.setex||{"count":7421,"delta":1510,"min":0.22,"max":152.41,"mean":3.36,"stddev":9.85,"median":3.36,"p50":0.88,"p75":1.3,"p95":34.92,"p98":36.06,"p99":37.13,"p999":152.41,"mean_rate":147.83,"m1":145.05,"m5":141.57,"m15":140.81,"ratio":11.06,"rate_unit":"events/second","duration_unit":"milliseconds"}

Stack of the thread that shares the pool with the ForkJoin tasks:

java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1695)
java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1775)
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1915)
io.lettuce.core.protocol.AsyncCommand.await(AsyncCommand.java:83)
io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:112)

3. Plain RedisCluster4 client queries only

RedisCluster redisCluster = RedisClusterImpl.create("192.168.64.169:8056,192.168.64.169:8053", 4);
Thread thread1 = new Thread(() -> {
    int i = 0;
    while (true) {
        try {
            redisCluster.setex("k" + i, 10000, "v" + i);
            Long start = System.currentTimeMillis();
            logger.info("RedisCluster4 info key:{}, value:{}", "k" + i, redisCluster.get("k" + i));
            Long costTime = System.currentTimeMillis() - start;
            if (costTime > 10) {
                logger.info("RedisCluster4 slowlog :{}", costTime);
            }
            i++;
        } catch (Exception ex) {
            logger.error("RedisCluster4 error, {}", ex.getMessage(), ex);
        }
    }
});
thread1.start();

Monitoring log output: average P99 ≈ 35 ms

2021-03-25 13:47:05.137|192.168.18.128|sep|UNKNOWN|app|TIMER|REDIS4.get||{"count":12846,"delta":2362,"min":0.21,"max":85.23,"mean":2.27,"stddev":7.77,"median":2.27,"p50":0.54,"p75":0.71,"p95":2.55,"p98":34.67,"p99":35.12,"p999":85.23,"mean_rate":213.46,"m1":195.49,"m5":164.06,"m15":156.74,"ratio":15.48,"rate_unit":"events/second","duration_unit":"milliseconds"}
2021-03-25 13:47:05.146|192.168.18.128|sep|UNKNOWN|app|TIMER|REDIS4.setex||{"count":12847,"delta":2362,"min":0.22,"max":84.04,"mean":1.96,"stddev":6.91,"median":1.96,"p50":0.62,"p75":0.79,"p95":1.9,"p98":34.53,"p99":35.03,"p999":84.04,"mean_rate":213.32,"m1":195.42,"m5":163.9,"m15":156.56,"ratio":17.9,"rate_unit":"events/second","duration_unit":"milliseconds"}

All monitoring log files and test code: MyselfRedis4Test.java

Comparing the three cases confirms the conclusion above: isolating the thread-pool resources does help.

Recommendations:

Do not use parallel streams in high-concurrency API code paths. Never use parallel streams for work that does I/O, and never for work that sleeps. If you really need the parallelism, create a global Fork-Join pool of your own and split the tasks into it yourself (a sketch follows below).
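If parallel-stream syntax is still wanted, a commonly relied-upon (though not officially documented) JDK 8 behavior is that a parallel stream whose terminal operation is started from inside a ForkJoinPool task runs on that pool rather than on the common pool. A hedged sketch, reusing the list and logger from the test code above:

ForkJoinPool dedicatedPool = new ForkJoinPool(Runtime.getRuntime().availableProcessors() - 1);
dedicatedPool.submit(() ->
        list.parallelStream().forEach(f -> logger.info("parallelStream log :{}", f))
).join();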

Bonus:

Answer to the question left above:

Parallelism means tasks literally execute at the same time; concurrency means tasks may execute interleaved, taking turns on the same resources.

 

