This article looks at how to deal with the question "Is there Netty latency caused by passing requests from the boss thread to worker threads?". The recommended answer below should be a useful reference for anyone facing the same problem.

Problem Description


I have some questions about Netty (server side) for TCP/IP applications.

I am wondering whether Netty can introduce latency (due to missing configuration, etc.) while passing a request from the boss thread to a worker thread.

我正在使用:

new OrderedMemoryAwareThreadPoolExecutor(350, 0, 0, 1, TimeUnit.SECONDS);

Actually, I set the max thread count to 350 because I am not sure about the optimal number. I log the number of simultaneously working threads every minute and the average seems too low (it barely exceeds 10), so I will decrease this number since it is not required.
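As a rough illustration (not part of the original question), a smaller executor could look like the sketch below; the pool size of 16 is only an assumed value based on the observed ~10 concurrent threads, and the 0/0 memory limits stay disabled exactly as in the snippet above.

// Hypothetical sizing: observed concurrency barely exceeds 10, so a core pool of 16 leaves headroom.
OrderedMemoryAwareThreadPoolExecutor executor = new OrderedMemoryAwareThreadPoolExecutor(16, 0, 0, 1, TimeUnit.SECONDS);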

Are there any other parameters or important points that I should be aware of in order to get the best performance?

bootstrap.setOption("tcpNoDelay", true); - Is there any disadvantage to setting this parameter, considering that delivery time is very important?

The thread pool executor:

OrderedMemoryAwareThreadPoolExecutor executor = new OrderedMemoryAwareThreadPoolExecutor(48, 0, 0, 1, TimeUnit.SECONDS);

This is my pipeline factory:

    ChannelPipeline pipeline = pipeline();
    pipeline.addLast("frameDecoder", new DelimiterBasedFrameDecoder(GProperties.getIntProperty("decoder.maxFrameLength", 8000 * 1024), Delimiters.nulDelimiter()));
    pipeline.addLast("stringDecoder", new StringDecoder(CharsetUtil.UTF_8));
    pipeline.addLast("frameEncoder", new NullTermMessageEncoder());
    pipeline.addLast("stringEncoder", new JSONEncoder(CharsetUtil.UTF_8));
    pipeline.addLast("timeout", new IdleStateHandler(idleTimer, 42, 0, 0));
    pipeline.addLast("executor", new ExecutionHandler(executor));
    pipeline.addLast("handler", new GServerHandler());

And the ServerBootstrap:

gServerBootstrap = new ServerBootstrap(new NioServerSocketChannelFactory(Executors.newCachedThreadPool(), Executors.newCachedThreadPool()));
gServerBootstrap.setPipelineFactory(new GServerPipelineFactory());
gServerBootstrap.setOption("backlog", 8129);
gServerBootstrap.setOption("child.tcpNoDelay", true);
gServerBootstrap.bind(new InetSocketAddress(GProperties.getIntProperty("server.port", 7679)));

What can you suggest for this configuration?

Recommended Answer

Netty boss threads are only used to set up connections; worker threads are used to run a NioWorker (non-blocking read/write) or an OioWorker (blocking read/write).

If you have an execution handler, the worker thread will submit the message event to the OrderedMemoryAwareThreadPoolExecutor.

1) Increasing the Netty I/O worker thread count beyond number of processors * 2 won't help. If you are using staged executors, having more than one staged execution handler for non-I/O tasks may increase latency.
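A minimal sketch of pinning the I/O worker count explicitly via the three-argument NioServerSocketChannelFactory constructor (illustrative only, not from the original answer; the variable names are placeholders):

   // Cap the I/O worker threads at 2 * processors; raising the count beyond this does not help.
   int workerCount = Runtime.getRuntime().availableProcessors() * 2;
   ServerBootstrap bootstrap = new ServerBootstrap(
           new NioServerSocketChannelFactory(
                   Executors.newCachedThreadPool(),  // boss threads (accept connections only)
                   Executors.newCachedThreadPool(),  // worker threads (non-blocking I/O)
                   workerCount));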

Note: It is better to set your own ObjectSizeEstimator implementation in the OMTPE constructor, because many CPU cycles are spent on calculating used channel memory.
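A minimal sketch of that suggestion, assuming a fixed average message size (the 1024-byte constant and the default thread factory are assumptions, not values from the question):

   // Constant-size estimator so the executor skips per-message size calculation.
   ObjectSizeEstimator fixedSizeEstimator = new ObjectSizeEstimator() {
       public int estimateSize(Object o) {
           return 1024; // assumed average message size in bytes
       }
   };

   OrderedMemoryAwareThreadPoolExecutor executor =
           new OrderedMemoryAwareThreadPoolExecutor(
                   48, 0, 0, 1, TimeUnit.SECONDS,
                   fixedSizeEstimator, Executors.defaultThreadFactory());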

2) There are some other Netty parameters you can try:

   //setting buffer size can improve I/O
   bootstrap.setOption("child.sendBufferSize", 1048576);
   bootstrap.setOption("child.receiveBufferSize", 1048576);

   // better to have a receive buffer size predictor
   bootstrap.setOption("receiveBufferSizePredictorFactory", new AdaptiveReceiveBufferSizePredictorFactory(MIN_PACKET_SIZE, INITIAL_PACKET_SIZE, MAX_PACKET_SIZE));

   //if the server is sending 1000 messages per sec, optimum write buffer water marks will
   //prevent unnecessary throttling, check the NioSocketChannelConfig doc
   bootstrap.setOption("writeBufferLowWaterMark", 32 * 1024);
   bootstrap.setOption("writeBufferHighWaterMark", 64 * 1024);

3) For the server bootstrap it should be bootstrap.setOption("child.tcpNoDelay", true).

There is an experimental hidden parameter:

The Netty NioWorker uses SelectorUtil.select to wait for selector events; the wait time is hard-coded in SelectorUtil:

selector.select(500);

Setting a smaller value gave better performance in the Netty SCTP transport implementation. Not sure about TCP.

This concludes the article on Netty latency caused by passing requests from the boss thread to worker threads. We hope the recommended answer is helpful.

