Haproxy + netty: a way to prevent connection-reset exceptions?

Problem Description

We're using haproxy in front of a netty-3.6-run backend. We are handling a huge number of connections, some of which can be long-lived.

Now the problem is that when haproxy closes a connection in order to rebalance, it does so by sending a TCP RST. When the sun.nio.ch classes employed by netty see this, they throw an IOException: "Connection reset by peer".

Trace:

sun.nio.ch.FileDispatcherImpl.read0(Native Method):1 in ""
sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39):1 in ""
sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:225):1 in ""
sun.nio.ch.IOUtil.read(IOUtil.java:193):1 in ""
sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:375):1 in ""
org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64):1 in ""
org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109):1 in ""
org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312):1 in ""
org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90):1 in ""
org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178):1 in ""
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145):1 in ""
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615):1 in ""
java.lang.Thread.run(Thread.java:724):1 in ""

This causes the following problems, depending on the haproxy configuration:

This is what works best (as haproxy seems to close most connections with a FIN rather than an RST), but it still produces about three exceptions per server per second. It also effectively neuters load balancing, because some incoming connections are very long-lived with very high throughput: with pretend-keepalive, they never get rebalanced to another server by haproxy.

Since our backend expects keep-alive connections to really be kept alive (and hence does not close them on its own), this setting means that every connection eventually nets one exception, which in turn crashes our servers. We tried adding prefer-last-server, but it doesn't help much.

This should theoretically give both proper load balancing and no exceptions. However, it seems that after our backend servers respond, there is a race as to which side sends its RST first: haproxy or our registered ChannelFutureListener.CLOSE. In practice, we still get too many exceptions and our servers crash.
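
For reference, a minimal sketch of what a write-then-close via ChannelFutureListener.CLOSE typically looks like in Netty 3; the handler name and response payload here are assumptions, not the actual code in question:

import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.channel.ChannelFutureListener;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
import org.jboss.netty.util.CharsetUtil;

// Hypothetical handler: writes a response and closes the channel once the
// write completes; this close is what races with haproxy's own RST.
public class CloseOnWriteHandler extends SimpleChannelUpstreamHandler {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        ctx.getChannel()
           .write(ChannelBuffers.copiedBuffer("response", CharsetUtil.UTF_8))
           .addListener(ChannelFutureListener.CLOSE);
    }
}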

Interestingly, the exceptions generally become more frequent the more workers we supply our channels with. I guess that speeds up reading more than writing.
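
For context, the number of workers in Netty 3 is set when constructing the channel factory; a minimal sketch of that knob, with the executors and worker count being assumed placeholder values:

import java.util.concurrent.Executors;

import org.jboss.netty.bootstrap.ServerBootstrap;
import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;

// Sketch of where the worker count comes from: the third argument to
// NioServerSocketChannelFactory is the number of NIO worker threads.
public class BootstrapSketch {
    public static ServerBootstrap create(int workerCount) {
        return new ServerBootstrap(
            new NioServerSocketChannelFactory(
                Executors.newCachedThreadPool(),  // boss executor
                Executors.newCachedThreadPool(),  // worker executor
                workerCount));
    }
}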

Anyway, I've been reading up on the different channel and socket options in netty as well as haproxy for a while now, and haven't really found anything that sounded like a solution (or worked when I tried it).

Recommended Answer

The Tomcat NIO handler just does:

} catch (java.net.SocketException e) {
    // SocketExceptions are normal
    Http11NioProtocol.log.debug(
        sm.getString("http11protocol.proto.socketexception.debug"), e);
} catch (java.io.IOException e) {
    // IOExceptions are normal
    Http11NioProtocol.log.debug(
        sm.getString("http11protocol.proto.ioexception.debug"), e);
}

So it seems that the initial throw by the internal sun classes (sun.nio.ch.FileDispatcherImpl) really is unavoidable unless you reimplement them yourself.
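
One way to apply the same idea on the Netty side is to catch the exception in exceptionCaught and treat it as normal instead of letting it propagate; the following is only a sketch under that assumption, with a hypothetical handler name:

import java.io.IOException;

import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ExceptionEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

// Hypothetical last handler in the pipeline: treats the IOException thrown
// on a connection reset as normal (like Tomcat does) and just closes the
// channel instead of letting the exception take the server down.
public class IgnoreResetHandler extends SimpleChannelUpstreamHandler {
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        if (e.getCause() instanceof IOException) {
            ctx.getChannel().close();
        } else {
            ctx.sendUpstream(e);
        }
    }
}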
