Assembling a Netty message in a handler

Problem description

I am in the process of prototyping Netty for my project. I am trying to implement a simple Text/String oriented protocol on top of Netty. In my pipeline I am using the following:

public class TextProtocolPipelineFactory implements ChannelPipelineFactory
{
    @Override
    public ChannelPipeline getPipeline() throws Exception
    {
        // Create a default pipeline implementation.
        ChannelPipeline pipeline = pipeline();

        // Add the text line codec combination first,
        pipeline.addLast("framer", new DelimiterBasedFrameDecoder(2000000, Delimiters.lineDelimiter()));
        pipeline.addLast("decoder", new StringDecoder());
        pipeline.addLast("encoder", new StringEncoder());

        // and then business logic.
        pipeline.addLast("handler", new TextProtocolHandler());

        return pipeline;
    }
}

I have a DelimiterBasedFrameDecoder, a String Decoder, and a String Encoder in the pipeline.

As a result of this setup my incoming message is split into multiple Strings. This results in multiple invocations of the "messageReceived" method of my handler. This is fine. However, this requires me to accumulate these messages in memory and re-construct the message when the last string packet of the message is received.

My question is, what is the most memory efficient way to "accumulate the strings" and then "re-construct them into the final message". I have 3 options so far. They are:


  • Use a StringBuilder to accumulate and toString to construct. (This gives the worst memory performance; in fact, for large payloads with many concurrent users the performance is unacceptable.)

  • Accumulate into a byte array via a ByteArrayOutputStream, then construct using the byte array. (This gives much better performance than option 1, but it still hogs quite a bit of memory.)

  • Accumulate into a dynamic ChannelBuffer and use toString(charset) to construct. (I have not profiled this setup yet, but I am curious how it compares to the two options above. Has anyone solved this issue using a dynamic ChannelBuffer?)
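For reference, option 2 can be sketched framework-free in plain Java (the ByteAccumulator name is made up for illustration; a real handler would call append from messageReceived):

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class ByteAccumulator {
    private final ByteArrayOutputStream buf = new ByteArrayOutputStream();

    // Append one incoming chunk's raw bytes; no char decoding happens here.
    public void append(byte[] chunk) {
        buf.write(chunk, 0, chunk.length);
    }

    // Decode the accumulated bytes exactly once, when the message is complete.
    public String reconstruct() {
        return new String(buf.toByteArray(), StandardCharsets.UTF_8);
    }
}
```

The point is that only one char-decoding pass happens at the end; toByteArray() still makes one final copy, which is the overhead a wrapping/composite buffer (option 3) would avoid.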

I am new to Netty and its possible I may be doing something wrong architecturally. Your input will be greatly appreciated.

Thanks in advance,
Sohil

Update: adding my custom FrameDecoder implementation for Norman to review:

public final class TextProtocolFrameDecoder extends FrameDecoder
{
    public static ChannelBuffer messageDelimiter()
    {
        return ChannelBuffers.wrappedBuffer(new byte[] {'E', 'O', 'F'});
    }

    @Override
    protected Object decode(ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer)
        throws Exception
    {
        int eofIndex = find(buffer, messageDelimiter());

        if (eofIndex != -1)
        {
            // Read only the bytes before the delimiter, then discard the delimiter itself.
            // (Reading buffer.readableBytes() here would also consume the "EOF" marker
            // and any bytes of the next message that have already arrived.)
            ChannelBuffer frame = buffer.readBytes(eofIndex);
            buffer.skipBytes(messageDelimiter().capacity());
            return frame;
        }

        return null;
    }

    private static int find(ChannelBuffer haystack, ChannelBuffer needle)
    {
        for (int i = haystack.readerIndex(); i < haystack.writerIndex(); i++)
        {
            int haystackIndex = i;
            int needleIndex;
            for (needleIndex = 0; needleIndex < needle.capacity(); needleIndex++)
            {
                if (haystack.getByte(haystackIndex) != needle.getByte(needleIndex))
                {
                    break;
                }
                else
                {
                    haystackIndex++;
                    if (haystackIndex == haystack.writerIndex()
                        && needleIndex != needle.capacity() - 1)
                    {
                        return -1;
                    }
                }
            }

            if (needleIndex == needle.capacity())
            {
                // Found the needle in the haystack!
                return i - haystack.readerIndex();
            }
        }
        return -1;
    }
}


Accepted answer

I think you would get the best performance if you implemented your own FrameDecoder. This would allow you to buffer all the data until you really need to dispatch it to the next handler in the chain. Please refer to the FrameDecoder apidocs.

If you don't want to handle the detection of CRLF yourself, you could also keep the DelimiterBasedFrameDecoder and just add a custom FrameDecoder behind it to assemble the ChannelBuffers that represent a line of text.
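A framework-free sketch of what such an aggregating stage behind the DelimiterBasedFrameDecoder would do (the LineAggregator name and the "EOF" terminator line are assumptions for illustration, not Netty API; in Netty the same logic would live in a FrameDecoder's decode method):

```java
import java.util.ArrayList;
import java.util.List;

public class LineAggregator {
    private final List<String> lines = new ArrayList<>();

    // Feed one already-framed line; returns the full message once the
    // terminator line arrives, or null while the message is incomplete.
    public String feed(String line) {
        if ("EOF".equals(line)) {
            String message = String.join("\n", lines);
            lines.clear();
            return message;
        }
        lines.add(line);
        return null;
    }
}
```

Returning null until the message is complete mirrors the FrameDecoder contract, where decode returns null to signal "keep buffering".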

In both cases, FrameDecoder will take care to minimize memory copies as much as possible by trying to just "wrap" buffers rather than copying them each time.
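The wrap-versus-copy distinction is the same one the JDK's own ByteBuffer exposes; a minimal stdlib sketch (standing in here for Netty's ChannelBuffers.wrappedBuffer versus copiedBuffer):

```java
import java.nio.ByteBuffer;

public class WrapVsCopy {
    public static ByteBuffer wrap(byte[] data) {
        // Wrapping: the buffer shares the caller's backing array; no bytes are copied.
        return ByteBuffer.wrap(data);
    }

    public static ByteBuffer copy(byte[] data) {
        // Copying: a new array is allocated and every byte is duplicated into it.
        ByteBuffer b = ByteBuffer.allocate(data.length);
        b.put(data);
        b.flip();
        return b;
    }
}
```

Wrapping is O(1) regardless of payload size, which is why an accumulation strategy built on wrapped/composite buffers scales better than one that re-copies bytes on every chunk.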

That said, if you want the best performance, go with the first approach; if you want it easy, go with the second ;)
