Consider an echo server implemented using Boost.asio. Read events from connected clients result in blocks of data being placed onto an arrival event queue. A pool of threads works through these events - for each event, a thread takes the data in the event and echoes it back to the connected client. As shown in the diagram above, there could be multiple events in the event queue, all from a single client. In order to ensure that these events for a given client are executed and delivered in order, strands are used. In this case, all events from a given connected client will be executed in a strand for that client.

My question is: how do strands guarantee the correct order of processing of events? I presume there must be some kind of lock-per-strand, but even that won't be sufficient, so there must be more to it, and I was hoping someone could explain it or point me to some code which does this.

I found this document: How strands work and why you should use them. It sheds some light on the mechanism, but says that in a strand "Handler execution order is not guaranteed". Does that mean that we could end up with receiving back "Strawberry forever. fields"?

Also - whenever a new client connects, do we have to create a new strand, so that there is one strand per client?

Finally - when a read event arrives, how do we know which strand to add it to?
The strand has to be looked up from all strands using the connection as a key?

Solution

strand provides a guarantee of non-concurrency and of the invocation order of handlers; strand does not control the order in which operations are executed and demultiplexed. Use a strand if you have either:

- multiple threads accessing a shared object that is not thread safe
- a need for a guaranteed sequential ordering of handlers

The io_service will provide the desired and expected ordering of buffers being filled or used, in the order in which operations are initiated. For instance, if the socket has "Strawberry fields forever." available to be read, then given:

buffer1.resize(11); // buffer is a std::vector managed elsewhere
buffer2.resize(7);  // buffer is a std::vector managed elsewhere
buffer3.resize(8);  // buffer is a std::vector managed elsewhere
socket.async_read_some(boost::asio::buffer(buffer1), handler1);
socket.async_read_some(boost::asio::buffer(buffer2), handler2);
socket.async_read_some(boost::asio::buffer(buffer3), handler3);

when the operations complete:

- handler1 is invoked, and buffer1 will contain "Strawberry "
- handler2 is invoked, and buffer2 will contain "fields "
- handler3 is invoked, and buffer3 will contain "forever."

However, the order in which the completion handlers are invoked is unspecified. This unspecified ordering remains true even with a strand.

Operation Demultiplexing

Asio uses the Proactor design pattern to demultiplex operations. On most platforms, this is implemented in terms of a Reactor. The official documentation mentions the components and their responsibilities. Consider the following example:

socket.async_read_some(buffer, handler);

The caller is the initiator, starting an async_read_some asynchronous operation and creating the handler completion handler. The asynchronous operation is executed by the StreamSocketService operation processor:

- Within the initiating function, if the socket has no other outstanding asynchronous read operations and data is available, then StreamSocketService will read from the socket and enqueue the handler completion handler into the io_service.
- Otherwise, the read operation is queued onto the socket, and the reactor is informed to notify Asio once data becomes available on the socket. When the io_service is run and data is available on the socket, the reactor informs Asio. Asio will then dequeue an outstanding read operation from the socket, execute it, and enqueue the handler completion handler into the io_service.

The io_service proactor will dequeue a completion handler and demultiplex it to one of the threads that are running the io_service, from which the handler completion handler will be executed. The order of invocation of the completion handlers is unspecified.
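This demultiplexing is easiest to see when a single io_service is serviced by several threads: whichever thread happens to dequeue a given completion handler executes it. The following minimal sketch is only an illustration of that setup (the two-thread pool and the placement of the work object are assumptions for the example, not part of the answer above):

// Minimal sketch: one io_service serviced by a small thread pool.
// Completion handlers enqueued into the io_service may be executed
// by any of these threads, so their relative order is unspecified
// unless a strand is used.
#include <boost/asio.hpp>
#include <thread>
#include <vector>

int main()
{
    boost::asio::io_service io_service;

    // Keep run() from returning while no asynchronous work is pending yet.
    boost::asio::io_service::work work(io_service);

    std::vector<std::thread> pool;
    for (int i = 0; i < 2; ++i)
        pool.emplace_back([&io_service] { io_service.run(); });

    // ... initiate asynchronous operations here ...

    io_service.stop(); // let run() return in the pool threads
    for (auto& t : pool)
        t.join();
}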
Multiple Operations

If multiple operations of the same type are initiated on a socket, it is currently unspecified as to the order in which the buffers will be used or filled. However, in the current implementation, each socket uses a FIFO queue for each type of pending operation (e.g. a queue for read operations, a queue for write operations, etc.). The networking-ts draft, which is based partially on Asio, specifies: the buffers are filled in the order in which these operations were issued. The order of invocation of the completion handlers for these operations is unspecified. Given:

socket.async_read_some(buffer1, handler1); // op1
socket.async_read_some(buffer2, handler2); // op2

As op1 was initiated before op2, buffer1 is guaranteed to contain data that was received earlier in the stream than the data contained in buffer2, but handler2 may be invoked before handler1.

Composed Operations

Composed operations are composed of zero or more intermediate operations. For example, the async_read() composed asynchronous operation is composed of zero or more intermediate stream.async_read_some() operations.

The current implementation uses operation chaining to create a continuation: a single async_read_some() operation is initiated, and within its internal completion handler it determines whether to initiate another async_read_some() operation or to invoke the user's completion handler (a sketch of such a continuation follows this section). Because of the continuation, the async_read documentation requires that no other reads occur until the composed operation completes:

The program must ensure that the stream performs no other read operations (such as async_read, the stream's async_read_some function, or any other composed operations that perform reads) until this operation completes.

If a program violates this requirement, one may observe interwoven data, because of the aforementioned order in which buffers are filled.

For a concrete example, consider the case where an async_read() operation is initiated to read 26 bytes of data from a socket:

buffer.resize(26); // buffer is a std::vector managed elsewhere
boost::asio::async_read(socket, boost::asio::buffer(buffer), handler);

If the socket receives "Strawberry ", "fields ", and then "forever.", the async_read() operation may be composed of one or more socket.async_read_some() operations. For instance, it could be composed of 3 intermediate operations:

- The first async_read_some() operation reads 11 bytes containing "Strawberry " into the buffer starting at an offset of 0. The completion condition of reading 26 bytes has not been satisfied, so another async_read_some() operation is initiated to continue the operation.
- The second async_read_some() operation reads 7 bytes containing "fields " into the buffer starting at an offset of 11. The completion condition of reading 26 bytes has not been satisfied, so another async_read_some() operation is initiated to continue the operation.
- The third async_read_some() operation reads 8 bytes containing "forever." into the buffer starting at an offset of 18. The completion condition of reading 26 bytes has been satisfied, so handler is enqueued into the io_service.

When the handler completion handler is invoked, buffer contains "Strawberry fields forever."
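The continuation technique described above can be illustrated with a hand-rolled equivalent of async_read(). This is only a sketch of the chaining idea, not Asio's actual implementation; read_until_full, total_read, and the fixed completion condition of filling the whole buffer are names and assumptions introduced for the illustration:

// Sketch: a composed read built from chained async_read_some() calls.
// Each intermediate handler checks the completion condition and either
// initiates another async_read_some() or invokes the user's handler.
#include <boost/asio.hpp>
#include <cstddef>
#include <vector>

template <typename Handler>
void read_until_full(boost::asio::ip::tcp::socket& socket,
                     std::vector<char>& buffer,   // managed elsewhere, kept alive by the caller
                     std::size_t total_read,
                     Handler handler)
{
    socket.async_read_some(
        boost::asio::buffer(buffer.data() + total_read,
                            buffer.size() - total_read),
        [&socket, &buffer, total_read, handler]
        (const boost::system::error_code& error, std::size_t bytes_transferred)
        {
            const std::size_t now_read = total_read + bytes_transferred;
            if (error || now_read == buffer.size())
            {
                // Completion condition met (or error): invoke the user's handler.
                handler(error, now_read);
            }
            else
            {
                // Not enough data yet: chain another intermediate operation.
                read_until_full(socket, buffer, now_read, handler);
            }
        });
}

A call such as read_until_full(socket, buffer, 0, handler) would then behave like the async_read() example above, filling the 26-byte buffer across one or more intermediate reads before invoking handler exactly once.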
Strand

strand is used to provide serialized execution of handlers in a guaranteed order. Given:

- a strand object s
- a function object f1 that is added to strand s via s.post(), or via s.dispatch() when s.running_in_this_thread() == false
- a function object f2 that is added to strand s via s.post(), or via s.dispatch() when s.running_in_this_thread() == false

then the strand provides a guarantee of ordering and non-concurrency, such that f1 and f2 will not be invoked concurrently. Furthermore, if the addition of f1 happens before the addition of f2, then f1 will be invoked before f2. With:

auto wrapped_handler1 = strand.wrap(handler1);
auto wrapped_handler2 = strand.wrap(handler2);
socket.async_read_some(buffer1, wrapped_handler1); // op1
socket.async_read_some(buffer2, wrapped_handler2); // op2

As op1 was initiated before op2, buffer1 is guaranteed to contain data that was received earlier in the stream than the data contained in buffer2, but the order in which wrapped_handler1 and wrapped_handler2 will be invoked is unspecified. The strand guarantees that:

- handler1 and handler2 will not be invoked concurrently
- if wrapped_handler1 is invoked before wrapped_handler2, then handler1 will be invoked before handler2
- if wrapped_handler2 is invoked before wrapped_handler1, then handler2 will be invoked before handler1

Similar to the composed operation implementation, the strand implementation uses operation chaining to create a continuation. The strand manages all handlers posted to it in a FIFO queue. When the queue is empty and a handler is posted to the strand, the strand posts an internal handler to the io_service. Within that internal handler, a handler is dequeued from the strand's FIFO queue and executed; then, if the queue is not empty, the internal handler posts itself back to the io_service.

Consider reading this answer to find out how a composed operation uses asio_handler_invoke() to wrap intermediate handlers within the same context (i.e. the strand) as the completion handler. The implementation details can be found in the comments on this question.
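As for the echo server in the question: a common arrangement is to give each connection its own strand, stored in the per-connection session object, so there is one strand per client and no lookup from a table of strands is needed; every handler for that connection is simply wrapped with its strand. The sketch below illustrates this under stated assumptions; the session class, its member names, and the 1024-byte buffer are hypothetical and are not code from the question or the answer above:

// Sketch: one strand per connected client, owned by the session object.
// All handlers for this connection are wrapped in strand_, so they are
// serialized with respect to each other; handlers of different sessions
// (different strands) may still run concurrently on the thread pool.
#include <boost/asio.hpp>
#include <cstddef>
#include <memory>

class session : public std::enable_shared_from_this<session>
{
public:
    explicit session(boost::asio::io_service& io_service)
        : socket_(io_service), strand_(io_service) {}

    boost::asio::ip::tcp::socket& socket() { return socket_; }

    void start() { do_read(); }

private:
    void do_read()
    {
        auto self = shared_from_this();
        socket_.async_read_some(
            boost::asio::buffer(data_),
            strand_.wrap( // completion handler runs in this connection's strand
                [this, self](const boost::system::error_code& error,
                             std::size_t length)
                {
                    if (!error)
                        do_write(length);
                }));
    }

    void do_write(std::size_t length)
    {
        auto self = shared_from_this();
        boost::asio::async_write(
            socket_, boost::asio::buffer(data_, length),
            strand_.wrap(
                [this, self](const boost::system::error_code& error, std::size_t)
                {
                    if (!error)
                        do_read(); // chain the next read after the echo completes
                }));
    }

    boost::asio::ip::tcp::socket socket_;
    boost::asio::io_service::strand strand_;
    char data_[1024];
};

Under this arrangement there is at most one outstanding operation per connection, because each new read is initiated only from the previous write's completion handler, so the buffer-ordering concerns discussed above do not arise and the echoed data cannot come back interleaved.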