I have some code that performs the same operation on a number of files in a Python 3 application, so it seemed like a good fit for multiprocessing. I'm trying to use Pool to distribute the work across some processes. I want the code to keep doing other things (mainly displaying things for the user) while these calculations are running, so I'd like to use the map_async function of the multiprocessing.Pool class for this. I expected that after calling it, the code would continue executing and the results would be handled by the callback I specified, but this doesn't seem to happen. The code below shows the three ways I've tried calling map_async and the results I see:

import multiprocessing
import time
NUM_PROCS = 4
def func(arg_list):
    arg1 = arg_list[0]
    arg2 = arg_list[1]
    print('start func')
    print('arg1 = {0}'.format(arg1))
    print('arg2 = {0}'.format(arg2))
    time.sleep(1)
    result1 = arg1 * arg2
    print('end func')
    return result1

def callback(result):
    print('result is {0}'.format(result))


def error_handler(error1):
    print('error in call\n {0}'.format(error1))


def async1(arg_list1):
    # This is how my understanding of map_async suggests I should
    # call it. When I execute this, the target function func() is not called
    with multiprocessing.Pool(NUM_PROCS) as p1:
        r1 = p1.map_async(func,
                          arg_list1,
                          callback=callback,
                          error_callback=error_handler)


def async2(arg_list1):
    with multiprocessing.Pool(NUM_PROCS) as p1:
        # If I call the wait function on the result for a small
        # amount of time, then the target function func() is called
        # and executes successfully in 2 processes, but the callback
        # function is never called so the results are not processed
        r1 = p1.map_async(func,
                          arg_list1,
                          callback=callback,
                          error_callback=error_handler)
        r1.wait(0.1)


def async3(arg_list1):
    # if I explicitly call join on the pool, then the target function func()
    # successfully executes in 2 processes and the callback function is also
    # called, but by calling join the processing is not asynchronous any more
    # as join blocks the main process until the other processes are finished.
    with multiprocessing.Pool(NUM_PROCS) as p1:
        r1 = p1.map_async(func,
                          arg_list1,
                          callback=callback,
                          error_callback=error_handler)
        p1.close()
        p1.join()


def main():
    arg_list1 = [(5, 3), (7, 4), (-8, 10), (4, 12)]
    async3(arg_list1)  # swap in async1 or async2 to compare behaviour

    print('pool executed successfully')


if __name__ == '__main__':
    main()


The results when main calls async1, async2, or async3 are described in the comments in each function. Can anyone explain why the different calls behave the way they do? Ultimately I'd like to call map_async as in async1, so that I can do things in the main process while the worker processes are busy. I have tested this code with Python 2.7 and 3.6, on an older RH6 Linux machine and a newer Ubuntu VM, with the same results.

Best Answer

The reason this happens is that when you use multiprocessing.Pool as a context manager, pool.terminate() is called when you leave the with block, which immediately exits all the workers without waiting for in-progress tasks to finish.


  New in version 3.3: Pool objects now support the context management protocol – see Context Manager Types. __enter__() returns the pool object, and __exit__() calls terminate().


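The quoted behaviour is easy to demonstrate directly: if you leave the with block before the result has arrived, the workers are killed and the AsyncResult never completes. A minimal sketch (slow_square is just an illustrative function):

```python
import multiprocessing
import time

def slow_square(x):
    # Slow enough that terminate() fires before any task completes.
    time.sleep(1)
    return x * x

if __name__ == '__main__':
    with multiprocessing.Pool(2) as pool:
        r = pool.map_async(slow_square, [1, 2, 3, 4])
    # __exit__ has called pool.terminate(): the workers were killed
    # before finishing, so the result never arrives.
    r.wait(timeout=2)                    # times out instead of returning
    print('result ready?', r.ready())    # False: the tasks were terminated
```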
IMO, using terminate() as the context manager's __exit__ method was not a great design choice, since most people intuitively expect close() to be called, which would wait for in-progress tasks to finish before exiting. Unfortunately, all you can do is refactor your code away from using a context manager, or refactor it so that you guarantee you don't leave the with block until the Pool is done with its work.
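One way to keep the call asynchronous while staying inside the with block is to poll AsyncResult.ready() between chunks of other main-process work; ready() is non-blocking, unlike wait() or get(). A sketch along those lines (the polling loop and its interval are just one possible approach, not the only fix):

```python
import multiprocessing
import time

NUM_PROCS = 4

def func(args):
    arg1, arg2 = args
    time.sleep(1)
    return arg1 * arg2

def callback(result):
    # Runs in a helper thread of the main process once all results are in.
    print('result is {0}'.format(result))

def error_handler(error1):
    print('error in call\n {0}'.format(error1))

def async_and_work(arg_list1):
    with multiprocessing.Pool(NUM_PROCS) as p1:
        r1 = p1.map_async(func, arg_list1,
                          callback=callback,
                          error_callback=error_handler)
        # Do other things in the main process while the workers run,
        # checking ready() instead of blocking on wait() or get().
        while not r1.ready():
            print('main process doing other work...')
            time.sleep(0.2)
    # Leaving the with block now is safe: terminate() has nothing to kill,
    # and the callback has already fired.

if __name__ == '__main__':
    async_and_work([(5, 3), (7, 4), (-8, 10), (4, 12)])
```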

For python - unexpected behaviour of multiprocessing Pool map_async, see the similar question on Stack Overflow: https://stackoverflow.com/questions/48870608/
