This article describes how to run long-running jobs in the background of a Python web service. It may serve as a useful reference for anyone facing the same problem.

Problem Description


I have a web-service that runs long-running jobs (in the order of several hours). I am developing this using Flask, Gunicorn, and nginx.

What I am thinking of doing is to have the route that takes a long time to complete call a function that creates a thread. That function will then return a GUID back to the route, and the route will return a URL (containing the GUID) that the user can use to check progress. I am making the thread a daemon (thread.daemon = True) so that the thread exits if my calling code exits (unexpectedly).

Is this the correct approach to use? It works, but that doesn't mean that it is correct.

my_thread = threading.Thread(target=self._run_audit, args=())
my_thread.daemon = True
my_thread.start()
Solution

The more conventional approach to handling this kind of problem is to extract the action from the base application and run it externally, using a task-queue system such as Celery.

Using this tutorial you can create your task and trigger it from your web application.

from flask import Flask
from celery import Celery


def make_celery(app):
    # Helper from the Flask docs pattern: build a Celery instance
    # configured from the Flask app's settings.
    celery = Celery(app.import_name,
                    broker=app.config['CELERY_BROKER_URL'],
                    backend=app.config['CELERY_RESULT_BACKEND'])
    celery.conf.update(app.config)
    return celery


app = Flask(__name__)
app.config.update(
    CELERY_BROKER_URL='redis://localhost:6379',
    CELERY_RESULT_BACKEND='redis://localhost:6379'
)
celery = make_celery(app)


@celery.task()
def add_together(a, b):
    return a + b

Then you can run:

>>> result = add_together.delay(23, 42)
>>> result.wait()
65

Just remember that you need to run the worker separately:

celery -A your_application worker
