This article looks at a distributed lock manager for Python; the question and the recommended answer below may serve as a useful reference.

Problem Description

I have a bunch of servers with multiple instances accessing a resource that has a hard limit on requests per second.

I need a mechanism to lock access to this resource for all servers and instances that are running.

There is a RESTful distributed lock manager I found on GitHub: https://github.com/thefab/restful-distributed-lock-manager

Unfortunately there seems to be a minimum lock time of 1 second, and it is relatively unreliable. In several tests it took between 1 and 3 seconds to unlock a 1 second lock.

Is there something well tested with a Python interface that I can use for this purpose?

I need something that auto-unlocks in under 1 second. The lock will never be released in my code.

Recommended Answer

My first idea was to use Redis. But there are more great tools, and some are even lighter, so my solution builds on zmq. You do not have to run Redis for this; it is enough to run a small Python script.

Let me review your requirements before describing the solution.

  • limit the number of requests to some resource to a given number of requests within a fixed period of time

  • auto unlocking: resource (auto) unlocking shall happen in less than 1 second

  • it shall be distributed: I will assume you mean that multiple distributed servers consuming some resource shall be able to use it, and that it is fine to have just one locker service (more on this in the Conclusions)

A timeslot can be one second, multiple seconds, or a shorter time. The only limitation is the precision of time measurement in Python.

If your resource has a hard limit defined per second, you should use a timeslot of 1.0.

With the first request for access to your resource, set up the start time of the next timeslot and initialize the request counter.

With each request, increase the request counter (for the current timeslot) and allow the request unless you have reached the maximum number of allowed requests in the current timeslot.
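To illustrate this counter logic in isolation, a small local test could look like the following. It uses the Locker class from zmqlocker.py below, with no zmq involved; the parameter values are just an example:

from zmqlocker import Locker
import time

# allow at most 2 requests per 1.0 second timeslot (example values)
locker = Locker(max_requests=2, in_seconds=1.0)

for _ in range(6):
    # prints "go" for the first two requests of each timeslot, "sorry" after that
    print(time.time(), locker.next())
    time.sleep(0.3)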

Your consuming servers can be spread across multiple computers. To provide access to the LockerServer, you will use zmq.

zmqlocker.py:

import time
import zmq


class Locker():
    def __init__(self, max_requests=1, in_seconds=1.0):
        self.max_requests = max_requests
        self.in_seconds = in_seconds
        self.requests = 0
        now = time.time()
        self.next_slot = now + in_seconds

    def __iter__(self):
        return self

    def __next__(self):
        now = time.time()
        if now > self.next_slot:
            # the current timeslot is over: start a new one and reset the counter
            self.requests = 0
            self.next_slot = now + self.in_seconds
        if self.requests < self.max_requests:
            self.requests += 1
            return "go"
        else:
            return "sorry"

    next = __next__


class LockerServer():
    def __init__(self, max_requests=1, in_seconds=1.0, url="tcp://*:7777"):
        locker = Locker(max_requests, in_seconds)
        cnt = zmq.Context()
        sck = cnt.socket(zmq.REP)
        sck.bind(url)
        while True:
            # every incoming request gets a "go" or "sorry" verdict
            msg = sck.recv_string()
            sck.send_string(locker.next())


class LockerClient():
    def __init__(self, url="tcp://localhost:7777"):
        cnt = zmq.Context()
        self.sck = cnt.socket(zmq.REQ)
        self.sck.connect(url)

    def next(self):
        self.sck.send_string("let me go")
        return self.sck.recv_string()

Run your server:

run_server.py:

from zmqlocker import LockerServer

svr = LockerServer(max_requests=5, in_seconds=0.8)

From the command line:

$ python run_server.py

This will start serving the locker service on the default port 7777 on localhost.
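If the default bind address does not suit you, the url parameter of LockerServer (visible in zmqlocker.py above) can be overridden when you construct it; the port below is only an example:

from zmqlocker import LockerServer

# bind on all interfaces, port 9999 instead of the default 7777 (example value)
svr = LockerServer(max_requests=5, in_seconds=0.8, url="tcp://*:9999")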

run_client.py:

from zmqlocker import LockerClient
import time

locker_cli = LockerClient()

for i in range(100):
    print(time.time(), locker_cli.next())
    time.sleep(0.1)

From the command line:

$ python run_client.py

You shall see "go", "go", "sorry"... responses printed.

Try running more clients.

You may start the clients first and the server later on. The clients will block until the server is up, and then will happily run.
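If blocking forever is not acceptable in your setup, one option is to poll the REQ socket with a timeout and recreate it when the server does not answer. This is only a sketch of that idea; the class name, the timeout value and the "sorry" fallback are my assumptions, not part of the original code:

import zmq

class PollingLockerClient():
    """LockerClient variant that gives up after timeout_ms instead of blocking."""

    def __init__(self, url="tcp://localhost:7777", timeout_ms=1000):
        self.cnt = zmq.Context()
        self.url = url
        self.timeout_ms = timeout_ms
        self._connect()

    def _connect(self):
        self.sck = self.cnt.socket(zmq.REQ)
        self.sck.setsockopt(zmq.LINGER, 0)   # do not keep unsent messages around
        self.sck.connect(self.url)

    def next(self):
        self.sck.send_string("let me go")
        # wait up to timeout_ms for a reply
        if self.sck.poll(self.timeout_ms, zmq.POLLIN):
            return self.sck.recv_string()
        # no reply: a REQ socket cannot send again, so recreate it and deny
        self.sck.close()
        self._connect()
        return "sorry"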

Conclusions

  • It fulfills the described requirements:
    • the number of requests is limited
    • there is no need to unlock; more requests are allowed as soon as the next timeslot is available
    • the LockerService is available over the network or via local sockets (see the ipc sketch below).
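For the local-socket case, zmq's ipc transport can be used instead of tcp. The socket path and parameters below are only illustrative; the server still runs as its own process, exactly like run_server.py:

# server process (like run_server.py, but bound to a local ipc socket)
from zmqlocker import LockerServer

svr = LockerServer(max_requests=5, in_seconds=1.0, url="ipc:///tmp/locker.sock")

# client process, connecting to the same ipc socket
from zmqlocker import LockerClient

locker_cli = LockerClient(url="ipc:///tmp/locker.sock")
print(locker_cli.next())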

On the other hand, you may find that the limits of your resource are not as predictable as you assume, so be prepared to play with the parameters to find the proper balance, and always be prepared for exceptions from this side.

There is also some room for optimization in providing the "locks" - e.g. if the locker runs out of allowed requests but the current timeslot is almost over, you might consider waiting a bit with your "sorry" and, after a fraction of a second, providing a "go"; see the sketch below.
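A minimal sketch of that idea, as a subclass of the Locker defined above; the max_wait parameter and its default value are my assumptions:

import time

from zmqlocker import Locker


class WaitingLocker(Locker):
    """Locker variant that waits out an almost-finished timeslot before refusing."""

    def __init__(self, max_requests=1, in_seconds=1.0, max_wait=0.2):
        Locker.__init__(self, max_requests, in_seconds)
        self.max_wait = max_wait  # how long we are willing to delay a "go"

    def __next__(self):
        remaining = self.next_slot - time.time()
        if self.requests >= self.max_requests and 0 < remaining <= self.max_wait:
            # the slot is exhausted but nearly over: wait it out instead of refusing
            time.sleep(remaining + 0.001)  # small cushion so the new slot has started
        return Locker.__next__(self)

    next = __next__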

By "distributed" we might also understand multiple locker servers running together. This is more difficult to do, but it is also possible. zmq makes it very easy to connect to multiple urls, so clients could easily connect to multiple locker servers. There remains the question of how to coordinate the locker servers so that together they do not allow too many requests to your resource. zmq allows inter-server communication. One model could be that each locker server publishes each provided "go" on PUB/SUB. All the other locker servers would be subscribed and would use each such "go" to increase their local request counter (with slightly modified logic).
