
Problem description

How do I deploy, in Kubernetes, a message consumer for Kafka, AMQP, or any other message broker that scales up and down? My assumption is that the consumer runs a loop which pulls messages.

I would like Kubernetes to create more pods when many messages arrive in the broker queue, and remove some pods when too few messages arrive in the queue.

Which component takes the initiative to terminate pods? The pod itself, because it can't fetch a message from the queue? Or Kubernetes, because the pod doesn't consume CPU?

If any pod exits when the queue is empty, I'm afraid pods will keep being created and dying for as long as the queue is empty.

Answer

The Kubernetes Horizontal Pod Autoscaler has support for custom and external metrics. With more traditional messaging brokers such as AMQP (one queue / many competing consumers), you should be able to scale the consumers easily based on queue depth (for example, scale up if queue depth is >= 10000 messages; scale down if queue depth is <= 1000 messages). You could also scale based on average client throughput (for example, scale up if average throughput is >= 5000 msg/s) or on average latency. The Horizontal Pod Autoscaler does the scaling up and down for you: it observes the metrics and decides when a pod should be started or shut down. The consumer application is not aware of this and does not need any special support for it. But you will need to collect these metrics and expose them so that Kubernetes can consume them, which is currently not completely trivial.
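As a hedged sketch of the queue-depth approach, assuming the broker's queue depth has already been exposed to Kubernetes as an external metric (the metric name `rabbitmq_queue_messages`, the queue label, the Deployment name, and the target value below are all illustrative assumptions, and a metrics adapter such as prometheus-adapter must be in place), an HPA manifest might look like:

```yaml
# Hypothetical HPA scaling a consumer Deployment on queue depth.
# Assumes a metrics adapter already exposes an external metric
# named "rabbitmq_queue_messages" for the queue in question.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: consumer-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: message-consumer
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: rabbitmq_queue_messages
        selector:
          matchLabels:
            queue: work-queue
      target:
        type: AverageValue
        averageValue: "1000"   # aim for roughly 1000 messages per pod
```

With `AverageValue`, the HPA divides the total queue depth by the current replica count and scales so that each pod handles roughly the target amount.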

With Kafka this will be a bit harder, because Kafka implements competing consumers very differently from more traditional messaging brokers such as AMQP. Kafka topics are split into partitions, and each partition can have only one consumer from a given consumer group. So whatever autoscaling you do, it will not be able to handle situations such as:

  • A low number of partitions for a given topic (you can never have more active consumers than partitions)
  • Asymmetric partition load (some partitions are busy while others sit empty)
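To illustrate the first limitation, here is a minimal Python sketch of round-robin partition assignment within one consumer group (the real Kafka group coordination protocol is far more involved; this only models the partition-count cap):

```python
def assign_partitions(partitions, consumers):
    """Round-robin assignment of partitions to consumers in one group.

    Since each partition goes to at most one consumer in the group,
    any consumers beyond the partition count receive nothing and idle.
    """
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

# 3 partitions, 5 consumers: scaling up past 3 pods adds only idle pods.
result = assign_partitions([0, 1, 2], ["c1", "c2", "c3", "c4", "c5"])
print(result)  # c4 and c5 end up with empty partition lists
```

This is why autoscaling Kafka consumers past the partition count buys nothing: the extra pods are assigned no partitions.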

Kafka also doesn't have anything like queue depth. But you can, for example, use the consumer lag (which shows how far the consumer is behind the producer for a given partition) to drive the scaling.
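Consumer lag per partition is the log-end offset minus the consumer group's committed offset. A hedged sketch of how a scaler could turn total lag into a decision (the threshold values are illustrative assumptions mirroring the queue-depth numbers above, not part of the original answer):

```python
def total_consumer_lag(end_offsets, committed_offsets):
    """Sum of (log-end offset - committed offset) across partitions."""
    return sum(
        end_offsets[p] - committed_offsets.get(p, 0)
        for p in end_offsets
    )

def scale_decision(lag, scale_up_at=10000, scale_down_at=1000):
    """Illustrative thresholds, analogous to the AMQP queue-depth rule."""
    if lag >= scale_up_at:
        return "up"
    if lag <= scale_down_at:
        return "down"
    return "hold"

end = {0: 5000, 1: 12000, 2: 800}       # log-end offsets per partition
committed = {0: 4000, 1: 2000, 2: 800}  # group's committed offsets
lag = total_consumer_lag(end, committed)
print(lag, scale_decision(lag))  # 11000 up
```

In practice the offsets would come from the broker (e.g. via an admin client or an exporter), and the decision would be made by the HPA against an exposed metric rather than by the consumer itself.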

