This post covers what to do when some Kubernetes pods die after running for about a day.

Problem Description

We are trying a test setup with Kubernetes version 1.0.6 on AWS.

This setup involves pods for Cassandra (2 nodes), Spark (master, 2 workers, driver), and RabbitMQ (1 node). Some of the pods in this setup die after a day or so.

Is there a way to get logs from Kubernetes on how/why they died?
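
For context, these are the standard kubectl checks for a dead pod; a minimal sketch, assuming the usual kubectl tooling is available (the pod name spark-worker-xxxx is a placeholder for whatever kubectl get pods reports, and flag names may differ slightly on 1.0.x):

    # List pods with their status and restart counts
    kubectl get pods

    # The pod's recorded events usually say why it was killed or failed to start
    kubectl describe pod spark-worker-xxxx

    # Logs from the previous, crashed container instance (if it was restarted)
    kubectl logs spark-worker-xxxx --previous

    # Cluster-wide events can also reveal node or scheduling problems
    kubectl get events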

When you try to restart the dead pods manually, some pods report a status like 'category/spark-worker is ready, container is creating', and the pod start never completes.

The only option in that scenario is to run kube-down.sh and then kube-up.sh and go through the entire setup from scratch.
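
For reference, a sketch of that teardown/rebuild cycle, assuming the cluster was created from a Kubernetes release checkout using the bundled AWS provider scripts (paths and environment may differ in your setup):

    # Run from the root of the Kubernetes release checkout
    export KUBERNETES_PROVIDER=aws

    # Tear the whole cluster down
    cluster/kube-down.sh

    # Bring a fresh cluster up, then redeploy Cassandra, Spark and RabbitMQ
    cluster/kube-up.sh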

Recommended Answer

This happens because of a known issue in Kubernetes.

A fix is included in the recently released Kubernetes v1.0.7, but as described in the above-mentioned issue there's still some work to do in this area.
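
After upgrading, kubectl can confirm which version the client and the cluster are actually running:

    # Prints the client and server (apiserver) versions
    kubectl version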
