
Problem Description

Sorry for the newbie question; I am new to the k8s world. The current way of deploying is to deploy the app on EC2. The new way I am trying is to deploy the containerized app to the VPC.

In the old way, AWS would route the traffic for aaa.bbb.com to vpc-ip:443 (an ELB), which would further route it to an ASG on a private subnet at :443, and the app would work fine.

With k8s in the picture, what does the traffic flow look like?

I'm trying to figure out whether I could use multiple ports on the ELB, each with its own DNS name, and route traffic to a certain port on the worker nodes.

xxx.yyy.com -> vpc-ip:443/ -> ec2:443/
aaa.bbb.com -> vpc-ip:9000/ -> ec2:9000/

Is this doable with k8s on the same VPC? Any guidance and links to documentation would be of great help.

Recommended Answer

In general, you would have an AWS load-balancer instance with multiple K8s workers as backend servers on a specific port. Once traffic enters the worker nodes, networking inside K8s takes over.

Suppose you have set up two K8s Services of type LoadBalancer, using node ports 38473 and 38474 for your two domains, respectively:

xxx.yyy.com -> AWS LoadBalancer1 -> Node1:38473 -> K8s service1 -> K8s Pod1
                                 -> Node2:38473 -> K8s service1 -> K8s Pod2
aaa.bbb.com -> AWS LoadBalancer2 -> Node1:38474 -> K8s service2 -> K8s Pod3
                                 -> Node2:38474 -> K8s service2 -> K8s Pod4
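
A Service of type LoadBalancer like the ones above might be declared as follows. This is a minimal sketch: the Service name, selector label, and port numbers other than the node port 38473 from the diagram are illustrative assumptions, not taken from the original post.

```yaml
# Hypothetical manifest for "service1" behind AWS LoadBalancer1.
# Kubernetes asks the cloud provider to provision an actual AWS
# load balancer and routes it to nodePort 38473 on every worker.
apiVersion: v1
kind: Service
metadata:
  name: service1          # assumed name
spec:
  type: LoadBalancer
  selector:
    app: app1             # assumed pod label
  ports:
    - port: 443           # port exposed by the AWS load balancer
      targetPort: 8443    # assumed container port in the pods
      nodePort: 38473     # node port from the diagram above
```

Creating a second such Service (with nodePort 38474) for aaa.bbb.com would provision the second AWS load balancer, which is exactly the cost issue discussed next.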

The simple solution above requires you to create a separate LoadBalancer Service per domain, which increases your cost because each one is an actual AWS load-balancer instance. To reduce cost, you can run an ingress-controller instance in your cluster and write Ingress config. This requires only one actual AWS load balancer for all your traffic:

xxx.yyy.com -> AWS LoadBalancer1 -> Node1:38473 -> Ingress-service -> K8s service1 -> K8s Pod1
                                 -> Node2:38473 -> Ingress-service -> K8s service1 -> K8s Pod2
aaa.bbb.com -> AWS LoadBalancer1 -> Node1:38473 -> Ingress-service -> K8s service2 -> K8s Pod3
                                 -> Node2:38473 -> Ingress-service -> K8s service2 -> K8s Pod4
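
The host-based fan-out above can be expressed in a single Ingress resource that the ingress controller serves behind one load balancer. Again a sketch: the backing Service names and ports are assumptions carried over from the earlier diagrams.

```yaml
# Hypothetical Ingress routing both domains through one controller.
# service1/service2 and their ports are illustrative assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: two-domains
spec:
  rules:
    - host: xxx.yyy.com         # routed to service1
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service1
                port:
                  number: 443
    - host: aaa.bbb.com         # routed to service2
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service2
                port:
                  number: 9000
```

With this in place, only the ingress controller's own Service needs to be of type LoadBalancer; service1 and service2 can be plain ClusterIP Services.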

For more information, you can refer here:

