This article looks at slow oauth2-proxy authentication calls on a Kubernetes cluster with auth annotations for the nginx ingress, and the workaround that resolved them.

Problem description

We have secured some of our services on the K8S cluster using the approach described on this page. Concretely, we have:

  nginx.ingress.kubernetes.io/auth-url: "https://oauth2.${var.hosted_zone}/oauth2/auth"
  nginx.ingress.kubernetes.io/auth-signin: "https://oauth2.${var.hosted_zone}/oauth2/start?rd=/redirect/$http_host$escaped_request_uri"

set on the service to be secured, and we have followed this tutorial so that there is only one oauth2_proxy deployment per cluster. We have 2 proxies set up, both with affinity to be placed on the same node as the nginx ingress.

$ kubectl get pods -o wide -A | egrep "nginx|oauth"                                                                    
infra-system   wer-exp-nginx-ingress-exp-controller-696f5fbd8c-bm5ld        1/1     Running   0          3h24m   10.76.11.65    ip-10-76-9-52.eu-central-1.compute.internal     <none>           <none>
infra-system   wer-exp-nginx-ingress-exp-controller-696f5fbd8c-ldwb8        1/1     Running   0          3h24m   10.76.14.42    ip-10-76-15-164.eu-central-1.compute.internal   <none>           <none>
infra-system   wer-exp-nginx-ingress-exp-default-backend-7d69cc6868-wttss   1/1     Running   0          3h24m   10.76.15.52    ip-10-76-15-164.eu-central-1.compute.internal   <none>           <none>
infra-system   wer-exp-nginx-ingress-exp-default-backend-7d69cc6868-z998v   1/1     Running   0          3h24m   10.76.11.213   ip-10-76-9-52.eu-central-1.compute.internal     <none>           <none>
infra-system   oauth2-proxy-68bf786866-vcdns                                 2/2     Running   0          14s     10.76.10.106   ip-10-76-9-52.eu-central-1.compute.internal     <none>           <none>
infra-system   oauth2-proxy-68bf786866-wx62c                                 2/2     Running   0          14s     10.76.12.107   ip-10-76-15-164.eu-central-1.compute.internal   <none>           <none>
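The affinity described above could be expressed roughly as follows (a sketch; the label selector `app: nginx-ingress` is an assumption and must match the labels on your ingress controller pods):

```yaml
# Hypothetical podAffinity for the oauth2-proxy Deployment's pod template:
# prefer scheduling onto a node that already runs an nginx ingress pod.
affinity:
  podAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: nginx-ingress   # assumed label; check your ingress pods
        topologyKey: kubernetes.io/hostname
```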

However, a simple website load usually takes around 10 seconds, compared to 2-3 seconds with the proxy annotations not being present on the secured service.

We added a proxy_cache to the auth.domain.com service, which hosts our proxy, by adding the following server snippet:

        "nginx.ingress.kubernetes.io/server-snippet": <<EOF
          proxy_cache auth_cache;
          proxy_cache_lock on;
          proxy_ignore_headers Cache-Control;
          proxy_cache_valid any 30m;
          add_header X-Cache-Status $upstream_cache_status;
        EOF
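Note that `proxy_cache auth_cache;` only takes effect if a cache zone named `auth_cache` has been declared in nginx's `http` context; with the nginx ingress controller this is done through its ConfigMap. A sketch (the ConfigMap name, cache path, and sizes are assumptions):

```yaml
# Sketch: declare the auth_cache zone in the http context via the
# ingress controller's ConfigMap so the server-snippet above can use it.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # assumed; use your controller's ConfigMap name
  namespace: infra-system
data:
  http-snippet: |
    proxy_cache_path /tmp/auth_cache levels=1:2 keys_zone=auth_cache:10m max_size=100m;
```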

but this didn't improve the latency either. We still see all HTTP requests triggering a log line in our proxy. Oddly, only some of the requests take 5 seconds.

We are unsure whether:

- the proxy forwards each request to the OAuth provider (GitHub), or
- it caches the authentications

We use cookie authentication, so in theory the oauth2_proxy should just decrypt the cookie and return a 200 to the nginx ingress. Since both run on the same node, this should be fast. But it's not. Any ideas?

I have analyzed the situation further. Visiting my auth server at https://oauth2.domain.com/auth in the browser and copying the request ("Copy as cURL"), I found that:

  1. running 10,000 queries against my oauth server from my local machine (via curl) is very fast
  2. running 100 requests from the nginx ingress with the same curl is slow
  3. replacing the host name in the curl with the cluster IP of the auth service makes the performance increase drastically
  4. setting the annotation to nginx.ingress.kubernetes.io/auth-url: http://172.20.95.17/oauth2/auth (i.e. setting the host to the cluster IP) makes the GUI load as expected (fast)
  5. it doesn't matter whether the curl is run on the nginx ingress pod or on any other pod (e.g. a test Debian container); the result is the same
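The measurements above can be reproduced with a small probe (a sketch; the commented-out URLs are the hypothetical endpoints from this question, and the auth cookie header is omitted for brevity):

```shell
# Print the total time of five consecutive requests to a URL.
# -s silences progress output, -o /dev/null discards the response body.
probe() {
  for _ in 1 2 3 4 5; do
    curl -s -o /dev/null -w '%{time_total}\n' "$1"
  done
}

# Compare the external name against the in-cluster Service DNS name
# (run inside a pod; both hosts are placeholders from this question):
# probe "https://oauth2.domain.com/oauth2/auth"
# probe "http://oauth2.infra-system.svc.cluster.local/oauth2/auth"
```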

Edit 2

A better fix I found was to set the annotations to the following:

  nginx.ingress.kubernetes.io/auth-url: "http://oauth2.infra-system.svc.cluster.local/oauth2/auth"
  nginx.ingress.kubernetes.io/auth-signin: "https://oauth2.domain.com/oauth2/start?rd=/redirect/$http_host$escaped_request_uri"

The auth-url is what the ingress queries with the user's cookie. The in-cluster DNS name of the oauth2 service resolves to the same service as the external DNS name, but without the SSL communication, and since it's a DNS name it's permanent (while the cluster IP is not).

Accepted answer

Since it's unlikely that someone will come up with the reason why this happens, I'll post my workaround as the answer.

A fix I found was to set the annotations to the following:

  nginx.ingress.kubernetes.io/auth-url: "http://oauth2.infra-system.svc.cluster.local/oauth2/auth"
  nginx.ingress.kubernetes.io/auth-signin: "https://oauth2.domain.com/oauth2/start?rd=/redirect/$http_host$escaped_request_uri"

The auth-url is what the ingress queries with the user's cookie. The in-cluster DNS name of the oauth2 service resolves to the same service as the external DNS name, but without the SSL communication, and since it's a DNS name it's permanent (while the cluster IP is not).
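In context, the annotations from this fix sit on each secured service's Ingress, roughly like this (a sketch; the Ingress name, host, and backend are placeholders):

```yaml
# Sketch: Ingress for a secured service using the in-cluster auth-url.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-secured-service          # placeholder
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "http://oauth2.infra-system.svc.cluster.local/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://oauth2.domain.com/oauth2/start?rd=/redirect/$http_host$escaped_request_uri"
spec:
  ingressClassName: nginx
  rules:
  - host: app.domain.com            # placeholder
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-secured-service
            port:
              number: 80
```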
