# ask-the-community
I get a lot of 502 Bad Gateway errors when browsing the Flyte web UI. I'm not sure what causes it; maybe an authentication timeout? The URL it sticks on looks like: callback?code=hhXLeSRGfUVxXFKV0bxpFLMHAqFoNUbE8V-K3u5pn1Q&state=5ddf9e086dd7b8a1a34c06885d624561a686e148e89465152c884a921fd821ec I'm using flyte-binary; how would I debug this issue?
what version of flyte?
UI Version 1.10.2 Package Version 0.0.51
Looks like an auth timeout, yes. There's similar behavior recently reported that should be fixed here. Any chance you can upgrade and test?
I'd be happy to, but I'm not certain how
I think I'm on the latest flyte-binary
I was playing around, and the following has yielded some benefit:
helm repo update
helm upgrade flyte-binary flyteorg/flyte-binary --values local-values.yaml -n flyte --version 1.12.0
However, the little "circle i" icon I previously used to get the version number from the web UI is gone, so I can't tell you precisely which version of the UI I'm running.
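If the version indicator is gone from the UI, Helm itself can report what got deployed. Something like this should work (assuming the release name `flyte-binary` and namespace `flyte` from the upgrade command above, and cluster access from wherever you run helm):

```shell
# List the release to see the deployed chart version and app version
helm list -n flyte --filter flyte-binary

# Helm 3.11+ can also dump the release's chart metadata directly
helm get metadata flyte-binary -n flyte
```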
Well, I'm seeing a similar error: it redirects to the 502 when I click 'back to sign in'. I'll try it in a different browser and some other things.
I get the following log entries:
{"json":{"src":"cookie.go:77"},"level":"info","msg":"Could not detect existing cookie [flyte_idt]. Error: http: named cookie not present","ts":"2024-05-23T02:49:59Z"}
{"json":{"src":"handlers.go:86"},"level":"error","msg":"Failed to retrieve tokens from request, redirecting to login handler. Error: [EMPTY_OAUTH_TOKEN] Failure to retrieve cookie [flyte_idt], caused by: http: named cookie not present","ts":"2024-05-23T02:49:59Z"}
{"json":{"src":"handlers.go:142"},"level":"debug","msg":"Setting CSRF state cookie to 40ey0x8ind and state to f03211768b5a96d659293448156e57a48e165d94b269182846ad1240e84c27a3\n","ts":"2024-05-23T02:49:59Z"}
{"json":{"src":"handler_utils.go:169"},"level":"debug","msg":"validating whether redirect url: https://flyte.local.******/console/select-project is authorized","ts":"2024-05-23T02:49:59Z"}
{"json":{"src":"handler_utils.go:172"},"level":"debug","msg":"authorizing redirect url: https://flyte.local.*********/console/select-project against authorized uri: https://flyte.local.***********/","ts":"2024-05-23T02:49:59Z"}
{"json":{"src":"composite_workqueue.go:88"},"level":"debug","msg":"Subqueue handler batch round","ts":"2024-05-23T02:50:00Z"}
{"json":{"src":"composite_workqueue.go:98"},"level":"debug","msg":"Dynamically configured batch size [-1]","ts":"2024-05-23T02:50:00Z"}
{"json":{"src":"composite_workqueue.go:129"},"level":"debug","msg":"Exiting SubQueue handler batch round","ts":"2024-05-23T02:50:00Z"}
{"json":{"src":"handlers.go:182"},"level":"debug","msg":"Running callback handler... for RequestURI /callback?code=t7SQUxEYPIMcf_ow7-6uLWnRREWjZfN3veZcBq0_WsQ&state=f03211768b5a96d659293448156e57a48e165d94b269182846ad1240e84c27a3","ts":"2024-05-23T02:50:00Z"}
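For what it's worth, the CSRF `state` set in the login log line does match the `state` echoed back on /callback, so the OIDC round trip itself looks healthy. A throwaway sanity check, with the two values pasted by hand from the log lines above:

```shell
# State value from the "Setting CSRF state cookie" debug line
logged_state="f03211768b5a96d659293448156e57a48e165d94b269182846ad1240e84c27a3"

# RequestURI from the "Running callback handler" debug line
callback_uri="/callback?code=t7SQUxEYPIMcf_ow7-6uLWnRREWjZfN3veZcBq0_WsQ&state=f03211768b5a96d659293448156e57a48e165d94b269182846ad1240e84c27a3"

# Strip everything up to and including "state=" to isolate the parameter
state="${callback_uri##*state=}"

if [ "$state" = "$logged_state" ]; then
  echo "state matches"
else
  echo "state MISMATCH"
fi
```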
and now I cannot login at all, just 502's
I was reading old posts and saw someone had a similar issue. I think the error is in the nginx ingress controller: 2024/05/23 13:55:33 [error] 40#40: *357590300 upstream sent too big header while reading response header from upstream
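If anyone else wants to confirm the same thing, tailing the ingress controller logs is what surfaced that line for me. Roughly (the controller namespace and deployment name depend on your install; `ingress-nginx`/`ingress-nginx-controller` are just common defaults):

```shell
# Grep recent NGINX ingress controller logs for the proxy buffer error
kubectl logs -n ingress-nginx deployment/ingress-nginx-controller --tail=500 \
  | grep "upstream sent too big header"
```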
Ok, I think I've got it. In the local-values.yml file, I added some additional nginx directives to increase the buffer size:
 create: true
  kubernetes.io/ingress.class: nginx
  nginx.ingress.kubernetes.io/app-root: /console
  nginx.ingress.kubernetes.io/proxy-buffer-size: 256k
  nginx.ingress.kubernetes.io/proxy-buffers: 4 256k
  nginx.ingress.kubernetes.io/backend-protocol: GRPC
The new lines are the annotations:
  nginx.ingress.kubernetes.io/proxy-buffer-size: 256k
  nginx.ingress.kubernetes.io/proxy-buffers: 4 256k
I have used these before with nginx to solve this issue. I used the nginx ingress documentation here to get the formats: https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations
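After the upgrade you can double-check that the annotations actually landed on the Ingress object. Something like this (assuming the `flyte` namespace; the exact ingress name may differ in your install):

```shell
# Dump the Flyte ingress and confirm the buffer annotations are present
kubectl get ingress -n flyte -o yaml | grep -E "proxy-buffer-size|proxy-buffers"
```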
hey @Garret Cook so in summary you: • upgraded your Helm release to 1.12.0 and • added the NGINX buffer size annotations, and then you no longer had the 502s?
I didn't know it would be that simple of a fix ahead of time, so there was a little meandering during the solution process, but that is what it boiled down to 🙂
I do have 1 task that isn't working due to an authentication issue, but I'm just starting to look at that. New UI is nice
It's weird in any case, because not even the latest chart includes the flyteconsole bug fix I shared yet. But there's got to be something good in there 🙂 I've also seen that buffer size error in situations where there's some mismatch on SSL certs, but that's not the case here.
I think I was getting the 502s for the same proxy reason; I probably could have fixed the proxy buffers and stayed on the old version.
No regrets though, I'm glad to be on a newer version