# flyte-on-gcp
m
I am working on deploying Flyte on GCP using a managed HTTPS load balancer and HTTP/2 to communicate with the flyteadmin gRPC backend. I got most things working, but ran into some issues getting gRPC health checks / requests to work. I came across this thread https://discuss.flyte.org/t/835/u01ubdc4e1l-yes-i-am-going-to-send-an-email-soon-to-gcp-ther and was wondering if anyone here has a successful approach to the issue.
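For context, this is roughly how I point the load balancer at the gRPC port — just a sketch, the port name and numbers are illustrative rather than the exact chart values:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: flyteadmin
  namespace: flyte
  annotations:
    # Ask the GCLB to speak HTTP/2 to the backend serving the "grpc" port
    cloud.google.com/app-protocols: '{"grpc": "HTTP2"}'
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: flyteadmin
  ports:
    - name: grpc        # name referenced by the app-protocols annotation
      port: 81
      targetPort: 8089  # flyteadmin gRPC port (illustrative)
```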
f
I’m currently working on integrating Flyte with GCP Identity-Aware Proxy, which also requires a managed HTTPS load balancer. The integration is not merged yet, but you can follow these instructions to get the load balancer to work. Only the IAP part is not working with the latest flytekit.
In addition to dealing with the health checks, HTTP/2 will require TLS between the load balancer and the backend, but the readme proposes a solution for this (not the only solution, but in my opinion a good one).
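On GKE the mechanism for the health-check side is a BackendConfig attached to the service; getting flyteadmin to actually answer that check over HTTP/2 is the tricky part this thread is about. A minimal sketch of the wiring, not necessarily what the readme does (names, path and port are illustrative):

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: flyteadmin-grpc
  namespace: flyte
spec:
  healthCheck:
    type: HTTP2
    requestPath: /healthcheck   # must return 200 over HTTP/2 on the backend
    port: 8089
---
# Reference the BackendConfig from the service the ingress points at
apiVersion: v1
kind: Service
metadata:
  name: flyteadmin
  namespace: flyte
  annotations:
    cloud.google.com/backend-config: '{"ports": {"grpc": "flyteadmin-grpc"}}'
```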
That being said: deploying with nginx ingress is easier
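For comparison, with the nginx ingress the gRPC part is basically one annotation; a minimal sketch (host, path and service names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flyteadmin-grpc
  namespace: flyte
  annotations:
    # nginx terminates TLS and forwards gRPC to the plaintext backend
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: nginx
  rules:
    - host: flyte.example.com
      http:
        paths:
          - path: /flyteidl.service.AdminService
            pathType: Prefix
            backend:
              service:
                name: flyteadmin
                port:
                  number: 81
```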
m
Thank you for the helpful response, Fabio! It is exciting that you are working on the IAP integration. I am using an NEG backend for the flyteadmin gRPC service, and the Google documentation indicated that TLS is used between the load balancer and NEGs, so I thought it should work. However, I haven't found any documentation supporting the use of NEG targets directly from a load balancer for a gRPC service, so maybe that is not supported. The Google docs for ingress to an NEG-backed gRPC service use Traffic Director for the service mesh, and I see your solution uses Istio. It seems like TLS termination for a gRPC request has to be handled by a proxy / service mesh, but I don't quite understand why. Perhaps HTTP/2 between an external managed load balancer and an NEG backend service is not supported, hence the need for a proxy / service mesh to handle the TLS termination.
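For completeness, by "NEG backend" I just mean container-native load balancing enabled on the service, roughly like this (illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: flyteadmin
  namespace: flyte
  annotations:
    # Container-native load balancing: the GCLB targets pod IPs via an NEG
    cloud.google.com/neg: '{"ingress": true}'
```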
f
> It seems like TLS termination for a gRPC request has to be handled by a proxy / service mesh, but I don’t quite understand why.
I first tried putting flyteadmin directly behind the ingress and making it use a self-signed TLS certificate. Flyteadmin has the option to serve TLS, and the ingress was also healthy. However, some of the other Flyte backends refused to talk to it because of the self-signed cert and ignored the insecureSkipVerify flag in their config, which is supposed to tell them to accept the self-signed cert. Also, if I let flyteadmin use the cert itself, I would have had to find a way to restart it when the cert is renewed. There are some open source tools that monitor secrets and restart pods when the secrets update, so this would have worked.

However, because of the first issue, I found it easier to terminate TLS at a reverse proxy / service mesh and then let Flyte itself run without any encryption. Since we have been running Flyte with Istio for a while and I personally think Istio is amazing, I went for it. One could do the same thing with an nginx pod, though; just the restarting of the pod would have to be solved again …
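Roughly what that looks like with Istio — a sketch, not the exact config from the integration; hosts, secret name and port are illustrative. The ingress gateway presents the self-signed cert towards the GCP load balancer and forwards plain gRPC to flyteadmin:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: flyte-gateway
  namespace: flyte
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: flyte-selfsigned-cert   # secret holding the self-signed cert
      hosts:
        - flyte.example.com
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: flyteadmin-grpc
  namespace: flyte
spec:
  hosts:
    - flyte.example.com
  gateways:
    - flyte-gateway
  http:
    - match:
        - uri:
            prefix: /flyteidl.service
      route:
        - destination:
            host: flyteadmin.flyte.svc.cluster.local
            port:
              number: 81   # plaintext gRPC port of flyteadmin (illustrative)
```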