# ask-ai
m
@ancient-wolf-19325 where can i put the ssl cert in the flyte binary helmchart for the client. my db now only accepts SSL connections. this is on GCP
a
It seems likely that the Flyte binary Helm chart does not provide a direct field in `values.yaml` for mounting a client SSL certificate for database connections. Instead, you may need to create a Kubernetes Secret containing your SSL cert and key, then mount it into the Flyte pod and reference the cert path in the database config. The `options` field under `configuration.database` in the Helm chart (e.g., `options: sslmode=verify-full sslrootcert=/path/to/ca.crt`) is where you would specify SSL options, but mounting the cert itself must be handled via Kubernetes manifests or by customizing the Helm chart templates. This approach is not explicitly documented in the data sources, but is referenced in discussions about mounting secrets for other sensitive values and in the structure of the chart itself (charts/flyte-binary/values.yaml, c9b3aa88). If you need to mount a custom CA or client cert for the DB, you would create a Kubernetes Secret, then use a `volumeMount` in the Flyte deployment to expose it at the desired path, and update the `options` string accordingly. This is not handled automatically by the chart and would require a manual patch or Helm chart customization. Would you like more detail on how to do this step-by-step? Sources: - charts/flyte-binary/values.yaml - c9b3aa88
m
@ancient-wolf-19325 yes
a
My understanding is that to enable SSL connections for your database in Flyte using the flyte-binary Helm chart, you need to: 1. Create a Kubernetes Secret containing your SSL certificate (and key, if needed). For example, if your CA cert is in `ca.crt`:
Copy code
kubectl create secret generic db-ssl-cert --from-file=ca.crt=/path/to/ca.crt -n <namespace>
2. Mount the Secret in the Flyte Pod by editing the Helm chart values or patching the deployment. The flyte-binary chart does not have a direct field for this, so you must use the `extraVolumes` and `extraVolumeMounts` fields (if available), or manually patch the deployment. For example, in your `values.yaml`:
Copy code
extraVolumes:
  - name: db-ssl-cert
    secret:
      secretName: db-ssl-cert

extraVolumeMounts:
  - name: db-ssl-cert
    mountPath: /etc/db-ssl
    readOnly: true
If these fields are not present, you may need to patch the deployment after installation. 3. Configure the Database Connection in your Helm values to use SSL, referencing the mounted cert. In the `configuration.database.options` field, add:
Copy code
configuration:
  database:
    options: "sslmode=verify-full sslrootcert=/etc/db-ssl/ca.crt"
This tells Flyte to use SSL and the provided CA cert for DB connections. 4. Apply the Helm Chart with your updated values:
Copy code
helm upgrade --install flyte-binary flyteorg/flyte-binary -f values.yaml
This process is not explicitly documented for SSL DB certs in the data sources, but is consistent with how other secrets and files are mounted and referenced in Flyte deployments (charts/flyte-binary/values.yaml, c9b3aa88). Sources: - c9b3aa88
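If your chart version does not expose `extraVolumes`/`extraVolumeMounts`, a minimal sketch of patching the deployment directly is below. The deployment name `flyte-binary` and container index 0 are assumptions; check them with `kubectl get deploy -n <namespace>` first, and note that a manual patch is reverted on the next `helm upgrade`.
Copy code
kubectl -n <namespace> patch deployment flyte-binary --type=json -p '[
  {"op": "add", "path": "/spec/template/spec/volumes/-",
   "value": {"name": "db-ssl-cert", "secret": {"secretName": "db-ssl-cert"}}},
  {"op": "add", "path": "/spec/template/spec/containers/0/volumeMounts/-",
   "value": {"name": "db-ssl-cert", "mountPath": "/etc/db-ssl", "readOnly": true}}
]'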
m
@ancient-wolf-19325 i am creating the cert on gcp using terraform google_sql_ssl_cert
a
The Flyte documentation I have ingested does not mention how to use a client SSL certificate generated with Terraform's `google_sql_ssl_cert` resource for connecting Flyte to a Cloud SQL database on GCP. Typically, you would use Terraform to generate the client cert and key, then create a Kubernetes Secret from those files, mount them into the Flyte pod, and reference their paths in the Flyte database configuration (e.g., using the `sslcert` and `sslkey` options in the connection string). If you can clarify whether you need to mount both the client cert/key and the CA cert, or provide more details about your Flyte deployment method, that would help tailor the guidance. If this doesn't help, please reach out to the Flyte support team for further assistance.
m
@ancient-wolf-19325 is this how i should be creating the secret?
Copy code
resource "google_sql_ssl_cert" "flyte_client" {
  common_name = "flyte-client"
  instance    = google_sql_database_instance.flyte_db.name
}

resource "kubernetes_secret" "flyte_db_ssl" {
  metadata {
    name      = "flyte-db-ssl"
    namespace = "flyte"
  }

  data = {
    "client-cert.pem" = google_sql_ssl_cert.flyte_client.cert
    "client-key.pem"  = google_sql_ssl_cert.flyte_client.private_key
    "server-ca.pem"   = google_sql_ssl_cert.flyte_client.server_ca_cert
  }

  type = "Opaque"
}
a
It seems likely that your approach is correct: you are using Terraform to generate a Cloud SQL client certificate and then creating a Kubernetes Secret containing the client cert, client key, and server CA cert. This matches the general pattern Flyte expects for mounting sensitive files, though the Flyte documentation does not explicitly mention this exact workflow or the use of `google_sql_ssl_cert` (charts/flyte-binary/values.yaml, c9b3aa88). You will need to mount this secret into the Flyte pod and reference the file paths in your DB connection options. Would you like more detail on how to mount this secret and configure Flyte to use these files? Sources: - c9b3aa88
m
@ancient-wolf-19325 yes
a
My understanding is that after creating your Kubernetes Secret as shown, you need to mount it into the Flyte pod and update the Flyte database configuration to use the correct SSL options. The Flyte Helm chart does not provide a direct field for mounting arbitrary secrets, so you will likely need to use `extraVolumes` and `extraVolumeMounts` (if available), or patch the deployment manually. Step-by-step: 1. Create the Secret (as you already have):
Copy code
resource "kubernetes_secret" "flyte_db_ssl" {
  metadata {
    name      = "flyte-db-ssl"
    namespace = "flyte"
  }
  data = {
    "client-cert.pem" = google_sql_ssl_cert.flyte_client.cert
    "client-key.pem"  = google_sql_ssl_cert.flyte_client.private_key
    "server-ca.pem"   = google_sql_ssl_cert.flyte_client.server_ca_cert
  }
  type = "Opaque"
}
2. Mount the Secret in the Flyte Pod If your Helm chart supports it, add to your `values.yaml`:
Copy code
extraVolumes:
  - name: db-ssl
    secret:
      secretName: flyte-db-ssl

extraVolumeMounts:
  - name: db-ssl
    mountPath: /etc/db-ssl
    readOnly: true
If not, patch the deployment to mount the secret at `/etc/db-ssl`. 3. Configure Flyte to Use SSL In your `values.yaml` under `configuration.database.options`, add:
Copy code
configuration:
  database:
    options: "sslmode=verify-full sslrootcert=/etc/db-ssl/server-ca.pem sslcert=/etc/db-ssl/client-cert.pem sslkey=/etc/db-ssl/client-key.pem"
This tells Flyte to use the mounted certs for the DB connection. 4. Apply the Helm Chart Deploy or upgrade Flyte with your updated values:
Copy code
helm upgrade --install flyte-binary flyteorg/flyte-binary -f values.yaml
This process is not explicitly documented for client certs in the data sources, but is consistent with Flyte's general approach to mounting secrets and configuring DB options (charts/flyte-binary/values.yaml, c9b3aa88). Sources: - c9b3aa88
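One Cloud SQL-specific caveat worth knowing (an addition, not from the sources above): the server certificate Cloud SQL issues is not signed for the IP or hostname you connect to, so `sslmode=verify-full` frequently fails hostname verification when connecting directly to the instance IP. If that happens, `verify-ca` still validates the server CA and is the mode Google generally documents for direct connections with client certificates:
Copy code
configuration:
  database:
    options: "sslmode=verify-ca sslrootcert=/etc/db-ssl/server-ca.pem sslcert=/etc/db-ssl/client-cert.pem sslkey=/etc/db-ssl/client-key.pem"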
m
@ancient-wolf-19325 I can curl https://flyte.solo.com/console but when i use this in my flyte config https://flyte.solo.com i get Error: Connection Info: [Endpoint: dns:///flyte.solo.com/console, InsecureConnection?: false, AuthMode: Pkce]: rpc error: code = Unavailable desc = name resolver error: produced zero addresses {"json":{"src":"main.go:13"},"level":"error","msg":"Connection Info: [Endpoint: dns:///flyte.solo.com/console, InsecureConnection?: false, AuthMode: Pkce]: rpc error: code = Unavailable desc = name resolver error: produced zero addresses","ts":"2025-10-24T11:28:47+01:00"}
a
The error occurs because the Flyte config `endpoint` should point to the Flyte Admin gRPC endpoint, not the web console URL. The correct value is typically `dns:///flyte.solo.com` (without `/console`), and it must resolve to the gRPC service, not the HTTP UI. Also, ensure the gRPC port (often 443 for TLS or a custom port) is open and accessible. Using the console URL (`/console`) will not work for the SDK or CLI, as it is not a gRPC endpoint (Flyte config reference, local setup docs). Would you like more detail on how to determine the correct endpoint and port for your deployment? Sources: - Local setup docs
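For reference, a minimal `config.yaml` matching this advice (the hostname is taken from your example; adjust `authType` to whatever your deployment actually uses):
Copy code
admin:
  endpoint: dns:///flyte.solo.com
  authType: Pkce
  insecure: false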
m
@ancient-wolf-19325 i deployed the flyte binary helmchart on gke. but the grpc and http ingress class have two different IP addresses. My dns can only point to one.
a
Flyte's default Helm chart creates separate ingress resources (and thus separate IPs) for HTTP (web console/API) and gRPC (SDK/CLI) endpoints. This is a known limitation: many ingress controllers (like NGINX) cannot serve both HTTP 1.x and gRPC on the same ingress due to protocol differences, so Flyte splits them by default (flyte_helm_generated.yaml). If your DNS can only point to one IP, you must use an ingress controller or configuration that supports multiplexing both protocols on a single IP, such as using a service mesh (e.g., Istio) or a custom ingress setup. This limitation and workaround are not explicitly documented in the official sources, but are discussed in deployment manifests and community issues. Would you like more detail on how to configure a single ingress for both HTTP and gRPC in GKE? Sources: - flyte_helm_generated.yaml
m
@ancient-wolf-19325 yes
a
To configure Flyte on GKE with a single ingress and IP for both HTTP and gRPC, you need to use an ingress controller that supports multiplexing both protocols. NGINX ingress cannot serve HTTP 1.x and gRPC on the same ingress, but solutions like Istio or Traefik (with IngressRoute) can. For Istio, you would deploy a Gateway and VirtualService that route both HTTP and gRPC traffic to the appropriate Flyte services behind a single external IP (flytekit-identity-aware-proxy/README.md). For Traefik, you can define an IngressRoute that matches both HTTP and gRPC paths and points to the same backend service (flyte/issues/4962). If you want to keep using NGINX, you would need to use two DNS names or switch to a controller that supports both protocols on one IP. Would you like a step-by-step example for Istio or Traefik? Sources: - flytekit-identity-aware-proxy/README.md - flyte/issues/4962
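For illustration only, a rough sketch of the Istio variant, assuming an installed istio-ingressgateway, a TLS secret named flyte-tls in the gateway namespace, and Flyte services exposing HTTP on 8088 and gRPC on 8089 (the service names and ports are assumptions; confirm them with `kubectl get svc -n flyte`):
Copy code
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: flyte-gateway
  namespace: flyte
spec:
  selector:
    istio: ingressgateway
  servers:
    - port: {number: 443, name: https, protocol: HTTPS}
      tls: {mode: SIMPLE, credentialName: flyte-tls}
      hosts: ["flyte.example.com"]
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: flyte
  namespace: flyte
spec:
  hosts: ["flyte.example.com"]
  gateways: ["flyte-gateway"]
  http:
    # gRPC calls are addressed by service path, so route those prefixes to the gRPC port
    - match:
        - uri: {prefix: /flyteidl.service.}
        - uri: {prefix: /grpc.}
      route:
        - destination: {host: <flyte-grpc-service>, port: {number: 8089}}
    # everything else (console and REST API) goes to the HTTP port
    - route:
        - destination: {host: <flyte-http-service>, port: {number: 8088}}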
m
@ancient-wolf-19325 the health check for grpc is failing on the load balancer on gcp. this is what's in my helm values
Copy code
separateGrpcIngress: true
  # commonAnnotations Add common annotations to all ingress resources
  commonAnnotations:
    kubernetes.io/ingress.class: "gce"
    networking.gke.io/managed-certificates: "flyte-ssl-cert"
    ingress.kubernetes.io/app-root: "/console"
  # httpAnnotations Add annotations to http ingress resource
  httpAnnotations:
    kubernetes.io/ingress.global-static-ip-name: "flyte-http-ip"
  # grpcAnnotations Add annotations to grpc ingress resource
  grpcAnnotations:
    kubernetes.io/ingress.global-static-ip-name: "flyte-grpc-ip"
    cloud.google.com/app-protocols: '{"grpc-port":"HTTP2"}'
a
It seems likely that the gRPC health check is failing because GCP's HTTP(S) Load Balancer requires a valid HTTP/2 health check path and expects the backend (FlyteAdmin) to serve TLS and respond to health checks on the correct port. By default, Flyte's gRPC ingress may not be configured to serve HTTPS with a valid health check endpoint, or the health check path/port may not match what FlyteAdmin exposes (flyte_helm_generated.yaml, flytekit-identity-aware-proxy/README.md). You may need to: - Ensure FlyteAdmin is serving TLS on the gRPC port (required for HTTP/2 health checks). - Set the health check path to `/grpc.health.v1.Health` or another valid endpoint that FlyteAdmin responds to. - Use a BackendConfig to customize the health check for the gRPC ingress, specifying the correct port and path. Would you like more detail on how to configure the BackendConfig and FlyteAdmin for GCP health checks? Sources: - flyte_helm_generated.yaml - flytekit-identity-aware-proxy/README.md
m
@ancient-wolf-19325 yes
a
To configure GCP's load balancer health check for Flyte's gRPC ingress, you should create a BackendConfig resource that specifies an HTTP/2 health check on the correct port and path. For FlyteAdmin, the health check path is typically `/grpc.health.v1.Health` and the port is the gRPC port (often 81 or 443, depending on your setup). You must also ensure that FlyteAdmin is serving TLS on this port, as GCP requires HTTP/2 with TLS for gRPC health checks (flytekit-identity-aware-proxy/README.md). Example BackendConfig:
Copy code
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: flyte-grpc-backendconfig
  namespace: flyte
spec:
  healthCheck:
    port: 443
    requestPath: /grpc.health.v1.Health
    type: HTTP
Then, attach it to the Kubernetes Service that backs your gRPC ingress; on GKE, a BackendConfig is referenced through a Service annotation (not an Ingress annotation):
Copy code
cloud.google.com/backend-config: '{"default": "flyte-grpc-backendconfig"}'
This setup ensures the load balancer uses the correct health check for the gRPC backend. Make sure your FlyteAdmin is configured to serve TLS on the gRPC port and responds to the health check path. Sources: - flytekit-identity-aware-proxy/README.md
m
@ancient-wolf-19325 i have set up another dns pointing to the grpc ingress ip. but it still doesn't work.
a
Pointing a DNS record to the gRPC ingress IP is necessary, but not sufficient: you must also ensure that the gRPC ingress is correctly configured to route traffic to the FlyteAdmin gRPC service, that the correct port (usually 443 for TLS) is open, and that your Flyte client config uses the new DNS name without any path (e.g., `dns:///grpc.flyte.solo.com`). Additionally, the ingress must support HTTP/2 and TLS for gRPC, and the backend service must be healthy and reachable (flyte_helm_generated.yaml, flyte/issues/4962). If it still doesn't work, check: - The ingress resource is correctly routing gRPC traffic to FlyteAdmin. - The health check is passing and the backend is marked healthy. - Your client config uses the correct endpoint and port. Would you like more detail on how to debug each of these steps? Sources: - flyte/issues/4962 - flyte_helm_generated.yaml
m
@ancient-wolf-19325 overwrite grpcAnnotations to have different host
a
The Flyte Helm chart does not provide a direct way to set a different host for the gRPC ingress via `grpcAnnotations`. The `host` for the gRPC ingress is controlled by the `host` field under the `ingress` section in your `values.yaml`, not by annotations (charts/flyte-binary/values.yaml). If you want the gRPC ingress to use a different host than the HTTP ingress, you must set `grpcIngressClassName`, `grpcTls`, and `host` fields accordingly, but there is no supported way to set a different host just via `grpcAnnotations`. Would you like more detail on how to configure separate hosts for HTTP and gRPC ingresses? Sources: - charts/flyte-binary/values.yaml
m
@ancient-wolf-19325 run me through deploying flyte binary onto gke and expose the service using cloud dns
a
To deploy Flyte binary on GKE and expose it using Cloud DNS, follow these summarized steps: 1. Provision GKE and Cloud SQL (if needed): Create a GKE cluster and, if using a managed database, a Cloud SQL instance. Ensure you have a GCS bucket for Flyte data. 2. Install Flyte Binary via Helm: Add the Flyte Helm repo, then install the chart with a customized `values.yaml` specifying GCP storage, database, and ingress settings. Set up ingress with the GCE ingress class and managed certificate, and specify a static IP for the ingress. 3. Configure Cloud DNS: Reserve a static IP for the ingress, then create a Cloud DNS A record pointing your domain (e.g., `flyte.example.com`) to this IP. Ensure your managed certificate matches the domain. 4. Access Flyte: Once DNS propagates and the certificate is active, access the Flyte console at `https://flyte.example.com/console` and configure your Flyte CLI/SDK to use the same domain for gRPC. This process is referenced in the flyte-binary Helm chart and deployment manifests, but not as a single end-to-end guide. Would you like a step-by-step example Helm values file and the exact commands for each step? Sources: - flyte-binary Helm chart - deployment manifests
m
@ancient-wolf-19325 yes
a
My understanding is that you want a step-by-step guide for deploying Flyte binary on GKE and exposing it via Cloud DNS. Here's a concise walkthrough based on the official Helm chart and deployment manifests: 1. Prerequisites - GKE cluster and kubectl configured. - GCS bucket for Flyte data. - (Optional) Cloud SQL instance for the database. - A domain managed in Cloud DNS. 2. Reserve a Static IP and Managed Certificate
Copy code
gcloud compute addresses create flyte-ip --global
gcloud compute addresses describe flyte-ip --global
- Note the reserved IP address. - Create a managed certificate:
Copy code
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: flyte-ssl-cert
  namespace: flyte
spec:
  domains:
    - flyte.example.com
Apply with `kubectl apply -f managed-cert.yaml`. 3. Configure Cloud DNS - In Cloud DNS, create an A record for `flyte.example.com` pointing to the reserved static IP. 4. Prepare Helm values.yaml Example (edit for your environment):
Copy code
ingress:
  enabled: true
  ingressClassName: gce
  host: flyte.example.com
  commonAnnotations:
    kubernetes.io/ingress.class: "gce"
    networking.gke.io/managed-certificates: "flyte-ssl-cert"
  httpAnnotations:
    kubernetes.io/ingress.global-static-ip-name: "flyte-ip"
  tls: true

configuration:
  storage:
    type: gcs
    gcs:
      bucket: <your-gcs-bucket>
  database:
    host: <cloudsql-ip>
    port: 5432
    dbname: flyte
    username: <db-user>
    password: <db-password>
    options: "sslmode=verify-full sslrootcert=/etc/db-ssl/server-ca.pem"
(If using Cloud SQL Auth Proxy, adjust host and add sidecar/proxy as needed.) 5. Install Flyte Binary
Copy code
helm repo add flyteorg https://flyteorg.github.io/flyte
helm repo update
kubectl create namespace flyte
helm install flyte-binary flyteorg/flyte-binary -n flyte -f values.yaml
6. Wait for Ingress and Certificate - Wait for the ingress to get the static IP and the managed certificate to become active. - Confirm with:
Copy code
kubectl get ingress -n flyte
kubectl describe managedcertificate flyte-ssl-cert -n flyte
7. Access Flyte - Visit `https://flyte.example.com/console` in your browser. - For CLI/SDK, set endpoint to `dns:///flyte.example.com`. This process is based on the flyte-binary Helm chart and deployment manifests. Sources: - deployment manifests
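A quick sanity check once DNS and the certificate have propagated (hostname from the example above):
Copy code
# should resolve to the reserved static IP
nslookup flyte.example.com
# should return an HTTP response once the managed certificate is Active
curl -I https://flyte.example.com/console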
m
@ancient-wolf-19325 do i set separateGrpcIngress: true
a
If you are using the GCE ingress controller on GKE, you should set `separateGrpcIngress: true` in your Helm values. This is because GCE ingress cannot serve both HTTP (for the Flyte console/API) and gRPC (for the SDK/CLI) on the same ingress resource due to protocol differences, so Flyte needs to create separate ingress resources for each (flyte_helm_generated.yaml). Would you like more detail on how to configure DNS and certificates for both ingresses? Sources: - deployment manifests
m
@ancient-wolf-19325 yes
a
You will need to configure two DNS records, one for the HTTP ingress (e.g., `flyte.example.com`) and one for the gRPC ingress (e.g., `grpc.flyte.example.com`), each pointing to their respective static IPs. You should also create a managed certificate for each domain and set the `host` field for each ingress in your Helm values. This is necessary because with `separateGrpcIngress: true`, GCE ingress will create two separate load balancers with different IPs (flyte_helm_generated.yaml). Would you like a sample Helm values configuration for this setup? Sources: - deployment manifests
m
@ancient-wolf-19325 yes
a
Here is a sample Helm values configuration for Flyte binary on GKE with separate HTTP and gRPC ingresses, each with its own DNS and managed certificate:
Copy code
ingress:
  enabled: true
  separateGrpcIngress: true
  host: flyte.example.com
  grpcHost: grpc.flyte.example.com
  commonAnnotations:
    kubernetes.io/ingress.class: "gce"
  httpAnnotations:
    kubernetes.io/ingress.global-static-ip-name: "flyte-http-ip"
    networking.gke.io/managed-certificates: "flyte-http-cert"
  grpcAnnotations:
    kubernetes.io/ingress.global-static-ip-name: "flyte-grpc-ip"
    networking.gke.io/managed-certificates: "flyte-grpc-cert"
  tls: true
You must create two static IPs, two managed certificates, and two Cloud DNS A records (one for each host). This setup matches the requirements for GCE ingress with `separateGrpcIngress: true` (flyte_helm_generated.yaml). Sources: - deployment manifests
m
@ancient-wolf-19325 grpcHost doesn't work. it's still using the http host
a
The Flyte Helm chart does not natively support a `grpcHost` field; only the `host` field is used for both HTTP and gRPC ingresses. To use different hosts for HTTP and gRPC, you must manually edit the generated ingress resources after deployment or use a custom Helm chart/template. This limitation is not explicitly documented, but is evident from the Helm values file and deployment manifests. Would you like more detail on how to manually patch the ingress or customize the chart for separate hosts? Sources: - deployment manifests
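As a rough sketch of the manual patch (the ingress name flyte-binary-grpc is a guess; confirm it with `kubectl get ingress -n flyte`, and remember the patch is overwritten on every `helm upgrade`):
Copy code
kubectl -n flyte patch ingress flyte-binary-grpc --type=json \
  -p '[{"op": "replace", "path": "/spec/rules/0/host", "value": "grpc.flyte.example.com"}]'
# if the ingress also sets spec.tls hosts, patch those entries the same way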
m
@ancient-wolf-19325 {"json":{"src":"auth_interceptor.go:213"},"level":"debug","msg":"Request failed due to [rpc error: code = Unavailable desc = connection error: desc = \"transport: authentication handshake failed: EOF\"]. If it's an unauthenticated error, we will attempt to establish an authenticated context.","ts":"2025-10-24T125148+01:00"} Error: Connection Info: [Endpoint: dns:///flytegrpc.solo.com, InsecureConnection?: false, AuthMode: Pkce]: rpc error: code = Unavailable desc = connection error: desc = "transport: authentication handshake failed: EOF" {"json":{"src":"main.go:13"},"level":"error","msg":"Connection Info: [Endpoint: dns:///flytegrpc.solo.com, InsecureConnection?: false, AuthMode: Pkce]: rpc error: code = Unavailable desc = connection error: desc = \"transport: authentication handshake failed: EOF\"","ts":"2025-10-24T125148+01:00"}
a
This error (`transport: authentication handshake failed: EOF`) usually means the gRPC client is trying to connect with TLS, but the backend (FlyteAdmin) is not serving TLS on the gRPC port, or the load balancer/ingress is not forwarding TLS traffic correctly. For GCP/GKE, the gRPC ingress must terminate TLS and forward HTTP/2 traffic to FlyteAdmin, which must also be configured to serve TLS if the backend protocol is HTTPS (flytekit-identity-aware-proxy/README.md, flyte_helm_generated.yaml). Check that: - The gRPC ingress is set up for HTTPS (not plain HTTP) and forwards to the correct backend port. - FlyteAdmin is configured to serve TLS on the gRPC port, or the ingress is terminating TLS and forwarding HTTP/2 to a plain backend. Would you like more detail on how to verify and fix these settings? Sources: - flytekit-identity-aware-proxy/README.md
m
@ancient-wolf-19325 yes
a
To verify and fix this, follow these steps: 1. Check Ingress TLS Termination: - If your GKE ingress is terminating TLS (i.e., handling HTTPS at the load balancer), your FlyteAdmin backend should serve plain gRPC (HTTP/2, no TLS) on its gRPC port (usually 81). - If you want FlyteAdmin to serve TLS directly, you must mount the certificate and configure FlyteAdmin to use it, and the ingress must forward raw TLS traffic (rare in GKE). 2. Verify Backend Protocols: - For GKE, the recommended setup is: - Ingress terminates TLS (using a managed certificate). - Ingress forwards HTTP/2 (h2c) to FlyteAdmin's gRPC port (plain, not TLS). - Your ingress annotation should include `cloud.google.com/app-protocols: '{"grpc-port":"HTTP2"}'`. - The backend service for FlyteAdmin should point to port 81 (gRPC). 3. Check FlyteAdmin Configuration: - In your Helm values, ensure `configmap.adminServer.server.security.secure: false` (so FlyteAdmin serves plain gRPC). - If you set `secure: true`, FlyteAdmin expects TLS on the backend, which is not compatible with GKE's default ingress TLS termination (flyte_helm_generated.yaml). 4. Confirm Health and Connectivity: - Ensure the gRPC ingress is healthy and the backend is marked healthy in GCP. - Test connectivity with `grpcurl` or similar tools to the external DNS name and port. If you need to serve TLS all the way to FlyteAdmin (rare), you must mount the cert and set `secure: true` in FlyteAdmin config, and configure the ingress to use TCP passthrough (not standard for GKE HTTP(S) load balancer). Would you like a sample values.yaml and ingress manifest for this setup? Sources: - flytekit-identity-aware-proxy/README.md - flyte_helm_generated.yaml
m
@ancient-wolf-19325 yes, and to test the connectivity command
a
Here's a sample `values.yaml` for Flyte binary on GKE with gRPC ingress using GKE's HTTP(S) load balancer (TLS terminated at ingress, backend is plain HTTP/2):
Copy code
ingress:
  enabled: true
  separateGrpcIngress: true
  host: flyte.example.com
  grpcHost: grpc.flyte.example.com
  commonAnnotations:
    kubernetes.io/ingress.class: "gce"
  httpAnnotations:
    kubernetes.io/ingress.global-static-ip-name: "flyte-http-ip"
    networking.gke.io/managed-certificates: "flyte-http-cert"
  grpcAnnotations:
    kubernetes.io/ingress.global-static-ip-name: "flyte-grpc-ip"
    networking.gke.io/managed-certificates: "flyte-grpc-cert"
    cloud.google.com/app-protocols: '{"grpc-port":"HTTP2"}'
  tls: true

configuration:
  adminServer:
    server:
      security:
        secure: false  # FlyteAdmin serves plain gRPC, TLS is terminated at ingress
Test gRPC connectivity (replace with your gRPC DNS and port, usually 443):
Copy code
grpcurl -v grpc.flyte.example.com:443 flyteidl.service.AdminService/ListProjects
Or, for a simple connectivity check:
Copy code
grpcurl -v grpc.flyte.example.com:443 list
You should see a list of available gRPC services if the connection is successful. Sources: - https://github.com/flyteorg/flyte/issues/4962 - https://github.com/flyteorg/flyte/pull/3964
m
@ancient-wolf-19325 I deployed flyte binary onto gke, where grpc ingress and http ingress have different dns. the health check for the grpc is failing. my backendconfig for the grpc svc is this. I have tested out this endpoint by port forwarding and it works.
Copy code
spec = {
      healthCheck = {
        requestPath = "/grpc.health.v1.Health/Check"
        type        = "HTTP2"
        port        = 8089
      }
    }
a
Your BackendConfig is using `requestPath = "/grpc.health.v1.Health/Check"` with `type = "HTTP2"` and `port = 8089`. However, GKE's HTTP/2 health checks expect a standard HTTP/2 response, but FlyteAdmin's gRPC health endpoint does not respond with a valid HTTP/2 (non-gRPC) response, causing the health check to fail. This is a known limitation: GKE's load balancer cannot perform a true gRPC health check, only a plain HTTP(S)/HTTP2 request against a path (flyte/issues/3730, flyte_helm_generated.yaml). The recommended workaround is to configure the health check to use FlyteAdmin's HTTP health endpoint (e.g., `/healthcheck` on port 8088), which returns a standard HTTP 200 response. Update your BackendConfig for the gRPC service to:
Copy code
spec = {
  healthCheck = {
    requestPath = "/healthcheck"
    type        = "HTTP"
    port        = 8088
  }
}
This will allow the GKE load balancer to mark the backend as healthy, even though the service is used for gRPC. Would you like more detail on why GKE cannot use the gRPC health endpoint directly? Sources: - flyte/issues/3730 - flyte_helm_generated.yaml
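As a side note (not something the sources spell out), a BackendConfig only takes effect if the gRPC Service references it; on GKE that is done with a Service annotation, for example (the service name is a placeholder):
Copy code
kubectl -n flyte annotate service <your-grpc-service> \
  cloud.google.com/backend-config='{"default": "flyte-grpc-backendconfig"}' --overwrite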
m
@ancient-wolf-19325 healthcheck now passes. but I am still getting
Copy code
{"json":{"src":"auth_interceptor.go:213"},"level":"debug","msg":"Request failed due to [rpc error: code = Unavailable desc = connection error: desc = \"transport: authentication handshake failed: read tcp 10.100.0.13:59344->34.110.131.158:443: read: connection reset by peer\"]. If it's an unauthenticated error, we will attempt to establish an authenticated context.","ts":"2025-10-28T11:23:43Z"}
Error: Connection Info: [Endpoint: dns:///flytegrpc.solo.com, InsecureConnection?: false, AuthMode: Pkce]: rpc error: code = Unavailable desc = connection error: desc = "transport: authentication handshake failed: read tcp 10.100.0.13:59344->34.110.131.158:443: read: connection reset by peer"
{"json":{"src":"main.go:13"},"level":"error","msg":"Connection Info: [Endpoint: dns:///flytegrpc.solo.com, InsecureConnection?: false, AuthMode: Pkce]: rpc error: code = Unavailable desc = connection error: desc = \"transport: authentication handshake failed: read tcp 10.100.0.13:59344->34.110.131.158:443: read: connection reset by peer\"","ts":"2025-10-28T11:23:43Z"}
when i try to do flytectl get projects. my config is
Copy code
admin:
  endpoint: dns:///flytegrpc.solo.com
  # endpoint: localhost:8089
  authType: Pkce
  insecure: false
  insecureSkipVerify: true
logger:
  show-source: true
  level: 6
a
This error (`transport: authentication handshake failed: read tcp ... connection reset by peer`) indicates that the gRPC client is attempting a TLS handshake on port 443, but the backend (FlyteAdmin) is not serving TLS on that port, or the GKE ingress is not forwarding TLS traffic as expected. In the GKE HTTP(S) load balancer setup, the ingress typically terminates TLS and forwards plain HTTP/2 (h2c) to FlyteAdmin, which should have `secure: false` in its config (flyte_helm_generated.yaml, flyte/issues/4962). If your FlyteAdmin is running with `secure: false` (plain gRPC), but your client is trying to connect with TLS (`insecure: false`), the handshake will fail. You must either: - Set `insecure: true` in your config to use plaintext (no TLS), or - Reconfigure FlyteAdmin to serve TLS on the gRPC port (not typical for GKE ingress). Most common solution: Set `insecure: true` in your config:
Copy code
admin:
  endpoint: dns:///flytegrpc.solo.com
  authType: Pkce
  insecure: true
  insecureSkipVerify: true
This matches the GKE ingress pattern, where TLS is terminated at the load balancer and backend is plain HTTP/2. Would you like more detail on why this is required for GKE ingress? Sources: - flyte/issues/4962 - flyte_helm_generated.yaml
m
@ancient-wolf-19325 set it to true but now I get
Copy code
{"json":{"src":"auth_interceptor.go:213"},"level":"debug","msg":"Request failed due to [rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read tcp 11.100.0.13:59354->35.112.131.158:443: read: connection reset by peer\"]. If it's an unauthenticated error, we will attempt to establish an authenticated context.","ts":"2025-10-28T11:26:18Z"}
Error: Connection Info: [Endpoint: dns:///flytegrpc.solo.com, InsecureConnection?: true, AuthMode: Pkce]: rpc error: code = Unavailable desc = connection error: desc = "error reading server preface: read tcp 11.100.0.13:59354->35.112.131.158:443: read: connection reset by peer"
{"json":{"src":"main.go:13"},"level":"error","msg":"Connection Info: [Endpoint: dns:///flytegrpc.solo.com, InsecureConnection?: true, AuthMode: Pkce]: rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: read tcp 11.100.0.13:59354->35.112.131.158:443: read: connection reset by peer\"","ts":"2025-10-28T11:26:18Z"}
a
This error means the client is now trying to connect without TLS (`insecure: true`), but the GKE ingress expects TLS on port 443. With GKE HTTP(S) load balancer, the external port 443 always expects TLS, even if the backend is plain HTTP/2, so setting `insecure: true` will not work for a public DNS/port 443 endpoint (flyte/issues/4962). Summary: - If you connect to `flytegrpc.solo.com:443`, you must use `insecure: false` (TLS). - But FlyteAdmin backend must be plain HTTP/2 (`secure: false`), and the ingress must terminate TLS and forward HTTP/2 to the backend. If you still get handshake errors with `insecure: false`, it usually means the ingress is not forwarding HTTP/2 traffic correctly, or the backend protocol annotation is missing/misconfigured. Checklist: 1. Your gRPC ingress must have:
Copy code
cloud.google.com/app-protocols: '{"grpc-port":"HTTP2"}'
2. The backend service must point to the correct FlyteAdmin gRPC port (usually 81 or 8089). 3. FlyteAdmin must have `secure: false` (plain gRPC). 4. Your client config should be:
Copy code
admin:
  endpoint: dns:///flytegrpc.solo.com:443
  authType: Pkce
  insecure: false
  insecureSkipVerify: true
Would you like a step-by-step troubleshooting checklist for GKE gRPC ingress? Sources: - flyte/issues/4962
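To see which hop is failing, you can also probe the public endpoint directly with grpcurl (TLS is the default when `-plaintext` is not passed; this assumes FlyteAdmin's gRPC reflection and health services are reachable through the load balancer):
Copy code
# list services via reflection over TLS
grpcurl flytegrpc.solo.com:443 list
# call the standard health check
grpcurl flytegrpc.solo.com:443 grpc.health.v1.Health/Check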
m
@ancient-wolf-19325 there's no "FlyteAdmin must have secure: false (plain gRPC)" in my helmchart for flyte binary
a
The Flyte Binary Helm chart does not expose a direct `secure: false` option for FlyteAdmin in its `values.yaml`. Instead, FlyteAdmin in Flyte Binary is configured internally to serve plain gRPC by default, unless you explicitly enable TLS via advanced configuration overrides. There is no top-level `secure` field for FlyteAdmin in the standard Flyte Binary Helm values (charts/flyte-binary/values.yaml). If you have not set any custom TLS options for FlyteAdmin in your values, it will serve plain gRPC, which is compatible with GKE ingress TLS termination. If you need to enable TLS for FlyteAdmin, you must use advanced config overrides, which are not documented as a standard field in the Flyte Binary chart. Would you like more detail on how to override FlyteAdmin configuration for TLS in Flyte Binary? Sources: - https://github.com/flyteorg/flyte/pull/3964 - https://github.com/flyteorg/flyte/issues/4962
m
@ancient-wolf-19325 no custom TLS options enabled. so what should my ingress annotation look like and the flyte config.yaml
a
It seems likely that, with no custom TLS enabled for FlyteAdmin, your GKE ingress should terminate TLS and forward HTTP/2 (h2c) to the backend. Your gRPC ingress should have the following annotation:
Copy code
cloud.google.com/app-protocols: '{"grpc-port":"HTTP2"}'
and should point to the FlyteAdmin gRPC service/port (usually 81 or 8089). The ingress should use a managed certificate and static IP as needed. Your `config.yaml` for Flyte CLI/SDK should be:
Copy code
admin:
  endpoint: dns:///flytegrpc.solo.com:443
  authType: Pkce
  insecure: false
  insecureSkipVerify: true
This setup matches GKE's pattern: TLS is terminated at the ingress, and FlyteAdmin serves plain gRPC. If you still get handshake errors, double-check that the backend service and port are correct and that the ingress is healthy. Sources: - flyte/issues/4962 - flyte_helm_generated.yaml
m
@ancient-wolf-19325 {"json":{"src":"auth_interceptor.go:213"},"level":"debug","msg":"Request failed due to [rpc error: code = Unavailable desc = unexpected HTTP status code received from server: 502 (Bad Gateway); transport: received unexpected content-type \"text/html; charset=UTF-8\"]. If it's an unauthenticated error, we will attempt to establish an authenticated context.","ts":"2025-10-28T113653Z"}
a
This error (`unexpected HTTP status code received from server: 502 (Bad Gateway); transport: received unexpected content-type "text/html; charset=UTF-8"`) means the GKE ingress is not forwarding gRPC traffic correctly to the FlyteAdmin backend. Most commonly, this happens if the backend service/port is wrong, the `cloud.google.com/app-protocols` annotation is missing or incorrect, or the backend is not healthy (flyte/issues/4962). Double-check: - The gRPC ingress has `cloud.google.com/app-protocols: '{"grpc-port":"HTTP2"}'`. - The backend service points to the correct FlyteAdmin gRPC port (usually 81 or 8089). - The backend is healthy in GCP. - Your client config uses `insecure: false` and port 443. Would you like a troubleshooting checklist for these settings? Sources: - flyte/issues/4962
m
@ancient-wolf-19325 running grpcurl -plaintext localhost:8089 grpc.health.v1.Health/Check is healthy after i port forwarded the grpc service. I do have cloud.google.com/app-protocols: '{"grpc-port":"HTTP2"}', and the client config uses insecure: false
a
If `grpcurl -plaintext localhost:8089 grpc.health.v1.Health/Check` works via port-forward, but you get 502 Bad Gateway and handshake/content-type errors through the GKE ingress with `insecure: false`, it means the ingress is not forwarding HTTP/2 (h2c) traffic as expected. GKE HTTP(S) load balancer terminates TLS and expects to forward HTTP/2 (not HTTP/1.1, not TLS) to the backend, but FlyteAdmin must be listening for h2c (plain HTTP/2) on the backend port (flyte/issues/4962). Key points: - Your client config must use `insecure: false` and port 443. - The ingress must have `cloud.google.com/app-protocols: '{"grpc-port":"HTTP2"}'`. - The backend service must point to the correct port (8089). - FlyteAdmin must be listening for h2c (plain HTTP/2) on 8089. A 502 with content-type "text/html" means the ingress could not connect to the backend as HTTP/2, so it returned an error page. This is a common GKE issue if the backend is not healthy, not listening for h2c, or the service/port is misconfigured. Would you like more detail on how to verify the backend service and port, or how to debug GKE ingress backend connectivity? Sources: - flyte/issues/4962
m
@ancient-wolf-19325 yes
a
To debug GKE ingress backend connectivity for gRPC: 1. Verify Backend Service and Port: Ensure your gRPC ingress backend service points to the correct FlyteAdmin port (e.g., 8089) and that the FlyteAdmin pod is listening on that port for plain HTTP/2 (h2c). You can check with:
Copy code
kubectl get svc -n flyte
kubectl describe svc <your-grpc-service> -n flyte
kubectl get pods -n flyte
kubectl describe pod <flyteadmin-pod> -n flyte
2. Check Ingress Health and Annotations: Confirm the ingress resource has the annotation:
Copy code
cloud.google.com/app-protocols: '{"grpc-port":"HTTP2"}'
and that the backend is marked healthy in the GCP console. 3. BackendConfig Health Check: The health check for the gRPC backend should use the HTTP health endpoint (e.g., `/healthcheck` on port 8088), not the gRPC endpoint, as GKE cannot perform true gRPC health checks (flyte/issues/3730). 4. Test Direct Pod Connectivity: Port-forward the gRPC service and test with `grpcurl -plaintext localhost:8089 list` to confirm FlyteAdmin is healthy and serving h2c. 5. Check GKE Ingress Logs and Events: Use:
Copy code
kubectl describe ingress <your-grpc-ingress> -n flyte
kubectl get events -n flyte --sort-by=.lastTimestamp
to look for errors or misconfigurations; with the GKE-managed GCE ingress there is no in-cluster controller pod to read logs from, so rely on ingress events and the backend service health shown in the GCP console. If all settings are correct and you still get 502, try restarting the ingress and backend pods, and ensure there are no network policies or firewalls blocking traffic. Sources: - flyte/issues/4962 - flyte/issues/3730
m
@ancient-wolf-19325 show me how to enable TLS for the flyte binary helmchart server
a
To enable TLS for the Flyte binary Helm chart server (FlyteAdmin), you must provide a TLS certificate and key, and override the FlyteAdmin configuration to serve HTTPS/gRPC with TLS. The Flyte binary Helm chart does not expose a direct `secure: true` field, so you need to use the `extraConfig` or `configuration` override to inject the necessary settings (flyte-binary values.yaml, flyte/issues/4962). Example: 1. Create a Kubernetes secret with your TLS cert and key:
Copy code
kubectl create secret tls flyteadmin-tls --cert=server.crt --key=server.key -n flyte
2. In your `values.yaml`, mount the secret and override the FlyteAdmin config:
Copy code
flyteadmin:
  extraVolumeMounts:
    - name: tls
      mountPath: /etc/tls
      readOnly: true
  extraVolumes:
    - name: tls
      secret:
        secretName: flyteadmin-tls

configuration:
  adminServer:
    server:
      security:
        secure: true
        ssl:
          certificateFile: "/etc/tls/tls.crt"
          keyFile: "/etc/tls/tls.key"
3. Deploy with Helm:
Copy code
helm upgrade --install flyte-binary flyteorg/flyte-binary -n flyte -f values.yaml
This will enable FlyteAdmin to serve TLS on its HTTP and gRPC ports. Sources: - flyte/issues/4962 - flyte-binary values.yaml
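To confirm the change took effect, one option is a quick handshake test against the pod (a sketch; the Service name is a placeholder and the gRPC port is assumed to be 8089):
Copy code
# terminal 1
kubectl -n flyte port-forward svc/<flyte-grpc-service> 8089:8089
# terminal 2: a successful TLS handshake prints the certificate chain
openssl s_client -connect localhost:8089 -alpn h2 </dev/null | head -20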
m
@ancient-wolf-19325 i created the cert using. how do i mount this
Copy code
resource "kubernetes_manifest" "flyte_managed_cert" {
  manifest = {
    apiVersion = "<http://networking.gke.io/v1|networking.gke.io/v1>"
    kind       = "ManagedCertificate"
    metadata = {
      name      = "flyte-ssl-cert"
      namespace = "flyte"
    }
    spec = {
      domains = ["flyte.${var.domain_name}"]
    }
  }
}
a
A GKE `ManagedCertificate` is not a Kubernetes TLS secret and cannot be mounted directly into a pod. It is used by the GKE ingress controller to automatically provision and manage TLS certificates for your load balancer, terminating TLS at the ingress level (flyte/issues/3730). You do not mount a `ManagedCertificate` into FlyteAdmin; instead, you reference it in your ingress annotations, and the ingress controller handles TLS. How to use it: - Add the annotation to your ingress:
Copy code
networking.gke.io/managed-certificates: "flyte-ssl-cert"
- The ingress will terminate TLS using the managed certificate, and your backend (FlyteAdmin) should serve plain HTTP/2 (not TLS). If you want to enable TLS directly in FlyteAdmin, you must create a Kubernetes TLS secret from your own certificate and key, not a ManagedCertificate. Would you like more detail on the difference between ManagedCertificate and mounting a TLS secret? Sources: - flyte/issues/3730
m
@ancient-wolf-19325 is there anyway i can avoid enabling TLS in the flyteadmin? I just want to expose the flyte grpc endpoint so i can run flytectl locally
a
Yes, you can avoid enabling TLS in FlyteAdmin. The recommended approach on GKE is to let the GKE ingress (with a ManagedCertificate) terminate TLS at the load balancer, while FlyteAdmin serves plain gRPC (no TLS) on its backend port. This way, your gRPC endpoint is securely exposed via the ingress, and you do not need to enable TLS in FlyteAdmin itself (flyte/issues/3730, flyte_helm_generated.yaml). Configure your ingress with the managed certificate annotation and ensure the backend service points to FlyteAdmin's plain gRPC port. Your Flyte CLI config should use the public DNS with `insecure: false` (since TLS is terminated at the ingress). Sources: - flyte/issues/3730 - flyte_helm_generated.yaml