# ask-the-community
m
Hi everyone 🙌, we have deployed the *chart flyte-core in EKS*. Everything's okay, but we have problems running workflows remotely. The following error is shown:
flytesnacks git:(master) pyflyte run --remote examples/basics/basics/hello_world.py hello_world_wf
Failed with Exception Code: SYSTEM:Unknown
RPC Failed, with Status: StatusCode.UNAVAILABLE
        details: failed to connect to all addresses; last error: INTERNAL: ipv4:10.1.64.90:443: Trying to connect an http1.x server
        Debug string UNKNOWN:failed to connect to all addresses; last error: INTERNAL: ipv4:10.1.64.90:443: Trying to connect an http1.x server {grpc_status:14, created_time:"2023-09-21T20:46:30.828791+02:00"}
We can access the console UI through our DNS, but we cannot reach the gRPC endpoint. Our flyte config is:
admin:
 # For GRPC endpoints you might want to use dns:///flyte.myexample.com
 endpoint: dns:///<our dns>
 authType: Pkce
 insecureSkipVerify: true # only required if using a self-signed cert. Caution: not to be used in production
 insecure: true # only required when using insecure ingress. With secure ingress, setting this to true may cause an "unavailable desc" error.
logger:
 show-source: true
 level: 6
Any idea what is happening? Thank you ❤️
We have this configuration for our AWS load balancer:
ingress:
    albSSLRedirect: true
    separateGrpcIngress: true
    annotations:
      # -- aws-load-balancer-controller v2.1 or higher is required - https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.1/
      # For EKS if using ALB (https://kubernetes-sigs.github.io/aws-load-balancer-controller/guide/ingress/annotations/), these annotations are set
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/tags: service_instance=production
      alb.ingress.kubernetes.io/scheme: internal
      alb.ingress.kubernetes.io/security-groups: $CLUSTER_SG,$CLUSTER_CUSTOM_SG
      alb.ingress.kubernetes.io/subnets: $CLUSTER_SUBNETS
      alb.ingress.kubernetes.io/target-type: ip
      # -- This is the certificate arn of the cert imported in AWS certificate manager.
      alb.ingress.kubernetes.io/certificate-arn: "{{ .Values.userSettings.certificateArn }}"
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
      alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
      # -- Instruct ALB Controller to not create multiple load balancers (and hence maintain a single endpoint for both gRPC and HTTP)
      alb.ingress.kubernetes.io/group.name: flyte
    separateGrpcIngressAnnotations:
      alb.ingress.kubernetes.io/backend-protocol-version: GRPC
Also, our ingress seems okay:
Name:             flyte-core-grpc
Labels:           app.kubernetes.io/managed-by=Helm
Namespace:        flyte
Address:          <our aws load balancer>
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host        Path  Backends
  ----        ----  --------
  *
              /flyteidl.service.SignalService           flyteadmin:81 (10.1.52.244:8089,10.1.56.226:8089)
              /flyteidl.service.SignalService/*         flyteadmin:81 (10.1.52.244:8089,10.1.56.226:8089)
              /flyteidl.service.AdminService            flyteadmin:81 (10.1.52.244:8089,10.1.56.226:8089)
              /flyteidl.service.AdminService/*          flyteadmin:81 (10.1.52.244:8089,10.1.56.226:8089)
              /flyteidl.service.DataProxyService        flyteadmin:81 (10.1.52.244:8089,10.1.56.226:8089)
              /flyteidl.service.DataProxyService/*      flyteadmin:81 (10.1.52.244:8089,10.1.56.226:8089)
              /flyteidl.service.AuthMetadataService     flyteadmin:81 (10.1.52.244:8089,10.1.56.226:8089)
              /flyteidl.service.AuthMetadataService/*   flyteadmin:81 (10.1.52.244:8089,10.1.56.226:8089)
              /flyteidl.service.IdentityService         flyteadmin:81 (10.1.52.244:8089,10.1.56.226:8089)
              /flyteidl.service.IdentityService/*       flyteadmin:81 (10.1.52.244:8089,10.1.56.226:8089)
              /grpc.health.v1.Health                    flyteadmin:81 (10.1.52.244:8089,10.1.56.226:8089)
              /grpc.health.v1.Health/*                  flyteadmin:81 (10.1.52.244:8089,10.1.56.226:8089)
Annotations:  alb.ingress.kubernetes.io/actions.ssl-redirect:
                {"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}
              alb.ingress.kubernetes.io/backend-protocol-version: GRPC
              alb.ingress.kubernetes.io/certificate-arn: <our cert arn>
              alb.ingress.kubernetes.io/group.name: flyte
              alb.ingress.kubernetes.io/listen-ports: [{"HTTP": 80}, {"HTTPS":443}]
              alb.ingress.kubernetes.io/scheme: internal
              alb.ingress.kubernetes.io/security-groups: <our cluster sg>
              alb.ingress.kubernetes.io/subnets: <our cluster subnets>
              alb.ingress.kubernetes.io/tags: service_instance=production
              alb.ingress.kubernetes.io/target-type: ip
              kubernetes.io/ingress.class: alb
              meta.helm.sh/release-name: flyte
              meta.helm.sh/release-namespace: flyte
              nginx.ingress.kubernetes.io/app-root: /console
              nginx.ingress.kubernetes.io/backend-protocol: GRPC
Events:       <none>
We are deploying it in private subnets with an internal load balancer, so it is only accessible through the VPN.
d
considering that you're using a certificate, try setting insecure: false in your config file
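A minimal sketch of the corrected config under that assumption (endpoint placeholder kept as-is). Since the ALB terminates TLS with a valid ACM certificate, the client should connect over TLS; with insecure: true it speaks plaintext to the TLS listener, which plausibly produces the "Trying to connect an http1.x server" error above:

```yaml
admin:
  # For GRPC endpoints you might want to use dns:///flyte.myexample.com
  endpoint: dns:///<our dns>
  authType: Pkce
  # Connect over TLS, since the ALB presents a valid ACM certificate
  insecure: false
  # Cert is CA-issued (not self-signed), so verification can stay on
  insecureSkipVerify: false
logger:
  show-source: true
  level: 6
```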
m
yesss that was it ❤️🦜 as always, thank you so much david!!