`flyte-deps-contour-envoy` pod is stuck in a pending state
# flyte-deployment
`flyte-deps-contour-envoy` pod is stuck in a pending state when I try to deploy the sandbox environment to a cloud Kubernetes cluster with 4 nodes. I'm just following the docs. I see that this has come up before here and here, but neither of the suggested solutions makes sense for me (viz., I don't want to deploy on kind, and I'm not running an nginx pod that would conflict with Contour/Envoy). Could I get some help? Here's the output of `kubectl get pods -n flyte`:
○ → kubectl get pods -n flyte
NAME                                              READY   STATUS    RESTARTS   AGE
flyte-deps-contour-envoy-xp6x2                    0/2     Pending   0          51m
flyte-deps-contour-envoy-v2tnd                    0/2     Pending   0          51m
flyte-deps-contour-envoy-qjfp5                    0/2     Pending   0          51m
flyte-deps-contour-envoy-bz2xj                    0/2     Pending   0          51m
flyte-deps-kubernetes-dashboard-8b7d858b7-2gnk2   1/1     Running   0          51m
minio-7c99cbb7bd-bczp4                            1/1     Running   0          51m
postgres-7b7dd4b66-n2w8g                          1/1     Running   0          51m
flyte-deps-contour-contour-cd4d956d9-tz82c        1/1     Running   0          51m
syncresources-6fb7586cb-szrjx                     1/1     Running   0          49m
flytepropeller-585fb99968-7bc9c                   1/1     Running   0          49m
datacatalog-7875898bf8-zdd6n                      1/1     Running   0          49m
flyteconsole-5667f8f975-q5j7b                     1/1     Running   0          49m
flyte-pod-webhook-8669764d6-8xsjx                 1/1     Running   0          49m
flyteadmin-649d4df4b-sk9px                        1/1     Running   0          49m
flytescheduler-9bdf8bf84-frn9r                    1/1     Running   0          49m
And here's the `kubectl describe` output for one of the pending pods:
○ → kubectl describe pods flyte-deps-contour-envoy-xp6x2  -n flyte
Name:           flyte-deps-contour-envoy-xp6x2
Namespace:      flyte
Priority:       0
Node:           <none>
Labels:         app.kubernetes.io/component=envoy
                app.kubernetes.io/instance=flyte-deps
                app.kubernetes.io/managed-by=Helm
                app.kubernetes.io/name=contour
                controller-revision-hash=67bdb7bd55
                helm.sh/chart=contour-7.10.1
                pod-template-generation=1
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  DaemonSet/flyte-deps-contour-envoy
Init Containers:
  envoy-initconfig:
    Image:      docker.io/bitnami/contour:1.20.1-debian-10-r53
    Port:       <none>
    Host Port:  <none>
    Command:
      contour
    Args:
      bootstrap
      /config/envoy.json
      --xds-address=flyte-deps-contour
      --xds-port=8001
      --resources-dir=/config/resources
      --envoy-cafile=/certs/ca.crt
      --envoy-cert-file=/certs/tls.crt
      --envoy-key-file=/certs/tls.key
    Limits:
      cpu:     100m
      memory:  100Mi
    Requests:
      cpu:     10m
      memory:  50Mi
    Environment:
      CONTOUR_NAMESPACE:  flyte (v1:metadata.namespace)
    Mounts:
      /admin from envoy-admin (rw)
      /certs from envoycert (ro)
      /config from envoy-config (rw)
Containers:
  shutdown-manager:
    Image:      docker.io/bitnami/contour:1.20.1-debian-10-r53
    Port:       <none>
    Host Port:  <none>
    Command:
      contour
    Args:
      envoy
      shutdown-manager
    Liveness:     http-get http://:8090/healthz delay=120s timeout=5s period=20s #success=1 #failure=6
    Environment:  <none>
    Mounts:
      /admin from envoy-admin (rw)
  envoy:
    Image:       docker.io/bitnami/envoy:1.21.1-debian-10-r55
    Ports:       8080/TCP, 8443/TCP, 8002/TCP
    Host Ports:  80/TCP, 443/TCP, 0/TCP
    Command:
      envoy
    Args:
      -c
      /config/envoy.json
      --service-cluster $(CONTOUR_NAMESPACE)
      --service-node $(ENVOY_POD_NAME)
      --log-level info
    Limits:
      cpu:     100m
      memory:  100Mi
    Requests:
      cpu:      10m
      memory:   50Mi
    Liveness:   http-get http://:8002/ready delay=120s timeout=5s period=20s #success=1 #failure=6
    Readiness:  http-get http://:8002/ready delay=10s timeout=1s period=3s #success=1 #failure=3
    Environment:
      CONTOUR_NAMESPACE:  flyte (v1:metadata.namespace)
      ENVOY_POD_NAME:     flyte-deps-contour-envoy-xp6x2 (v1:metadata.name)
    Mounts:
      /admin from envoy-admin (rw)
      /certs from envoycert (rw)
      /config from envoy-config (rw)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  envoy-admin:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  envoy-config:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  envoycert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  envoycert
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists
                 node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                 node.kubernetes.io/unreachable:NoExecute op=Exists
                 node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  56m   default-scheduler  0/4 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 3 node(s) didn't match Pod's node affinity/selector.
  Warning  FailedScheduling  54m   default-scheduler  0/4 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 3 node(s) didn't match Pod's node affinity/selector.
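To decode that scheduler message: the envoy pods belong to a DaemonSet, so each pod is pinned to one specific node via node affinity. The "3 node(s) didn't match Pod's node affinity/selector" part just refers to the other three nodes; the real blocker is that the pod's target node has no free host ports (the envoy container requests host ports 80 and 443). A quick way to list every pod that binds host ports, as a sketch assuming a POSIX awk is available:

kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.containers[*].ports[*].hostPort}{"\n"}{end}' \
  | awk -F'\t' '$3 != ""'    # keep only rows where some container sets a hostPort

Since all four envoy pods are pending here, ports 80/443 are apparently taken on all four nodes, which points at something running node-wide.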
cc @Yee @Ketan (kumare3) @Kevin Su
cc: @Yuvraj
Figured this out. There was another ingress controller installed by default in my cluster. I didn't realize it was there, and it conflicted with Flyte.
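For anyone hitting the same thing: many managed Kubernetes offerings ship an ingress controller (nginx, Traefik, HAProxy, etc.) by default, and if it binds host ports 80/443 on every node, Contour's envoy DaemonSet has nowhere to schedule. Two quick checks, as a sketch (the grep pattern is just a guess at common controller names):

kubectl get ingressclasses    # available on Kubernetes 1.19+
kubectl get daemonsets,deployments -A | grep -iE 'ingress|nginx|traefik|haproxy'

From there you can either remove the pre-installed controller or configure Flyte to use it as the ingress instead of Contour.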
ohh fantastic
We would not have been able to help anyway. Also, it's the holidays, so everything is a little slow here.