# flyte-support
s
Hi team, we are migrating from the flyte-binary deployment on EKS to a flyte-core deployment (single cluster). We added the Helm chart for core and the deployment is up; we can even see the execution data from the binary deployment. However, we're hitting an issue when running workflows with tasks that have secret requests/mounts: the node is stuck in Queued status. Below are the propeller logs, and attached is the chart we used. Has anyone faced this issue, or can you point us where to look? We are using Flyte version 1.14.1. cc @square-tomato-39478 @glamorous-rainbow-77959
{"json":{"exec_id":"a7flf985rbqfg68jg2qd","node":"n0","ns":"flyte-pai-development","res_ver":"9167407","routine":"worker-22","wf":"flyte-pai:development:flyte_poc_pipeline_workflows.aws_secrets.secret_access_wf"},"level":"error","msg":"failed Execute for node. Error: failed at Node[n0]. RuntimeExecutionError: failed during plugin execution, caused by: failed to execute handle for plugin [container]: [Invalid] failed to create resource, caused by: Pod \"a7flf985rbqfg68jg2qd-n0-0\" is invalid: [spec.volumes[2].name: Invalid value: \"mfzg3otbo4ztu32fmnzgk4dtnvqw3ylhmvzduylqfvzw54lunawtcorrgyztemzzha1tqmbsge4hgzldojsxios6\": must be no more than 63 characters, spec.containers[0].volumeMounts[1].name: Not found: \"mfzg3otbo4ztu32fmnzgk4dtnvqw3ylhmvzduylqfvzw54lunawtcorrgyztemzzha1tqmbsge4hgzldojsxios6\"]","ts":"2025-01-03T10:49:38Z"}
{"json":{"exec_id":"a7flf985rbqfg68jg2qd","ns":"flyte-pai-development","res_ver":"9167407","routine":"worker-22","wf":"flyte-pai:development:flyte_poc_pipeline_workflows.aws_secrets.secret_access_wf"},"level":"warning","msg":"Error in handling running workflow [failed at Node[n0]. RuntimeExecutionError: failed during plugin execution, caused by: failed to execute handle for plugin [container]: [Invalid] failed to create resource, caused by: Pod \"a7flf985rbqfg68jg2qd-n0-0\" is invalid: [spec.volumes[2].name: Invalid value: \"mfzg3otbo4ztu32fmnzgk4dtnvqw3ylhmvzduylqfvzw54lunawtcorrgyztemzzha1tqmbsge4hgzldojsxios6\": must be no more than 63 characters, spec.containers[0].volumeMounts[1].name: Not found: \"mfzg3otbo4ztu32fmnzgk4dtnvqw3ylhmvzduylqfvzw54lunawtcorrgyztemzzha1tqmbsge4hgzldojsxios6\"]]","ts":"2025-01-03T10:49:38Z"}
{"json":{"exec_id":"a7flf985rbqfg68jg2qd","ns":"flyte-pai-development","res_ver":"9167407","routine":"worker-22","wf":"flyte-pai:development:flyte_poc_pipeline_workflows.aws_secrets.secret_access_wf"},"level":"error","msg":"Error when trying to reconcile workflow. Error [failed at Node[n0]. RuntimeExecutionError: failed during plugin execution, caused by: failed to execute handle for plugin [container]: [Invalid] failed to create resource, caused by: Pod \"a7flf985rbqfg68jg2qd-n0-0\" is invalid: [spec.volumes[2].name: Invalid value: \"mfzg3otbo4ztu32fmnzgk4dtnvqw3ylhmvzduylqfvzw54lunawtcorrgyztemzzha1tqmbsge4hgzldojsxios6\": must be no more than 63 characters, spec.containers[0].volumeMounts[1].name: Not found: \"mfzg3otbo4ztu32fmnzgk4dtnvqw3ylhmvzduylqfvzw54lunawtcorrgyztemzzha1tqmbsge4hgzldojsxios6\"]]. Error Type[*errors.NodeErrorWithCause]","ts":"2025-01-03T10:49:38Z"}
(attached: the values file used for the core deployment)
a
@square-carpet-13590 can you share how you're requesting secrets?
s
@average-finland-92144 with the task decorator, like below. The same was working with the binary deployment:
import flytekit
from flytekit import Resources, Secret, task

# SECRET_GROUP and SECRET_NAME are defined elsewhere in our module
@task(
    requests=Resources(cpu="100m", mem="512Mi"),
    secret_requests=[Secret(group=SECRET_GROUP, key=SECRET_NAME)],
)
def secret_access() -> str:
    secret_string = flytekit.current_context().secrets.get(SECRET_GROUP, SECRET_NAME)
    print(secret_string)
    return secret_string
a
Well, the way the task pod is configured shouldn't change with the Helm chart you use to deploy Flyte (logic defined here). I see it's complaining about the volume name hitting the K8s 63-character limit, which is odd.
s
@average-finland-92144 if you try to decode the string, it says invalid base32: mfzg3otbo4ztu32fmnzgk4dtnvqw3ylhmvzduylqfvzw54lunawtcorrgyztemzzha1tqmbsge4hgzldojsxios6
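For context on the 63-character error: Kubernetes volume names must be valid DNS-1123 labels (at most 63 characters). A rough sketch of why a base32-encoded secret identifier blows past that limit, using a made-up AWS Secrets Manager ARN (the exact encoding scheme the Flyte webhook applies may differ from this plain base32 sketch):

```python
import base64

# Hypothetical secret identifier for illustration (not the one from the logs above)
secret_id = "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-app/prod/db-password"

# Sketch of a base32-derived volume name: lowercased, padding stripped.
# Base32 inflates the input by a factor of 8/5, so long identifiers
# easily exceed the 63-character DNS-1123 label limit.
volume_name = base64.b32encode(secret_id.encode()).decode().lower().rstrip("=")

print(len(volume_name))        # well over 63
print(len(volume_name) <= 63)  # False: Kubernetes would reject this volume name
```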
@average-finland-92144 Found the issue: the webhook's secretManagerType was pointed at K8s while we were trying to fetch the secrets from AWS.
Thanks for the help @average-finland-92144
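For anyone hitting the same thing, the fix was switching the propeller webhook's secret manager from the default K8s store to AWS. In a flyte-core values file that would look roughly like the fragment below; the exact key path can vary by chart version, so treat this as an illustration rather than the authoritative config:

```yaml
# Illustrative flyte-core values fragment (key path may differ per chart version):
# point the propeller webhook at AWS Secrets Manager instead of the K8s default.
configmap:
  core:
    webhook:
      secretManagerType: AWS
```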
a
Oh, thanks for confirming. I wasn't able to reproduce it, but with your explanation it makes sense. Let us know if you have any other questions.