mysterious-dinner-77632
09/30/2025, 10:53 AM
ancient-wolf-19325
09/30/2025, 10:53 AM
labelClusterMap), and weights determine selection when multiple clusters are mapped to the same label. The target cluster (and its propeller) is selected based on these weights if no explicit override is provided.
- The --ecl flag (execution cluster label) can be passed at workflow execution time (e.g., via pyflyte run --ecl), and this label is respected, overriding project/domain defaults if present in the configmap.
- You can set a defaultExecutionLabel in the configmap, which is used if no ECL is specified at execution time and no project/domain override exists.
- You can use flytectl update execution-cluster-label to set a default ECL for each project and domain, which will be respected unless overridden at execution time.
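As a sketch of how these pieces could fit together in the FlyteAdmin configmap (cluster names, labels, and weights below are placeholders; check the exact layout against the clusters_config2_default_label.yaml example cited in the sources):

```yaml
clusters:
  labelClusterMap:
    gpu-label:
      - id: cluster-1   # placeholder cluster id
        weight: 0.7
      - id: cluster-2
        weight: 0.3
  defaultExecutionLabel: gpu-label   # used when no ECL is set at execution time
```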
All of your points are consistent with the referenced sources: RFC: Select cluster execution when triggering an execution; Allow setting a ExecutionClusterLabel when triggering a Launchplan/Workflow/Task; flytectl get execution-cluster-label; and clusters_config2_default_label.yaml.
Would you like more detail on any specific point?
Sources:
- RFC: Select cluster execution when triggering an execution
- Allow setting a ExecutionClusterLabel when triggering a Launchplan/Workflow/Task
- flytectl get execution-cluster-label
- clusters_config2_default_label.yaml
💡 Hint: Mention @runllm in the thread for followups.
mysterious-dinner-77632
09/30/2025, 10:55 AM
ancient-wolf-19325
09/30/2025, 10:55 AM
--ecl flag at execution time; there is no explicit mechanism in the referenced sources to prevent this override. The system is designed so that a per-execution ECL takes precedence over project/domain defaults set via configmap or flytectl (RFC: Select cluster execution when triggering an execution). There is no documented way to "lock" a project/domain to a specific ECL and block user overrides.
For setting project-ECL connections at the Helm chart level, there is community interest and some discussion about managing execution cluster labels via infrastructure-as-code (Helm values/configmap), but the standard method remains using flytectl update execution-cluster-label or similar dynamic API calls (Execution Cluster label creation issue). Declarative management via Helm is not fully supported out-of-the-box, though recent PRs suggest progress in this area.
Would you like more detail on possible workarounds or the current state of declarative ECL management?
Sources:
- RFC: Select cluster execution when triggering an execution
- Execution Cluster label creation issue
mysterious-dinner-77632
09/30/2025, 11:01 AM
ancient-wolf-19325
09/30/2025, 11:01 AM
flytectl commands. See Feat: Declarative management of matcheable resources and the related RFC Declarative matchable resources. This allows you to specify ECL mappings in a YAML file and have them loaded at startup, but this is not yet the default or only method.
2. Imperative API (flytectl):
The current standard is to use flytectl update execution-cluster-label --attrFile ... to set ECLs for projects/domains. This is not fully declarative and can be overwritten by subsequent API calls or config changes (flytectl update matchable_execution_cluster_label.go).
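For reference, a minimal sketch of the imperative flow described above (the project, domain, and label values are placeholders):

```shell
# Write the attrFile in the project/domain/value shape that
# flytectl update execution-cluster-label expects.
cat > ecl.yaml <<'EOF'
project: flytesnacks
domain: development
value: my-cluster-label
EOF

# Then, against a configured FlyteAdmin (not run here):
# flytectl update execution-cluster-label --attrFile ecl.yaml
```

Note that because this is an imperative API call, a later call with a different attrFile silently replaces the label.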
Key Discussions and Issues:
- Feat: Declarative management of matcheable resources
- RFC: Declarative matchable resources
- Execution Cluster label creation issue
- Allow flytectl to set a targetExecutionCluster
- Add multi cluster set up to deployment section
- Add support for multi cluster setup in helm
Would you like a step-by-step example of how to use the declarative config approach?
Sources:
- Feat: Declarative management of matcheable resources
- Execution Cluster label creation issue
- flytectl update matchable_execution_cluster_label.go
- Allow flytectl to set a targetExecutionCluster
- Add multi cluster set up to deployment section
- Add support for multi cluster setup in helm
mysterious-dinner-77632
09/30/2025, 11:21 AM
ancient-wolf-19325
09/30/2025, 11:21 AM
flytectl or the Admin API.
To restrict users from setting or updating ECLs via flytectl, you must control access to the FlyteAdmin API. This is typically done by limiting permissions at the network or authentication/authorization level, so only trusted users or service accounts can call the relevant Admin endpoints (Execution Cluster label creation issue). There is no built-in fine-grained RBAC for ECL updates in open-source Flyte; access control must be enforced externally.
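One way to enforce such external access control at the network layer is a Kubernetes NetworkPolicy. The sketch below assumes FlyteAdmin runs in a flyte namespace with the pod labels and gRPC port shown; all names, labels, and the port number are placeholders to adapt to your deployment:

```yaml
# Illustrative only: allow FlyteAdmin ingress solely from pods that carry an
# opt-in label, blocking direct flytectl access from everything else.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-flyteadmin
  namespace: flyte
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: flyteadmin   # placeholder selector
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              flyte-admin-access: "true"   # only trusted workloads get this label
      ports:
        - protocol: TCP
          port: 8089                       # placeholder gRPC port
```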
Would you like more detail on securing FlyteAdmin or on the current state of declarative ECL support?
Sources:
- Feat: Declarative management of matcheable resources
- Execution Cluster label creation issue
mysterious-dinner-77632
09/30/2025, 12:06 PM
ancient-wolf-19325
09/30/2025, 12:06 PM
flytectl and enforce execution cluster labels during deployment or periodically. This is a common workaround for environments lacking full declarative support. The Job would mount a config file (e.g., ecl.yaml) and run flytectl update execution-cluster-label --attrFile ecl.yaml. For periodic enforcement, use a CronJob.
Example Kubernetes Job:
apiVersion: batch/v1
kind: Job
metadata:
  name: set-flyte-ecl
spec:
  template:
    spec:
      containers:
        - name: flytectl
          image: cr.flyte.org/flyteorg/flytectl:v1.16.0
          command: ["flytectl", "update", "execution-cluster-label", "--attrFile", "/config/ecl.yaml"]
          volumeMounts:
            - name: config
              mountPath: /config
      restartPolicy: OnFailure
      volumes:
        - name: config
          configMap:
            name: flyte-ecl-config
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: flyte-ecl-config
data:
  ecl.yaml: |
    project: myproject
    domain: development
    value: myclusterlabel
Example Kubernetes CronJob:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: enforce-flyte-ecl
spec:
  schedule: "0 * * * *"  # every hour
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: flytectl
              image: cr.flyte.org/flyteorg/flytectl:v1.16.0
              command: ["flytectl", "update", "execution-cluster-label", "--attrFile", "/config/ecl.yaml"]
              volumeMounts:
                - name: config
                  mountPath: /config
          restartPolicy: OnFailure
          volumes:
            - name: config
              configMap:
                name: flyte-ecl-config
This approach ensures your ECLs are set as desired and can revert unauthorized changes. For more, see flytectl update matchable_execution_cluster_label.go.
Would you like more detail on error handling or idempotency for these jobs?
Sources:
- flytectl update matchable_execution_cluster_label.go
mysterious-dinner-77632
09/30/2025, 2:27 PM
ancient-wolf-19325
09/30/2025, 2:27 PM
cr.flyte.org/flyteorg/flytectl:v1.16.0 does not exist in the official Flyte container registries. There is no evidence in the data sources that a published flytectl image is available; flytectl is typically distributed as a binary, not a container image (flytectl README). You may need to build your own Docker image containing the flytectl binary if you want to use it in a Kubernetes Job or CronJob.
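A custom image along those lines might look like the following sketch. The base image, release version, and tarball name are assumptions; check the assets on the flytectl GitHub releases page and pin a real version before using this:

```dockerfile
# Sketch only: FLYTECTL_VERSION and the asset name are placeholders.
FROM alpine:3.19
ARG FLYTECTL_VERSION=v0.9.4
RUN apk add --no-cache curl ca-certificates \
 && curl -sL "https://github.com/flyteorg/flytectl/releases/download/${FLYTECTL_VERSION}/flytectl_Linux_x86_64.tar.gz" \
  | tar -xz -C /usr/local/bin flytectl
ENTRYPOINT ["flytectl"]
```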
Would you like instructions or a sample Dockerfile for building a custom flytectl image?
Sources:
- flytectl README
mysterious-dinner-77632
09/30/2025, 2:29 PM
ancient-wolf-19325
09/30/2025, 2:29 PM
flytectl command-line tool ready for use in Kubernetes Jobs or CronJobs. Typically, flytectl is distributed as a standalone binary for local installation, and there is no reference to a pre-built container image in the official documentation or deployment guides. If you need to use flytectl in a containerized environment, you will likely need to build your own Docker image by starting from a base image (such as alpine or ubuntu), downloading the flytectl binary from the official releases, and adding it to the image. If you have specific requirements for the image or need a sample Dockerfile, please provide more details about your environment or constraints. If this doesn't help, please reach out to the Flyte support team for further assistance.
mysterious-dinner-77632
09/30/2025, 2:54 PM
ancient-wolf-19325
09/30/2025, 2:54 PM
flytectl requires authentication with Keycloak, and the Job lacks the necessary credentials or configuration to perform the OAuth2 flow non-interactively. Since flytectl typically uses PKCE or device code flows that require browser interaction, running it in a non-interactive Job is challenging (flytectl README, Keycloak auth issues).
A common workaround is to use a confidential client (service account) in Keycloak and configure flytectl to use client credentials grant (if supported by your Flyte deployment). This involves:
- Registering a confidential client in Keycloak for automation.
- Providing the client ID and secret to the Job (as environment variables or mounted secrets).
- Configuring flytectl to use these credentials, typically by setting authType: ClientSecret in the config file and referencing the secret.
If your Flyte deployment does not support client credentials grant for flytectl, you may need to pre-populate the token cache or use a service account with the required permissions (Keycloak config discussion).
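A config sketch for the client-credentials setup described above (the endpoint, client ID, and secret path are placeholders, and the key names follow the flytectl admin config section; verify them against your flytectl version):

```yaml
admin:
  endpoint: dns:///flyte.example.com:443   # placeholder FlyteAdmin endpoint
  insecure: false
  authType: ClientSecret
  clientId: flyte-automation               # confidential client registered in Keycloak
  clientSecretLocation: /etc/flyte/secrets/client_secret  # mounted from a K8s Secret
```

In a Job, this file would typically be mounted as the flytectl config and the secret file projected from a Kubernetes Secret.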
Would you like a step-by-step example for setting up flytectl with client credentials in a Job?
Sources:
- Keycloak auth issues
mysterious-dinner-77632
09/30/2025, 2:56 PM
ancient-wolf-19325
09/30/2025, 2:56 PM
flytectl) to set or update ECLs (Execution Cluster label creation issue). Declarative management via ConfigMap or Helm values is not yet fully supported, though there are ongoing discussions and PRs to add this feature (Feat: Declarative management of matcheable resources).
Would you like more detail on the current state of declarative ECL management or possible workarounds?
Sources:
- Execution Cluster label creation issue
mysterious-dinner-77632
09/30/2025, 2:57 PM
ancient-wolf-19325
09/30/2025, 2:57 PM
Since flytectl authentication in a K8s Job is complex, your main workarounds are:
1. Pre-populate ECLs at Deployment Time:
- Use flytectl manually or in a CI/CD pipeline (outside the cluster) where authentication is easier to manage, to set the desired ECLs after deploying Flyte.
- This can be scripted as part of your infrastructure-as-code or deployment process.
2. Custom Controller or Operator:
- Build a lightweight custom Kubernetes controller or operator that watches a ConfigMap or CRD and calls the FlyteAdmin API (with a service account or pre-provisioned token) to enforce ECLs.
- This requires some development effort but allows for declarative management.
3. Manual Enforcement:
- Document the required ECLs and have an admin periodically check and enforce them using flytectl from a secure, authenticated environment.
4. Use a Sidecar or Init Container with Pre-Authenticated Token:
- If you can securely inject a pre-generated access token (with sufficient lifetime) into your Job (e.g., via a Kubernetes Secret), you can run flytectl non-interactively.
- This avoids interactive auth but requires secure token management and rotation.
5. Wait for Declarative Support:
- Track and contribute to ongoing Flyte PRs/issues for declarative ECL management (Feat: Declarative management of matcheable resources), which may soon allow you to manage ECLs via Helm or ConfigMap.
There is no built-in, fully declarative, or K8s-native way to manage ECLs as of now. All workarounds involve either external automation, custom code, or manual steps.
Would you like more detail on any of these approaches?
Sources:
- https://github.com/flyteorg/flytesnacks/blob/master/examples/productionizing/productionizing/use_secrets.py
- https://github.com/flyteorg/flyte/issues/6394
Note: I wasn't able to find highly relevant data sources, but above are a few potentially relevant links.