# ask-ai
c
@ancient-wolf-19325 what’s the best recommended way for ml ops engineers to set up flyte in kind cluster that mimics a production flyte-core installation?
j
The recommended way for MLOps engineers to set up Flyte in a local KIND (Kubernetes IN Docker) cluster that mimics a production flyte-core installation is to use the official Flyte Helm charts, specifically the `flyte-core` chart. This approach replicates the multi-component architecture of a production deployment, with separate services for FlyteAdmin, Datacatalog, FlytePropeller, and FlyteConsole, rather than the all-in-one `flyte-binary` chart. You should configure the required dependencies (PostgreSQL, an S3-compatible object store, ingress, etc.) and override values as needed for your local environment. The Flyte documentation provides detailed steps for installing Flyte with Helm, including how to add the chart repo, download example values files, and install the chart with custom overrides. The base values files are cloud-oriented, but you can adapt them for KIND by pointing service endpoints and storage at local addresses. This setup closely mirrors a production flyte-core deployment and is suitable for development and testing before moving to cloud or on-prem environments. For more, see Installing Flyte and Planning your deployment. Would you like a step-by-step guide or example values file for KIND?

Sources: Installing Flyte, Planning your deployment
c
@ancient-wolf-19325 a step-by-step guide or example values file for KIND, please. we tried
```shell
# install minio for s3, postgres for db
helm install flyte-be-deps flyteorg/flyte-deps --namespace flyte \
  --values https://raw.githubusercontent.com/flyteorg/flyte/refs/heads/master/charts/flyte-deps/values-sandbox.yaml
# install flyte-core
helm install flyte-backend flyteorg/flyte-core --namespace flyte \
  --values https://raw.githubusercontent.com/flyteorg/flyte/refs/heads/master/charts/flyte-core/values-sandbox.yaml
```
but the different ports and endpoints are not configured correctly
j
It seems likely that, for KIND, you need to explicitly set NodePort values and update service endpoints in your values files, since KIND does not support LoadBalancer services out of the box. The default `values-sandbox.yaml` files are designed for environments like Minikube and may not map ports or endpoints as expected in KIND. You should override the service types to `NodePort` and set fixed nodePort values for MinIO, Postgres, and Contour/Envoy, then update Flyte's config to use these endpoints. For example, in your custom values file, set:
```yaml
minio:
  service:
    type: NodePort
    nodePorts:
      api: 30084

postgresql:
  primary:
    service:
      type: NodePort
      nodePorts:
        postgresql: 30089

contour:
  envoy:
    service:
      type: NodePort
      nodePorts:
        http: 30081
```
Then, in your flyte-core values, set storage and admin endpoints to match these ports (e.g., MinIO at `http://localhost:30084`, FlyteAdmin at `localhost:30081`). For more details, see values-sandbox.yaml for flyte-deps and values-sandbox.yaml for flyte-core. Would you like a full example values file and a step-by-step Helm install sequence?

Sources: flyte-core values-sandbox.yaml, values-sandbox.yaml for flyte-deps
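One KIND-specific detail worth flagging: NodePorts open on the KIND node container, not on your Mac's localhost, unless the cluster was created with `extraPortMappings`. A minimal sketch of such a cluster config, assuming the NodePort values above and a hypothetical `kind-config.yaml` filename:

```yaml
# kind-config.yaml (hypothetical filename): maps the NodePorts used above
# onto the host so localhost:<port> reaches the in-cluster services.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30081   # Contour/Envoy (Flyte console and admin)
        hostPort: 30081
      - containerPort: 30084   # MinIO API
        hostPort: 30084
      - containerPort: 30089   # PostgreSQL
        hostPort: 30089
```

Create the cluster with `kind create cluster --name flyte --config kind-config.yaml` before running the Helm installs; without these mappings, `localhost:<nodePort>` will not reach the services.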
c
@ancient-wolf-19325 a full example values file and a step-by-step Helm install sequence, please
j
It seems likely that, for KIND, you need to override the default service types to NodePort and set explicit nodePort values for MinIO, Postgres, and Contour/Envoy, then update Flyte's config to use these endpoints. Below is a minimal example values file and a step-by-step Helm install sequence adapted for KIND, based on the official flyte-core values-sandbox.yaml and flyte-deps values-sandbox.yaml.

Example: `flyte-deps-values-kind.yaml`
```yaml
minio:
  service:
    type: NodePort
    nodePorts:
      api: 30084

postgresql:
  primary:
    service:
      type: NodePort
      nodePorts:
        postgresql: 30089

contour:
  enabled: true
  envoy:
    service:
      type: NodePort
      nodePorts:
        http: 30081
```
Example: `flyte-core-values-kind.yaml`
```yaml
config:
  admin:
    endpoint: "localhost:30081"
    insecure: true
  storage:
    provider: s3
    userDataContainer: "my-flyte-bucket"
    metadataContainer: "my-flyte-bucket"
    endpoint: "http://localhost:30084"
    region: "us-east-1"
    accessKey: "minio"
    secretKey: "miniostorage"
```
Step-by-step Helm install sequence:

1. Add the Flyte Helm repo:
```shell
helm repo add flyteorg https://flyteorg.github.io/flyte
helm repo update
```
2. Create the namespace:
```shell
kubectl create namespace flyte
```
3. Install dependencies:
```shell
helm install flyte-be-deps flyteorg/flyte-deps \
  --namespace flyte \
  --values flyte-deps-values-kind.yaml
```
4. Install Flyte core:
```shell
helm install flyte-backend flyteorg/flyte-core \
  --namespace flyte \
  --values flyte-core-values-kind.yaml
```
5. Port-forward the Envoy service to access the Flyte Console:
```shell
kubectl -n flyte port-forward svc/flyte-contour-envoy 30081:80
```
6. Access the Flyte Console at http://localhost:30081/console.

You may need to adjust bucket names and credentials to match your MinIO setup. For more, see flyte-core values-sandbox.yaml and flyte-deps values-sandbox.yaml.

Sources: https://www.union.ai/docs/flyte/deployment/flyte-deployment/installing/
c
@ancient-wolf-19325 thanks! what should my flyte config file look like for the setup you just mentioned?
j
Your Flyte CLI config file (e.g., `~/.flyte/config.yaml`) should point to the FlyteAdmin gRPC endpoint and, if needed, include storage settings for MinIO. For the KIND setup above, use:
```yaml
admin:
  endpoint: dns:///localhost:30081
  insecure: true
  authType: Pkce
logger:
  show-source: true
  level: 0
storage:
  type: minio
  connection:
    endpoint: http://localhost:30084
    access-key: minio
    secret-key: miniostorage
    disable-ssl: true
```
This matches the NodePort mappings and credentials from your Helm values. For more, see the pyflyte CLI config reference. Would you like more detail on configuring authentication or storage?

Sources: pyflyte CLI config reference
c
@ancient-wolf-19325 when running this command
```shell
pyflyte --config ~/.flyte/config-sandbox.yaml run --remote \
  --image $IMAGE \
  workflows/example_intro.py training_workflow \
  --hyperparameters '{"C": 0.01}'
```
we get the following error when trying to upload the code tarball to minio, how do we resolve it?
```
ConnectionError: HTTPConnectionPool(host='minio.flyte.svc.cluster.local', port=9000): Max retries exceeded with url:
/my-s3-bucket/flytesnacks/development/7H4DQC2VP7KXVEOW6BYOOT3QXM%3D%3D%3D%3D%3D%3D/fast2cb848fb6098d24f50e6d2d64678876a.tar.gz
...
(Caused by NameResolutionError("<urllib3.connection.HTTPConnection object at 0x161b93530>: Failed to resolve
'minio.flyte.svc.cluster.local' ([Errno 8] nodename nor servname provided, or not known)"))
```
j
This error indicates that your local machine cannot resolve or connect to the MinIO service at `minio.flyte.svc.cluster.local:9000`. In KIND or other local Kubernetes setups, this DNS name is only resolvable inside the cluster, not from your host. You need to set the MinIO endpoint in your Flyte config to `http://localhost:<NodePort>` (e.g., `30084`), matching your NodePort mapping for MinIO, not the internal cluster DNS name. Update your `~/.flyte/config-sandbox.yaml` to:
```yaml
storage:
  connection:
    endpoint: http://localhost:30084
    access-key: minio
    secret-key: miniostorage
    disable-ssl: true
```
and ensure you are not using `minio.flyte.svc.cluster.local` as the endpoint. For more, see GitHub Issue #5663. Would you like more detail on troubleshooting MinIO connectivity?

Sources: GitHub Issue #5663
c
@ancient-wolf-19325 i already have this storage section in my config yaml:
```yaml
storage:
  type: minio
  connection:
    endpoint: http://localhost:30084
    access-key: minio
    secret-key: miniostorage
    disable-ssl: true
```
and it still fails when trying to upload to the signed url for the reason you said, it uses a url that only works within the kind cluster
j
This issue occurs because FlyteAdmin generates signed URLs using the internal Kubernetes DNS name (e.g., `minio.flyte.svc.cluster.local:9000`), which is only accessible from within the cluster, not from your host. Even if your local config points to `localhost:30084`, the signed URL returned by FlyteAdmin will still use the internal address, causing uploads from your host to fail. To resolve this, you must configure Flyte to generate signed URLs with an endpoint accessible from your host (i.e., `localhost:30084`). This requires overriding the `storage.signedURL.stowConfigOverride.endpoint` in your flyte-core Helm values to `http://localhost:30084`. For example:
```yaml
inline:
  storage:
    signedURL:
      stowConfigOverride:
        endpoint: http://localhost:30084
```
Apply this override in your `flyte-core-values-kind.yaml` and upgrade your Helm release. For more, see flyte-core values-sandbox.yaml and the related GitHub issue. Would you like step-by-step instructions for updating your Helm deployment?

Sources: GitHub Issue #5004
c
@ancient-wolf-19325 i think the `inline` values file section only exists in the flyte-binary and flyte-sandbox helm charts, neither of which we are using
j
You are correct: the `inline` section for direct config overrides is specific to the `flyte-binary` and `flyte-sandbox` Helm charts. In the `flyte-core` chart, you must override the storage signed URL endpoint by editing the appropriate config map or values file section directly, not via `inline`. For `flyte-core`, add or update the following in your `values.yaml` (or a custom override file) under the `configmap` or `configuration` section, depending on your chart version:
```yaml
configmap:
  storage:
    signedURL:
      stowConfigOverride:
        endpoint: http://localhost:30084
```
or, if your chart uses a `configuration` block:
```yaml
configuration:
  storage:
    signedURL:
      stowConfigOverride:
        endpoint: http://localhost:30084
```
This ensures FlyteAdmin generates signed URLs with `localhost:30084` as the endpoint, making uploads from your host possible. For more, see GitHub Issue #5004. Would you like a concrete example for your specific chart version?

Sources: GitHub Issue #5004
c
@ancient-wolf-19325 neither of those options worked. i think the value is hard-coded in _helpers.tpl line 230:
```
signedUrl:
    stowConfigOverride:
      endpoint: http://minio.{{ .Release.Namespace }}.svc.cluster.local:9000
```
is there another way we can change it? potentially via an automated command-line tool after it does both `helm install` commands?
j
You are correct: in the `flyte-core` Helm chart, the `storage.signedUrl.stowConfigOverride.endpoint` is hardcoded in `_helpers.tpl`, making it difficult to override via values files. The recommended workaround is to patch the generated ConfigMap after Helm installation using `kubectl`. After running your `helm install` commands, patch the relevant ConfigMap (typically named `flyte-propeller-config` or similar) in the `flyte` namespace. For example:
```shell
# Note: a merge patch replaces the entire storage.yaml key, so include your
# full storage config in the payload, not only the signedUrl section.
kubectl -n flyte patch configmap flyte-propeller-config \
  --type merge \
  -p '{"data":{"storage.yaml":"storage:\n  signedUrl:\n    stowConfigOverride:\n      endpoint: http://localhost:30084\n"}}'
```
You may need to adjust the ConfigMap name and YAML path depending on your deployment. After patching, restart the Flyte pods to pick up the change:
```shell
kubectl -n flyte rollout restart deployment flyte-propeller
kubectl -n flyte rollout restart deployment flyteadmin
```
For more, see GitHub Issue #5004. Would you like a script to automate this process?

Sources: GitHub Issue #5004
a
hey @cool-nest-98527 what's the status here? Happy to help
c
@ancient-wolf-19325 When deploying with Flyte Core, what is the intended way to use `ImageSpec`? Do we need to set up a Docker image registry, or is there a way to sideload Docker images for tasks without using a registry?
j
When deploying with Flyte Core, the intended way to use `ImageSpec` is to configure a Docker image registry that both your local development environment and the Flyte cluster can access. Flyte's `ImageSpec` builds and pushes images to this registry, and the cluster pulls them to run your tasks. Sideloading images directly into the cluster (e.g., with `docker load`) is not supported by `ImageSpec`; a registry is required for Flyte Core deployments (ImageSpec documentation, Flyte Getting Started). For local development (e.g., with Flyte Sandbox or demo), you can use a local registry (such as `localhost:30000` in KIND or Docker-in-Docker setups). For production, use a cloud or organization registry (ECR, GCR, GHCR, etc.). Would you like more detail on configuring a local or remote registry for Flyte?

Sources: ImageSpec documentation; Flyte Getting Started: Install Docker and get access to a container registry
c
What if we don't use ImageSpec? How do we set up sideloading images for tasks to run? @ancient-wolf-19325
j
If you do not use `ImageSpec`, you can specify a pre-built Docker image for each task by setting the `container_image` parameter in the `@task` decorator to the image's full name (e.g., `myregistry/myimage:tag`). However, Flyte still requires the image to be available in a registry accessible to the cluster; sideloading images (e.g., with `docker load` or manually loading images onto cluster nodes) is not supported for Flyte task execution (ImageSpec documentation, Flyte Getting Started). In summary, whether you use `ImageSpec` or specify images manually, Flyte Core expects all task images to be available in a Docker registry that the cluster can pull from. Would you like more detail on registry setup or image specification in Flyte?

Sources: ImageSpec documentation; Flyte Getting Started
c
hey @average-finland-92144, thanks for the offer! we’re still working through some issues, the latest being the image registry ^. we have a script so far to install `flyte-core` in `kind` on mac, but the env still has issues. something like this:
```shell
helm install flyte-deps flyteorg/flyte-deps --namespace flyte --values "$ROOT_DIR/charts/flyte/values.local-deps.yaml"
echo "---------"

rm -rf /tmp/flyte-core/
# Download the flyte-core chart to a temporary directory
helm pull flyteorg/flyte-core --untar --untardir /tmp/
# replace stowConfigOverride endpoint, it will be on two lines
sed -i.bak '
/^[[:space:]]*stowConfigOverride:/ {
    n
    s|endpoint: http://minio.{{ .Release.Namespace }}.svc.cluster.local:9000|endpoint: http://localhost:9000|
}
' /tmp/flyte-core/templates/_helpers.tpl
# finally do the install
helm install flyte-core /tmp/flyte-core --namespace flyte --values "$ROOT_DIR/charts/flyte/values.local-core.yaml"
```
but it kind of feels like we’re swimming upstream here. are we doing something wrong? or no one in the community sets up a full flyte-core install on local k8s via kind? or they do but they use minikube? thanks!
a
@cool-nest-98527 in this tutorial we use `microk8s` as the distro, running in a VM on a Mac, to deploy flyte-core or flyte-binary
c
thanks! does no one try to replicate a real production install on their local kind/microk8s? the typical best practice is to have environments that are as similar to production as possible, but is that not common with Flyte?
a
I think multiple users have done something like this in the past, probably at a different layer: i.e., using the Flyte sandbox on the laptop to learn/test the SDK and then moving to installing in production. Part of the process depends on the particular infrastructure you use, though.