# flyte-support
s
Hi team, is there any documentation related to configuring a flyte-core helm deployment using a custom MinIO S3 bucket, without IAM configuration? There are helm chart parameters to pass in accessKey and secretKey, but we want to avoid baking long-term credentials into our source code. I checked all the pages under https://docs-legacy.flyte.org/en/latest/deployment/deployment/index.html and https://www.union.ai/docs/v1/flyte/deployment/flyte-deployment/. I also checked the example flyte-core chart and read its README.md, but I haven't seen any alternatives to the accessKey / secretKey fields.
f
How should the secret be provided in case it's not IAM or an access key / secret key?
a
@strong-soccer-41351 If I understand correctly, you don't want your secret to be hardcoded in your values file, but fetched from a vault like HashiCorp Vault or a secret manager. There are multiple ways to do it. Since I don't know your setup, I will paste a generic answer below:
Copy code
Approach

1. Helmfile + envsubst or SOPS + Vault plugin
   Secrets are pulled before rendering Helm values
   Use case: CI/CD pipelines or local development
   Pros: Simple, no extra components.
   Cons: Secrets exist in CI logs or files unless handled carefully.

2. Vault Agent template
   A sidecar or pre-rendered file writes secrets to a local file mounted into Helm
   Use case: Kubernetes-native injection
   Pros: Secrets never touch CI/CD or Git.
   Cons: Slightly more setup.

3. External Secrets Operator (ESO)
   Secrets synced automatically into Kubernetes Secrets, which Helm can reference (sketch at the end of this message)
   Use case: Recommended for production
   Pros: Best for production clusters.
   Cons: Requires installing the operator.
Which one you choose depends on your environment and secret manager.
I will give an example of Approach 1, as it's the simplest one:
Copy code
# values-template.yaml
# envsubst replaces ${VAR} references with environment variables
s3:
  accessKey: "${ACCESS_KEY}"
  secretKey: "${SECRET_KEY}"
Get the values from Vault and render the template:
Copy code
export ACCESS_KEY=$(vault kv get -field=accessKey secret/s3-accessKey)
export SECRET_KEY=$(vault kv get -field=secretKey secret/s3-secretKey)
envsubst < values-template.yaml > values.yaml
After that you can run helm install with the rendered values.yaml.
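For Approach 3, a minimal sketch of an ExternalSecret, assuming ESO is installed and a ClusterSecretStore named vault-backend already points at your Vault (names and paths below are placeholders):
Copy code
# external-secret.yaml -- ESO syncs the Vault values into a k8s Secret
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: s3-credentials
  namespace: flyte
spec:
  refreshInterval: 1h            # re-sync regularly so key rotation propagates
  secretStoreRef:
    name: vault-backend          # assumed ClusterSecretStore
    kind: ClusterSecretStore
  target:
    name: s3-credentials         # the k8s Secret ESO creates and keeps updated
  data:
    - secretKey: AWS_ACCESS_KEY_ID
      remoteRef:
        key: s3                  # assumed Vault KV path
        property: accessKey
    - secretKey: AWS_SECRET_ACCESS_KEY
      remoteRef:
        key: s3
        property: secretKey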
s
@acceptable-knife-37130 is there no way to just store s3 key / access key in k8s secret and reference the secret name? would seem like another simple / plausible setup
a
You can, but for production it is not a good approach. You should be rotating the keys at regular intervals. Keeping them in a k8s secret works, but then you have to either sync it or change it manually: each time you rotate, the secret has to be updated based on the values present in the vault or secret manager. If you do not have a vault or a secret manager, then yes, you can follow that approach, but be careful: anyone with Kubernetes admin access can see the secret, as it's only base64 encoded, not encrypted. Ideally the devops team should not have access to any secrets that don't concern them.
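To illustrate that last point, a minimal sketch of a Role that scopes secret reads to a single named secret (the namespace and secret name are placeholders):
Copy code
# Least-privilege access: only the named secret, and only via get
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-s3-credentials
  namespace: flyte
rules:
  - apiGroups: [""]                      # core API group contains Secrets
    resources: ["secrets"]
    resourceNames: ["s3-credentials"]
    verbs: ["get"]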
s
I think that's fine for us. We try to protect our cluster as much as possible and yeah, I agree with those principles. We do use Vault, but we have a plugin that renders the Vault key path into a k8s secret and automatically updates it, which makes rotating keys manageable. We just need a way to reference the name of the k8s secret in this flyte-core helm chart and then have Flyte automatically pull the secret value.
So I guess we need step 3, just without ESO.
a
Yes, getting those secret values during helm install would require additional steps. Helm cannot directly read Kubernetes secrets that are created ahead of time. You have to do something like below:
Copy code
ACCESS_KEY=$(kubectl get secret aws-s3-credentials -o jsonpath='{.data.accessKey}' | base64 --decode)
SECRET_KEY=$(kubectl get secret aws-s3-credentials -o jsonpath='{.data.secretKey}' | base64 --decode)

# Quote the values in case they contain shell-special characters
helm upgrade myapp ./chart \
  --set s3.accessKey="$ACCESS_KEY" \
  --set s3.secretKey="$SECRET_KEY"
Run it by your security team before implementing this approach.
s
I meant a bit more like referencing the name of the Kubernetes secret in the helm chart, like having this manifest deployed in your namespace:
Copy code
apiVersion: v1
kind: Secret
metadata:
  name: my-secret-name
  namespace: default
data:
  AWS_ACCESS_KEY_ID: <secret>
  AWS_SECRET_ACCESS_KEY: <secret>
and then using the value my-secret-name in the flyte-core helm chart. helm upgrade wouldn't work for us because we deploy everything through ArgoCD (GitOps).
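(Side note on that manifest: the data field expects base64-encoded values. If you'd rather keep plain values in a template, stringData is the standard alternative; the API server encodes it into data on admission:)
Copy code
apiVersion: v1
kind: Secret
metadata:
  name: my-secret-name
  namespace: default
stringData:                      # plain values, encoded by the API server
  AWS_ACCESS_KEY_ID: <secret>
  AWS_SECRET_ACCESS_KEY: <secret>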
a
The question is what you will put in the values file for access_key and secret_key. So basically you have to do something like below:
Copy code
# values.yaml
s3:
  secretName: my-secret-name
  accessKeyRef: AWS_ACCESS_KEY_ID
  secretKeyRef: AWS_SECRET_ACCESS_KEY

# deployment.yaml
env:
  - name: AWS_ACCESS_KEY_ID
    valueFrom:
      secretKeyRef:
        name: {{ .Values.s3.secretName }}
        key: {{ .Values.s3.accessKeyRef }}
  - name: AWS_SECRET_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: {{ .Values.s3.secretName }}
        key: {{ .Values.s3.secretKeyRef }}
It has to be tested
s
Does this value exist? I don't see it in our flyte-core and I also don't see it online... checking here: https://artifacthub.io/packages/helm/flyte/flyte-core
c
You can definitely avoid baking the credentials into git. There is an option to pass the whole storage config as a secret.
Copy code
flyte-core:
  storage:
    secretName: metadata-storage-config # pragma: allowlist secret
I added it to Flyte in this PR: https://github.com/flyteorg/flyte/pull/6419
From the readme:
Copy code
Optionally load the storage configuration from a secret so that sensitive values aren't declared in the values file.
We build a secret using templating in an external secrets resource.
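As a sketch of how that can look (the store name, Vault path, and the layout inside storage.yaml are placeholders here; the exact keys the chart expects are described in the PR and readme), an ExternalSecret can template the storage config into that secret:
Copy code
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: metadata-storage-config
  namespace: flyte
spec:
  secretStoreRef:
    name: vault-backend          # assumed ClusterSecretStore
    kind: ClusterSecretStore
  target:
    name: metadata-storage-config
    template:
      data:
        # hypothetical key and layout -- check the PR readme for
        # what flyte-core actually expects in this secret
        storage.yaml: |
          storage:
            stow:
              config:
                access_key_id: "{{ .accessKey }}"
                secret_key: "{{ .secretKey }}"
  data:
    - secretKey: accessKey
      remoteRef:
        key: s3                  # assumed Vault KV path
        property: accessKey
    - secretKey: secretKey
      remoteRef:
        key: s3
        property: secretKey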
s
looks like our Flyte template files don't have that support yet. Can we just take the latest flyte-core files and use our old values, including this storage.secretName one? do you know? :)
c
You're saying you're on an old version of the helm chart? You can definitely try to take the diff from the PR and I think it would work. You can ignore the stuff under deployment or docker.