# flyte-deployment
[Namespaces for pods and projects] Hi, I can see that for each project, Flyte creates three different namespaces on k8s, corresponding to the three domains of the project (e.g., `flytesnacks-development`). This is normally okay. However, I’m deploying Flyte (via the `flyte-binary` Helm chart) on an on-prem k8s cluster where I only have permissions on one single namespace. Since my project needs a private Docker image, I had to configure `imagePullSecrets` for our service account, and I did. However, when I tried to execute my workflows on Flyte, they failed because they couldn’t pull the private Docker image. My uneducated guess is that because Flyte created separate namespaces for the project, the service account in each of those namespaces would need to be configured with the `imagePullSecrets` as well. And since I don’t have permission on those namespaces, I cannot do it. So my questions are: 1. Does my guess make sense? 2. Is there any way to configure Flyte to not create new namespaces per project, but to use a single namespace?
This seems to be related to this GitHub issue: https://github.com/flyteorg/flyte/issues/2790
However, the links in that issue don’t seem to point to working pages anymore.
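Regarding question 2, one lead worth checking (hedged: I haven’t verified this on the `flyte-binary` chart myself, and the exact key path may differ between chart versions) is flyteadmin’s namespace mapping template, which controls which namespace executions land in. A sketch of what that might look like in the Helm values:

```yaml
# Hypothetical flyte-binary values sketch: route every project/domain
# execution into one fixed namespace instead of "{{ project }}-{{ domain }}".
# The key path and template syntax here are assumptions; check the values
# schema of your chart version before relying on this.
configuration:
  inline:
    namespace_mapping:
      template: "my-single-namespace"
```

If this works for your deployment, it would also sidestep the need to copy the `imagePullSecrets` configuration into namespaces you can’t touch.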
Another solution for my use case seems to be using a Custom Pod Template to add the secret to every pod created by Flyte. But the documentation for it is super light here: https://docs.flyte.org/projects/cookbook/en/latest/auto/core/containerization/private_images.html and I don’t really understand what to follow. For example, one of the steps just says:
“Update FlytePropeller about the pod created in the previous step.”
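To make the custom pod template approach concrete, here is a sketch of what such a PodTemplate resource could look like (hedged: the names and namespace are placeholders, and pointing FlytePropeller at it is done via its k8s plugin configuration, whose exact key I haven’t verified for `flyte-binary`):

```yaml
# Sketch of a cluster-side PodTemplate carrying imagePullSecrets.
# All names here are placeholders. Once FlytePropeller is configured to use
# this template as its default, every task pod it creates inherits the
# secret reference without per-task changes.
apiVersion: v1
kind: PodTemplate
metadata:
  name: flyte-default-template
  namespace: my-single-namespace
template:
  spec:
    imagePullSecrets:
      - name: my-registry-secret
    containers:
      - name: default
        image: docker.io/rwgrim/docker-noop
```

The container entry is required by the PodTemplate schema but acts only as a default; the noop image is the one the Flyte docs use as a stand-in.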
Hey, I’m also new to Flyte, and I ran into similar issues. I’ll share how I deal with them. Maybe there is a way to set this globally on the Flyte server’s side, but I could not find it myself. What I did (which may not be best practice, but it works) is to use custom pod templates, as you mentioned, on all of my tasks. Here’s an example of a custom pod template from the docs: https://docs.flyte.org/projects/cookbook/en/latest/auto/integrations/kubernetes/pod/pod.html But that’s too long! The decorator is 49 lines for their minimal example. Don’t worry, I’ve got you covered. We can abstract this configuration away with a helper function, and to save you some time I’ll share mine:
from typing import List
from flytekitplugins.pod import Pod
from kubernetes.client.models import (
    V1ConfigMapEnvSource,
    V1Container,
    V1EnvFromSource,
    V1EnvVar,
    V1PodSpec,
    V1ResourceRequirements,
    V1SecretEnvSource,
    V1Toleration,
)

def k8s_env(env_dict) -> List[V1EnvVar]:
    return [V1EnvVar(name=k, value=v) for k, v in env_dict.items()]

def k8s_env_from_configmap(cms) -> List[V1EnvFromSource]:
    return [V1EnvFromSource(config_map_ref=V1ConfigMapEnvSource(name=cm)) for cm in cms]

def k8s_env_from_secret(secrets) -> List[V1EnvFromSource]:
    return [V1EnvFromSource(secret_ref=V1SecretEnvSource(name=secret)) for secret in secrets]

def task_config(image: str, **kwargs):
    primary_container_name = kwargs.get("primary_container_name", "primary")
    env = kwargs.get("env", {})
    env_configmaps = kwargs.get("env_configmaps", [])
    env_secrets = kwargs.get("env_secrets", [])
    node_pool = kwargs.get("node_pool", "default-pool")
    cpu_request = kwargs.get("cpu", kwargs.get("cpu_request", "1"))
    cpu_limit = kwargs.get("cpu", kwargs.get("cpu_limit", "1"))
    memory_request = kwargs.get("memory", kwargs.get("memory_request", "1Gi"))
    memory_limit = kwargs.get("memory", kwargs.get("memory_limit", "1Gi"))
    gpu = int(kwargs.get("gpu", 0))

    env_from_sources = k8s_env_from_configmap(env_configmaps) + k8s_env_from_secret(env_secrets)

    resource_requests = {"cpu": cpu_request, "memory": memory_request}
    resource_limits = {"cpu": cpu_limit, "memory": memory_limit}

    if gpu > 0:
        resource_requests["nvidia.com/gpu"] = gpu
        resource_limits["nvidia.com/gpu"] = gpu

    pod = Pod(pod_spec=V1PodSpec(
        containers=[V1Container(
            name=primary_container_name,
            env=k8s_env(env),
            env_from=env_from_sources,
            resources=V1ResourceRequirements(
                requests=resource_requests,
                limits=resource_limits,
            ),
        )],
        node_selector={"cloud.google.com/gke-nodepool": node_pool},
        tolerations=[
            V1Toleration(
                key="nodepool",
                operator="Equal",
                value=node_pool,
                effect="NoSchedule",
            ),
            V1Toleration(
                key='nvidia.com/gpu',
                operator='Exists',
                effect='NoSchedule',
            ),
        ],
    ))

    return {'task_config': pod, 'container_image': image}
As you can see, this contains some custom logic for my use case, so do not use it as-is. For example, I assume we’re running on GKE and that there is a taint and a label for each GKE node pool. But this example demonstrates how you can configure anything the Kubernetes Python client supports for your pod spec (and, by extension, the container spec), and you can adjust it to specify your `image_pull_secrets` according to the V1PodSpec: https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1PodSpec.md It takes a list of V1LocalObjectReference, so add the import for it, configure it with the name of your image pull secret, and it will work. Set your pod spec to include your desired image pull secret by default, and use it to configure resources, env vars, etc. Now, to actually use this helper, I do this in the decorator:
@task(cache=True, cache_version='1.0', **task_config(
    node_pool='cpu-big-pool',
    image='yourcustomcontainerimageregistry/image:v1',
    env={'APT_DEPENDENCIES': 'git-lfs'},
    env_secrets=['github-secret'],
    cpu='2', memory='8000Mi'))
def myexampletask():
    pass
`task_config` passes two keyword args to the task decorator: `task_config` (the pod spec) and `container_image`. With this helper function, you can write each configuration as a one-liner instead of messing with a verbose pod spec directly in the decorator. Keep in mind that if you use a pod spec, use it for everything: the task decorator supports configuring resource requests/limits directly, but that does not play well with a custom pod spec, since the pod spec has its own resource configuration that conflicts with it.