# announcements
Bernhard Stadlbauer
Hi! We are currently trying to add automatic tracing through datadog to the pods started through Flyte. To do so, we would need to set a few environment variables and mount a volume. While we can set the environment variables in the config, there is no way to add volumes here. Is there any other way that we're missing? The only other thing I could think of is switching all of our tasks to pod tasks, where we would have control over the whole spec, but that is something we would like to avoid if possible. Also happy to contribute here if this is something you'd see a need for.
Dan Rammer (hamersaw)
Hey @Bernhard Stadlbauer! I think you should be able to use the new(ish) default PodTemplate integration for this. Essentially, the one-off configurations in the flytek8s configuration you linked get applied to the PodSpec whenever we create a Pod in Flyte. That configuration now includes a DefaultPodTemplateName option. Basically, FlytePropeller looks for a PodTemplate in the namespace where the Pod is being created (e.g. flytesnacks-development for that project-domain by default); if that doesn't exist, it then looks for a PodTemplate in the namespace that FlytePropeller is deployed in (typically flyte) and uses that PodSpec as the base for configuring Pods. So PodTemplate values will be overridden by the flytek8s configuration options, which in turn will be overridden by specific task configuration.
Unfortunately we don't have this documented well right now, but there is an open issue which may provide a little more insight here.
With this we're trying to move away from adding a new configuration option to FlytePlugins for each Pod field we want to control, and instead let users control everything using the k8s API.
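To make the lookup order concrete, here's a rough sketch; the helper and the in-memory store here are made up for illustration, not the actual FlytePropeller code:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// defaultPodTemplate illustrates the fallback: prefer a PodTemplate in the
// Pod's target namespace, otherwise fall back to the one in the namespace
// FlytePropeller is deployed in. "templates" stands in for whatever cache
// FlytePropeller keeps PodTemplates in.
func defaultPodTemplate(templates map[string]*corev1.PodTemplate, podNamespace, propellerNamespace string) *corev1.PodTemplate {
	if tmpl, ok := templates[podNamespace]; ok {
		return tmpl // e.g. the project-domain namespace such as flytesnacks-development
	}
	if tmpl, ok := templates[propellerNamespace]; ok {
		return tmpl // typically the "flyte" namespace
	}
	// no default template; only the flytek8s configuration shapes the PodSpec
	return nil
}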
Bernhard Stadlbauer
Hi @Dan Rammer (hamersaw)! Thanks for the note on the pod-template, that seems like a really nice addition 👍 We tried it out and it worked out of the box. The only small issue we're facing is that we would like to mount a volume into our main container, e.g. along the lines of:
apiVersion: v1
kind: PodTemplate
metadata:
  name: flyte-default-template
  namespace: {{ .Release.Namespace }}
template:
  metadata:
  spec:
    volumes:
      - hostPath:
          path: /var/run/datadog/
        name: apmsocketpath
    containers:
      - name: noop
        image: docker.io/rwgrim/docker-noop
        volumeMounts:
          - name: apmsocketpath
            mountPath: /var/run/datadog
but as the noop container gets completely overwritten (if I'm not mistaken, the related code is here), this won't work. Are we missing something here? One thing we could think of is to merge container specs instead of overwriting them?
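Just to illustrate what we mean by merging (a hypothetical helper, not existing Flyte code): the task's container would stay the base, and only additive fields like volume mounts and env vars would be folded in from the template container:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// mergeContainer keeps the task's container as the base (image, args and
// resources win) and appends the additive fields from the matching
// PodTemplate container, such as volume mounts and env vars.
func mergeContainer(task corev1.Container, template corev1.Container) corev1.Container {
	merged := task
	merged.VolumeMounts = append(merged.VolumeMounts, template.VolumeMounts...)
	merged.Env = append(merged.Env, template.Env...)
	return merged
}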
Dan Rammer (hamersaw)
Oh OK, this totally makes sense. Have you thought about what this might look like for pods with multiple containers? There are certainly instances where we launch some kind of sidecar - are we applying the container spec to all containers? Or matching on a container name (I think the primary container name is hardcoded)?
Bernhard Stadlbauer
I think while we could imagine use cases where you'd like to inject the volume into all containers within the pod (e.g. to get traces of all of them), for this particular use case it might be enough to only match the primary container.
Dan Rammer (hamersaw)
Perfect. So it looks like when we use the Pod plugin we set an annotation for the primary container name here, so we know the primary container for pod tasks. However, when we use a plain container task the container name is set to the task name here. I think we either (1) set the annotation for the primary container name on container tasks or (2) use a default container name for container tasks so that we have a deterministic name. Then we can easily merge the primary container with the container spec in the PodTemplate as you suggested. Does this sound reasonable to you? I'm open to other ideas as well. Would you be interested in contributing this? I'm happy to help.
Ailin Yu
Hello! I’m working with Bernhard on this, we’re interested in contributing to this change, but I first want to see if we’re understanding things right. For option (1) that you proposed, since the primary container name would still be dependent on the task name and the container name specified in the PodTemplate is static, it seems like we still wouldn’t be able to match the containers by name in the way we want? It seems like we would have to go with option (2), by setting the containerName here to some constant, ensuring the name in our PodTemplate matches that constant, and then finding the container in basePodSpec here that matches the name and merging on that. (Let me know if that makes sense! I’m still just getting up to speed with Flyte 🙇 )
Dan Rammer (hamersaw)
Hey @Ailin Yu! What I was thinking with option 1 is to set a Pod annotation similar to the Pod plugin. So here we set an annotation on the pod so that "primary_container_name" points to the primary container name. This is currently used to determine the pod status, because secondary containers may run forever but the pod is completed when the primary container completes. I was thinking that even though the container name in the container plugin will be dynamically set, we can include a similar pod annotation for "primary_container_name" which points to the single container. This could be done right around where you linked setting the container name. Then we can use the annotation as an "alias" of sorts to get the primary container. This opens us up to allowing for multiple configurable containers in the default PodTemplate and just mapping them. I'd be happy to meet up and discuss this if it is not clear, sometimes talking in person is much easier.
cc @Haytham Abuelfutuh RE: extend the default PodTemplate for pod configuration to allow individual container configuration. We're trying to figure out the best route to map from the PodTemplate containers to Pods from the k8s plugin.
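As a rough sketch of option (1) (names are illustrative and simplified, not the exact flyteplugins code): the pod carries an annotation pointing at its primary container, just like the pod plugin does today, and the merge logic resolves it back to a container:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// primaryContainerKey mirrors the "primary_container_name" annotation the pod
// plugin already sets; the exact constant lives in flyteplugins.
const primaryContainerKey = "primary_container_name"

// markPrimaryContainer records which container in the pod is the primary one.
func markPrimaryContainer(pod *corev1.Pod, containerName string) {
	if pod.Annotations == nil {
		pod.Annotations = map[string]string{}
	}
	pod.Annotations[primaryContainerKey] = containerName
}

// primaryContainer resolves the annotation back to the container in the spec,
// i.e. the container a PodTemplate's primary container config would be merged into.
func primaryContainer(pod *corev1.Pod) *corev1.Container {
	name := pod.Annotations[primaryContainerKey]
	for i := range pod.Spec.Containers {
		if pod.Spec.Containers[i].Name == name {
			return &pod.Spec.Containers[i]
		}
	}
	return nil
}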
Ailin Yu
Thanks for the detail! I think I follow… This way, in BuildPodWithSpec we would know the annotation key and can use that to get the container name that we want to merge on? I'm also not sure of the best way to add an annotation in that function, since it has the taskContainer in scope but not the pod. I see you can get annotations through TaskExecMetadata.GetAnnotations(), but that feels like maybe a bit of an incorrect way to use a getter function.
Dan Rammer (hamersaw)
Sure, I haven't looked too deeply into the specific implementation details. The gist is that this needs to work similarly for the container and pod plugins. I would support any approach that is clean and extensible enough to support specifying multiple container configurations.
I also created a GitHub issue to track this work. I'd be glad to assign you if that's OK?
Ailin Yu
Sounds good to me! Yes feel free to assign it to me, thanks for making the issue!
This is probably not the best way to go about things, but just to get rolling with something to try and test with, I thought I'd try adding an annotation to the podTemplate here:
podTemplate.Template.Annotations[PrimaryContainerKey] = taskCtx.TaskExecutionMetadata().GetTaskExecutionID().GetGeneratedName()
Since the taskCtx here is the same thing that gets passed into buildPodSpec and eventually into ToK8sContainer to set the containerName, I thought it should be the same value. Then BuildPodWithSpec can get the annotation out of the podTemplate and try to match that to a container name. I otherwise realized I was missing a couple of details on testing locally, which I posted about up in the main channel!
Dan Rammer (hamersaw)
I think you're on the right track. I was thinking we could add a function (like getPrimaryContainerName) to the podBuilder interface here. This would, of course, need to be implemented for the container (use the task name as you suggested) and sidecar (refactor the current code) plugins. Then something like below in the BuildResource function:
podSpec, err := builder.buildPodSpec(ctx, task, taskCtx)
if err != nil {
	return nil, err
}

podSpec.ServiceAccountName = flytek8s.GetServiceAccountNameFromTaskExecutionMetadata(taskCtx.TaskExecutionMetadata())

primaryContainerName, err := builder.getPrimaryContainerName(ctx, task, taskCtx)
if err != nil {
	return nil, err
}

podTemplate := flytek8s.DefaultPodTemplateStore.LoadOrDefault(taskCtx.TaskExecutionMetadata().GetNamespace())
pod, err := flytek8s.BuildPodWithSpec(podTemplate, podSpec, primaryContainerName)
if err != nil {
	return nil, err
}

pod.Annotations[PrimaryContainerName] = primaryContainerName
Then we can pass the primaryContainerName as a string to the BuildPodWithSpec function and don't have to rely on annotations, but we still ensure the annotation is set. This will take a little bit of refactoring, but it should be fairly straightforward. We can certainly meet up as well to iron out any issues in the dev environment and sort through this.
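Purely as a sketch of that interface change (the real podBuilder interface and the plugin types look different, and the signatures here are simplified):

package sketch

import "context"

// podBuilder is a simplified stand-in for the existing plugin interface, with
// the proposed getPrimaryContainerName method added.
type podBuilder interface {
	getPrimaryContainerName(ctx context.Context) (string, error)
}

// containerPodBuilder: plain container tasks name their single container after
// the generated task execution name, so that is the primary container.
type containerPodBuilder struct {
	generatedTaskName string
}

func (b containerPodBuilder) getPrimaryContainerName(ctx context.Context) (string, error) {
	return b.generatedTaskName, nil
}

// sidecarPodBuilder: pod (sidecar) tasks already know their primary container
// name, so the existing logic would just be refactored behind this method.
type sidecarPodBuilder struct {
	primaryContainerName string
}

func (b sidecarPodBuilder) getPrimaryContainerName(ctx context.Context) (string, error) {
	return b.primaryContainerName, nil
}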
Ailin Yu
Ah got it! I had thought a bit about passing the container name directly yesterday too but wasn't sure what would be better! I think I'll need to look at podBuilder a bit more to understand how it's used currently, with regards to refactoring. Otherwise I'm just wrestling with my dev setup right now to try to get a test task to try out!