# flyte-support
h
@shy-holiday-15500 / @nice-zebra-99977, just following up on the resource overriding issue we talked about earlier today. If I remember correctly, you were seeing a discrepancy between setting task resource limits using the @task decorator vs setting default values via
flytectl update task-resource-attribute
and the task resource defaults defined in the helm chart values file. Just so we can dive deeper on this, can you tell us a bit about what you tried? We're probably missing some documentation around this area.
thx 1
cc: @icy-agent-73298, @thankful-minister-83577
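(For context, "setting task resource limits using the @task decorator" means something like the sketch below; a minimal flytekit example with illustrative names and values, not the actual task from this thread:)
from flytekit import task, Resources

# Per-task resources set in the decorator; these are what end up in the
# registered task closure and, normally, on the pod spec.
@task(requests=Resources(cpu="2", mem="25Gi"), limits=Resources(cpu="2", mem="25Gi"))
def get_data() -> int:
    return 1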
n
I will take a look tomorrow and respond; I'm heads down at the moment.
A question I forgot to ask this morning: is there a way to better name pods?
s
Will do a quick writeup of what we saw, without a lot of screenshots.
We're on 1.1.0
We are using values_eks.yaml, plus some custom overrides (but we aren't using the custom overrides to modify task resources etc.; it's for external-dns-related annotations and a couple of other web-oriented things, along with using the inject-finalizer setting on the k8s plugin)
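(Side note for anyone reading later: the platform-wide task resource defaults mentioned above live in the helm values, roughly in the shape below. The key path is from memory and may differ between chart versions, so treat this as a sketch rather than the real values_eks.yaml:)
configmap:
  task_resource_defaults:
    task_resources:
      defaults:
        cpu: 100m
        memory: 500Mi
      limits:
        cpu: 2
        memory: 20Gi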
Here is what we saw before:
We had 25Gi memory set in the task decorator requests
In the flyte console sidebar when clicking on a finished/failed node, we could see that the "Task" tab said 25Gi request
But then when we went into K9s and looked at the running pod, it had a 20Gi request. We double-checked the versions of the workflows registered, etc. (i.e., to verify we weren't just running an older version that had a 20Gi request)
And the task resource attributes limit for mem was set to 20Gi
When we used flytectl to increase it, this went away
We'll try to reproduce this another day this week to double check we didn't just make a goofy/simple mistake
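(For the record, the check-and-raise we did was roughly the following; project, domain, and numbers here are illustrative, and the exact attribute-file layout is in the flytectl docs:)
# See what is currently set for the project/domain
flytectl get task-resource-attribute -p flytesnacks -d development

# tra.yaml -- raise the memory ceiling above what the task requests
domain: development
project: flytesnacks
defaults:
  cpu: "1"
  memory: 5Gi
limits:
  cpu: "2"
  memory: 30Gi

flytectl update task-resource-attribute --attrFile tra.yaml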
i
Can you try fetching the data on the registered task using flytectl, e.g.:
flytectl get task -p <project> -d <domain> <name-of-task> -o yaml
which would dump the closure along with memory requests and limits configured for it.
- closure:
    compiledTask:
      template:
        container:
          args:
          - pyflyte-execute
          - --inputs
          - '{{.input}}'
          - --output-prefix
          - '{{.outputPrefix}}'
          - --raw-output-data-prefix
          - '{{.rawOutputDataPrefix}}'
          - --checkpoint-path
          - '{{.checkpointOutputPrefix}}'
          - --prev-checkpoint
          - '{{.prevCheckpointPrefix}}'
          - --resolver
          - flytekit.core.python_auto_container.default_task_resolver
          - --
          - task-module
          - workflows.example_10_flyte_decks
          - task-name
          - get_data_annotated
          image: ghcr.io/flyteorg/flyte-conference-talks:scipy-2022-v1
          resources:
            limits:
            - name: MEMORY
              value: 1Gi
            requests:
            - name: MEMORY
              value: 1Gi
        id:
          domain: development
          name: workflows.example_10_flyte_decks.get_data_annotated
          project: flytesnacks
          resourceType: TASK
          version: v1
        interface:
Using
flytectl update task-resource-attribute
allows you to set the configuration for all tasks/workflows running in the project/domain. It also allows you to set this at the workflow level: https://docs.flyte.org/projects/flytectl/en/latest/gen/flytectl_update_task-resource-attribute.html
👀 1
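(If I'm reading the linked docs right, a workflow-scoped override is just the same attribute file with a workflow field added; the workflow name and values below are illustrative:)
domain: development
project: flytesnacks
workflow: workflows.example_10_flyte_decks.wf
defaults:
  cpu: "1"
  memory: 5Gi
limits:
  cpu: "2"
  memory: 30Gi

flytectl update task-resource-attribute --attrFile tra.yaml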
s
Will try that if we experience this again. We made some other changes to get this workflow running and are having a hard time recreating the issue now.
We will try to recreate the issue in an isolated environment, and I'll try dumping the task using flytectl.