# ask-the-community

Eduardo Matus

09/05/2023, 1:55 AM
Hi Everyone… how do resource limits work? I have this config in flyte.yaml:
```yaml
task_resource_defaults:
  task_resources:
    defaults:
      storage: 100Mi
      ephemeral-storage: 1000Mi
    requests:
      cpu: 1
      memory: 100Mi
    limits:
      cpu: 4
      memory: 64Gi
      storage: 2000Mi
      ephemeral-storage: 2000Mi
```
but if I don't specify the limit at the task level using the `@task` decorator, the task fails with OOMKilled. any ideas?

Yee

09/05/2023, 2:11 AM
set the requests higher

Hank Huang

09/05/2023, 2:11 AM
if you just want to run a workflow, you can reference this PR to limit your resources.

Yee

09/05/2023, 2:12 AM
the limits are not the actual limits that are used to spin up the pod. they’re limits on how high users can set their requests and limits.
we’re working on cleaning up the story
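To make that concrete, here is a minimal sketch (task name and resource values are hypothetical) of setting requests and limits directly on a task; these are what the pod actually gets, and they just have to stay at or below the platform-level limits from the config above:
```python
from flytekit import Resources, task

# Hypothetical example: requests/limits set on the decorator are applied to the
# task's pod, and must not exceed the platform limits (cpu: 4, memory: 64Gi).
@task(requests=Resources(cpu="1", mem="4Gi"), limits=Resources(cpu="2", mem="8Gi"))
def heavy_task(n: int) -> int:
    # memory-intensive work would go here
    return n * 2
```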

Hank Huang

09/05/2023, 2:15 AM
I see, so the resource limit in this PR is just a per-pod limit?
On the system side, do you know where Flyte enforces the resource limit?

Yee

09/05/2023, 2:16 AM
i don’t think there’s a `defaults` key, is there?
set the requests to be what you want the default to be.

Guy Harel

09/05/2023, 12:19 PM
This was somewhat unintuitive to me, so it might be the source of the confusion here, but note that if you annotate a task with a resource request (i.e., `requests=Resources(cpu="200m", mem="100Mi")`), then it sets the limits as well as the requests. The pod would then get a limit of 100Mi in this case, even though the default might be higher.
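A hedged sketch of that behaviour (hypothetical task name): with only `requests` set, the resulting pod is capped at those values:
```python
from flytekit import Resources, task

# Hypothetical example: only `requests` is given, so per the behaviour described
# above the pod's limits end up equal to the requests, capping memory at 100Mi
# even if the platform default limit is higher.
@task(requests=Resources(cpu="200m", mem="100Mi"))
def small_task(x: int) -> int:
    return x + 1
```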

Eduardo Matus

09/05/2023, 12:57 PM
so basically the requests are the limits (thinking in terms of pod instances)? I have a task that varies in memory usage depending on the parameters; if I set high request values, that doesn’t mean all pods for that task will allocate that amount of resources, correct?

Victor Churikov

09/05/2023, 2:16 PM
> I have a task that varies in memory usage depending on the parameters
I have a similar case: a task that calculates how much memory I need based on my custom logic, and then passes that value as an input to a dynamic workflow. In the nested dynamic workflow, I use `.with_overrides()` to dynamically set memory requests/limits based on that input, looking somewhat like this:
```python
from flytekit import Resources, dynamic

@dynamic
def my_wf(memory_gb: str):
    ...
    resource_overrides = Resources(cpu="1", mem=f"{memory_gb}Gi")
    ...
    some_task_outputs = some_task(inputs=inputs).with_overrides(requests=resource_overrides)
    ...
    # some other tasks in this wf that consume the above task's outputs
```
I did not test this with regular `@workflow`s, but I think workflow inputs aren’t promises since they are immediately available, so it should work the same; I still need to test it in action. If a regular workflow doesn’t work, use a dynamic one, which works for me.
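A rough sketch of the surrounding wiring (all names here are hypothetical, reusing `my_wf` from the snippet above): one task estimates the memory, and a top-level workflow passes that estimate into the dynamic workflow:
```python
from flytekit import task, workflow

# Hypothetical sizing task: derive the needed memory (in GiB) from the input
# size and return it as a string, matching my_wf's signature above.
@task
def estimate_memory(num_rows: int) -> str:
    return str(max(1, num_rows // 1_000_000))

# Top-level workflow that feeds the estimate into the dynamic workflow, where
# with_overrides() can use the materialized value at runtime.
@workflow
def top_level(num_rows: int):
    memory_gb = estimate_memory(num_rows=num_rows)
    my_wf(memory_gb=memory_gb)
```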

Yee

09/05/2023, 6:48 PM
> I have a task that varies in memory usage depending on the parameters; if I set high request values, that doesn’t mean all pods for that task will allocate that amount of resources, correct?
yes, there’s an ongoing project to allow users to configure this dynamically, but it’s a ways away. for now, you are able to override at the workflow level with the `with_overrides()` call.
> The pod would then get a limit of 100Mi in this case, even though the default might be higher.
yes the reason for this comes from k8s… for everything except CPU, you need to set request = limit otherwise k8s changes the eviction policy.
oh yeah what victor said.