rhythmic-lizard-91902
09/05/2023, 1:55 AM
My platform config has:

    task_resource_defaults:
      task_resources:
        defaults:
          storage: 100Mi
          ephemeral-storage: 1000Mi
        requests:
          cpu: 1
          memory: 100Mi
        limits:
          cpu: 4
          memory: 64Gi
          storage: 2000Mi
          ephemeral-storage: 2000Mi

but if I don't specify the limit at the task level using the @task decorator, the task fails with OOMKilled. Any ideas?
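A plain-Python sketch of one way to read this failure, based on the behavior discussed in this thread (this is a hypothetical model, not actual Flyte or Kubernetes source; the function and constant names are made up, and the 100Mi/64Gi numbers come from the config above):

```python
# Platform values from the config above, in Mi.
PLATFORM_DEFAULT_MEMORY_REQUEST_MI = 100    # requests.memory: 100Mi
PLATFORM_MEMORY_LIMIT_MI = 64 * 1024        # limits.memory: 64Gi (a ceiling, not a default)

def resolve_memory(task_request_mi=None, task_limit_mi=None):
    """Hypothetical model: return (request, limit) in Mi for a task's pod."""
    # A task that sets nothing falls back to the platform default request.
    request = task_request_mi or PLATFORM_DEFAULT_MEMORY_REQUEST_MI
    # The memory limit falls back to the request, not to the platform limit.
    limit = task_limit_mi or request
    # The platform limit only caps what a task may ask for.
    return request, min(limit, PLATFORM_MEMORY_LIMIT_MI)

# No resources in @task: the pod's memory limit is only 100Mi, so any task
# that needs more is OOMKilled.
print(resolve_memory())                    # -> (100, 100)
# Setting an explicit limit at the task level avoids the OOMKill.
print(resolve_memory(task_limit_mi=2048))  # -> (100, 2048)
```

Under this model, the 64Gi platform limit never becomes the pod's limit by itself; it only bounds what a task may request.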
thankful-minister-83577
09/05/2023, 2:16 AM
the defaults key is there?
tall-ram-83532
09/05/2023, 12:19 PM
If you set only the requests at the task level (e.g. requests=Resources(cpu="200m", mem="100Mi")), then it sets the limits as well as the requests. The pod would then get a limit of 100Mi in this case, even though the default might be higher.

rhythmic-lizard-91902
09/05/2023, 12:57 PM
I have a task that varies in memory usage depending on the parameters. If I set high request values, that doesn't mean that all pods for that task will allocate that amount of resources, correct?

kind-kite-58745
09/05/2023, 2:16 PM
> I have a task that varies in memory usage depending on the parameters
I have a similar case, where one task calculates how much memory I need based on my custom logic and then passes that value as an input to a dynamic workflow. In the nested dynamic workflow, I use .with_overrides() to dynamically set memory requests/limits based on the input from the other task, looking somewhat like this:
    @dynamic
    def my_wf(memory_gb: str):
        ...
        resource_overrides = Resources(cpu="1", mem=f"{memory_gb}Gi")
        ...
        some_task_outputs = some_task(inputs=inputs).with_overrides(requests=resource_overrides)
        ...
        # <some other tasks in this wf that consume the above task's outputs>
I did not test that with regular @workflows, but I think workflow inputs aren't promises since they are immediately available, so it should work the same; I'd need to test this in action. If a regular workflow doesn't work, use a dynamic, which works for me.

thankful-minister-83577
> I have a task that varies in memory usage depending on the parameters. If I set high request values, that doesn't mean that all pods for that task will allocate that amount of resources, correct?
Yes. There's an ongoing project to allow users to configure this dynamically, but it's a ways away. For now, you are able to override at the workflow level with the with_overrides() call.
thankful-minister-83577
> The pod would then get a limit of 100Mi in this case, even though the default might be higher.
Yes, the reason for this comes from k8s: for everything except CPU, you need to set request = limit, otherwise k8s changes the eviction policy.
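The eviction behavior referenced here is the Kubernetes QoS class: a pod whose requests equal its limits for every resource is "Guaranteed" and is evicted last under node pressure, while a pod with requests below limits is only "Burstable". A simplified model of the documented k8s rules (this helper is illustrative, not Flyte or Kubernetes code):

```python
def qos_class(requests: dict, limits: dict) -> str:
    """Simplified single-container version of the k8s QoS assignment rules."""
    if not requests and not limits:
        return "BestEffort"   # no resources set: evicted first under pressure
    if requests and requests == limits and {"cpu", "memory"} <= requests.keys():
        return "Guaranteed"   # cpu and memory requests equal limits: evicted last
    return "Burstable"        # anything in between

print(qos_class({"cpu": "1", "memory": "1Gi"}, {"cpu": "1", "memory": "1Gi"}))  # Guaranteed
print(qos_class({"memory": "1Gi"}, {"memory": "2Gi"}))                          # Burstable
print(qos_class({}, {}))                                                        # BestEffort
```

This is why setting request = limit for memory keeps the pod in the more eviction-resistant class, whereas CPU overuse is merely throttled rather than causing an OOMKill.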