# flyte-support
p
Hi! I’m encountering an issue with memory allocation in my Flyte deployment using the flyte-core Helm chart. Despite specifying requests and limits in the Flyte task decorator, these settings do not seem to affect the pod's resources. The pods always allocate memory (and CPU) based on the default values defined in the flyte-admin-base-config under task_resource_defaults, like this in values.yaml:
```yaml
task_resource_defaults:
  task_resources:
    defaults:
      cpu: 250m
      memory: 2Gi
    limits:
      cpu: 5000m
      memory: 64Gi
```
Here’s what I’ve done so far:
• Set a low default memory request and a high memory limit in task_resource_defaults.
• Observed that changing the defaults in task_resource_defaults affects the memory available to the pods.
• Observed that changing the requests and limits in the task decorator (whether inside or outside the limits from task_resource_defaults) has no effect at all.
I have been checking the resources available to the pod with kubectl describe on the pod/node, and it always shows the requests and limits as exactly the same as the defaults in task_resource_defaults. Also, when the memory usage of the pod goes over that limit, it gets OOMKilled. For some reason the Flyte task decorator seems to be ignored, and only the defaults are applied. Could anyone help me understand what might be missing or misconfigured? Any insights or suggestions would be greatly appreciated!
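For reference, the decorator usage that's being ignored looks roughly like this (a simplified sketch; the task name, body, and resource figures are just illustrative):
```python
from flytekit import task, Resources

# Requests/limits set directly on the task decorator. In my deployment these
# appear to be ignored and the pod gets the task_resource_defaults values instead.
@task(requests=Resources(cpu="2", mem="8Gi"), limits=Resources(cpu="4", mem="16Gi"))
def train_model() -> None:
    ...
```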
g
which version of flytekit and flyteadmin are you using?
p
It is flyte-core version v1.10.0. It wasn't set up by me and I'm not exactly sure about the flytekit and flyteadmin versions. Looking into that.
flytekit is 1.3.2. Seems pretty old and that should be easy to update!
g
yup, could you try to update flytekit and run it again? let me know if it still doesn’t work
p
I'll try that!
g
thanks
p
updated to flytekit 1.12.2, but that doesn't seem to have any effect
a
have you tried setting the resources with `.with_overrides()` when calling the task, instead of relying on the decorator?
p
^ This works 🤩
Thanks for the tip! Pretty weird though that there is such a difference between using the task decorator versus overriding it like this 🤷 I just set
`.with_overrides(limits=Resources(cpu="6", mem="40Gi"))`
and it allocates that to the pod
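For context, the full call site looks something like this (simplified; the workflow and task names are placeholders):
```python
from flytekit import task, workflow, Resources

@task
def train_model() -> None:
    ...

@workflow
def training_pipeline() -> None:
    # Overriding resources at the call site is what actually ends up in the pod spec.
    train_model().with_overrides(limits=Resources(cpu="6", mem="40Gi"))
```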
a
so it overrides even the task_resource_defaults config?
p
yes, so for me currently the task_resources defaults and limits work as intended, but the decorator settings on the task have no effect on anything. By default the pod is started with the memory specified in task_resources defaults. However, if I use `.with_overrides` it correctly modifies the memory within the limits specified in task_resources limits; it clearly cannot go higher than the limit. The weird part is why the task decorator requests and limits aren't working.
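To summarize what I'm seeing (a sketch with illustrative names and values, not my exact code):
```python
from flytekit import task, workflow, Resources

# Decorator resources: ignored in my deployment; the pod still gets the
# task_resource_defaults values (250m CPU / 2Gi memory).
@task(requests=Resources(cpu="2", mem="8Gi"))
def train_model() -> None:
    ...

@workflow
def training_pipeline() -> None:
    # Call-site override: honored, as long as it stays within the
    # task_resources limits (5000m CPU / 64Gi memory).
    train_model().with_overrides(limits=Resources(cpu="6", mem="40Gi"))
```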