# ask-the-community
f
Hey team! I’m checking the logs in the propeller, and seeing:
`Failed to cache Dynamic workflow [[CACHE_WRITE_FAILED] Failed to Cache the metadata, caused by: The entry size is larger than 1/1024 of cache size`
In the propeller optimization guide I saw that there is supposed to be a `storage.cache` field to tune this, but I could not find any examples of how to fill it in (is it a number? a string? a map?). Could you link me to some resource about this?
d
So it looks like the `1/1024` is a constraint from the freecache (v1.1.1) library we're using. However, as you mentioned, we can configure the cache size using the `max_size_mbs` option (example here). If this value is set to `0`, the cache is disabled. Looking through these values in the flyte deployments, it seems like they should be updated, as they're relatively arbitrary. Are you using a default deployment and running into these issues?
Also, does the workflow fail because of this? We probably should warn if an entry exceeds the cache size limit, but fall back to just not using the cache.
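To make the limit concrete: freecache rejects any single entry larger than 1/1024 of the total cache size, so the maximum entry size follows directly from `max_size_mbs`. A quick sketch (the helper function is hypothetical, it's just the arithmetic):

```python
def max_entry_bytes(max_size_mbs: int) -> int:
    """Largest single entry freecache will accept: 1/1024 of the total cache size."""
    cache_bytes = max_size_mbs * 1024 * 1024
    return cache_bytes // 1024

# With a 10 MB cache, any entry over 10 KiB triggers the
# "entry size is larger than 1/1024 of cache size" error:
print(max_entry_bytes(10))  # 10240
```

So a large dynamic workflow's metadata can easily blow past the limit unless `max_size_mbs` is raised well above the default.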
f
Hey! Thanks for getting back to me. It’s not failing, but we’re getting error logs for this. Maybe info level would be more appropriate? I’m also investigating why I have some queued tasks in the propeller that disappear if I restart the deployment; I think these things might be causing that, but I'm not sure yet, still trying to validate.