# ask-the-community
e
Hi, asking about distributed locking in Flyte. I noticed there's already an issue about the ability to serialize executions so that none of them run in parallel if one is already running, but I'm thinking of an even more generic feature: some sort of distributed mutex/lock/semaphore. Edit: issue 267 is exactly about what I'm after. As a huge fan of Redis, I guess one rather simple way to achieve this would be to use distributed locks (Redlock), but I wanted to ask here how you solve similar issues. My use case involves an ETL pipeline where a certain task takes a huge amount of memory, and I never want two workflows running in parallel because that would cause an OOM immediately. On the other hand, I want to schedule that workflow often enough that if there's a long backlog, a new workflow starts quickly once the previous one finishes.
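For context, the Redis locking pattern mentioned above boils down to `SET key token NX PX ttl` to acquire and an atomic compare-and-delete to release. A minimal sketch of that pattern follows; note this is not Flyte code, and `FakeRedis`, `try_acquire`, and `release` are hypothetical names standing in for a real Redis client (redis-py's `Redis.set(name, value, nx=True, px=ms)` has the same shape as the `set` below):

```python
import time
import uuid

class FakeRedis:
    """In-memory stand-in for the two Redis operations the lock pattern
    needs: SET key value NX PX, and delete-only-if-owner."""
    def __init__(self):
        self._data = {}  # key -> (value, expiry timestamp)

    def set(self, name, value, nx=False, px=None):
        now = time.monotonic()
        current = self._data.get(name)
        if nx and current is not None and current[1] > now:
            return None  # key held and not expired: NX refuses to overwrite
        expiry = now + (px / 1000.0) if px else float("inf")
        self._data[name] = (value, expiry)
        return True

    def get(self, name):
        current = self._data.get(name)
        if current is not None and current[1] > time.monotonic():
            return current[0]
        return None

    def delete_if_owner(self, name, value):
        # In real Redis this GET + DEL must run atomically (a Lua script),
        # so one client never deletes a lock another client now holds.
        if self.get(name) == value:
            del self._data[name]
            return True
        return False

def try_acquire(client, lock_name, ttl_ms):
    """Non-blocking acquire: returns an owner token on success, or None if
    the lock is already held. A scheduled workflow run could simply skip
    itself when this returns None, rather than queueing up."""
    token = uuid.uuid4().hex
    if client.set(lock_name, token, nx=True, px=ttl_ms):
        return token
    return None

def release(client, lock_name, token):
    """Release only if we still own the lock (the token matches)."""
    return client.delete_if_owner(lock_name, token)
```

The TTL (`px`) matters: if the workflow crashes without releasing, the lock expires on its own instead of blocking all future runs, which is the main reason this pattern is preferred over a bare `SETNX` with no expiry.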
b
Hi Edvard and welcome to the community! Would you mind putting a proposal together in the RFC incubator? That's usually where initial ideas for RFCs are sketched out and then discussed in our bi-weekly community sync (the next one being tomorrow). If it gets enough traction it would then graduate into an RFC. cc @David Espejo (he/him) feel free to correct me if I'm wrong here
e
Created #3754
k
e
Thanks -- I should have added that unfortunately cache serialization doesn't help, because due to the nature of this workflow the cache would never be hit
k
I mean, you could fake-use the cache?
Ohh wait I understand
You want them to run serially, my bad, not cache
But to be honest we could just use the cache reserve protocol to achieve this
e
Yup, or not even serially -- just don't run at all if a process is already running. Issue 267 mentions exactly the same problem I have
k
These are scheduled executions, would that be ok?
e
It would be ok, and would alleviate the problem. The thing is, I can easily see a situation where the scheduler queues n invocations of a workflow, even though the currently running workflow would make all of those unnecessary once it completes. Then again, there shouldn't be that many such queued invocations, so it could be the way to go
Only now noticed your reply, and yes, skipping is exactly what would be preferred in this case 🙂
k
Ok then I would implement this at the launchplan level