Hello, I've reached a point with Flyte where I'm asking myself how larger data (we're talking GBs) is passed efficiently between tasks.
I can't imagine that, in a setup on e.g. AWS, such large output/input data is synced to S3 all the time. Or am I missing something?
What would be the way to exchange big data between tasks inside a workflow without uploading and downloading it again for every task?
I could imagine sharing it via a volume mounted from the k8s cluster, but that would probably interfere with the caching mechanism at some point, right?
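To make the question concrete, here's a plain-Python sketch (hypothetical names, no Flyte APIs, just stdlib) of the two patterns I mean: tasks that pass the payload itself by value, versus tasks that pass only a reference to where the payload lives (e.g. a path on a shared volume or an object-store URI):

```python
import tempfile
from pathlib import Path

def producer_by_value() -> bytes:
    # Pattern A: the task output IS the (potentially GB-sized) payload,
    # so the platform would have to move it between task containers.
    return b"x" * 1024  # imagine gigabytes here

def producer_by_reference(workdir: Path) -> str:
    # Pattern B: the task output is only a reference (a path/URI);
    # the payload stays where it was written, e.g. a mounted volume.
    payload = workdir / "big.bin"
    payload.write_bytes(b"x" * 1024)
    return str(payload)

def consumer(ref: str) -> int:
    # The downstream task resolves the reference and reads the data
    # in place instead of receiving a copy.
    return len(Path(ref).read_bytes())

with tempfile.TemporaryDirectory() as d:
    ref = producer_by_reference(Path(d))
    assert consumer(ref) == len(producer_by_value())
```

So my question is basically whether Flyte supports something like pattern B natively, and how that interacts with caching.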