# ask-the-community
m
hi folks, do you run Flyte -- local mode -- in notebooks in a transient notebook env (e.g. Databricks, Colab)? In that situation, how do you enable the local cache? The default behavior writes output to
~/.flyte/local-cache/
which strongly assumes a durable, persistent local env
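A minimal stdlib-only sketch of the kind of disk-backed, input-hash-keyed cache being discussed -- illustrative only, not flytekit's actual implementation (flytekit uses the `diskcache` library under the hood). The point is that the default root lives under the home directory, which is exactly what a recycled Databricks/Colab VM throws away:

```python
# Illustrative sketch of a local task-result cache keyed by hashed inputs.
# The default root, ~/.flyte/local-cache/, disappears when a transient
# notebook VM is recycled -- hence the ask in this thread.
import hashlib
import json
import pathlib
import sqlite3


class LocalDiskCache:
    def __init__(self, root: str = "~/.flyte/local-cache"):
        self.root = pathlib.Path(root).expanduser()
        self.root.mkdir(parents=True, exist_ok=True)
        self.db = sqlite3.connect(str(self.root / "cache.db"))
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, value TEXT)"
        )

    @staticmethod
    def key(task_name: str, cache_version: str, inputs: dict) -> str:
        # Deterministic key from task identity + canonicalized inputs.
        blob = json.dumps([task_name, cache_version, inputs], sort_keys=True)
        return hashlib.sha256(blob.encode()).hexdigest()

    def get(self, key: str):
        row = self.db.execute(
            "SELECT value FROM cache WHERE key = ?", (key,)
        ).fetchone()
        return json.loads(row[0]) if row else None

    def put(self, key: str, value) -> None:
        self.db.execute(
            "INSERT OR REPLACE INTO cache VALUES (?, ?)", (key, json.dumps(value))
        )
        self.db.commit()
```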
k
Wdym
Local cache is local
m
right.. but if i run Flyte code in a Databricks environment for example, the cache just doesn't work across runs
so i'm asking if there are good solutions for such an ephemeral environment?
k
Wdym is there no disk
There should be
Ya we can make it store in s3
I love the idea
m
yah i see that `~/.flyte/local-cache/` now has a SQLite artifact
k
Shall we collaborate on this
m
and flytekit likely queries this local db
k
Ya it does
m
so if we implement an s3 path, it's gonna look more like Flyte running on a K8s cluster
k
No it won’t
As that needs a db
Here we will have to use lookup
m
just an s3 lookup by uri path?
k
But if this is a custom cache then we could Simply upload the cache db
That’s the other option
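A rough sketch of the first option above: one lookup per cache key against a blob store, no database involved. `BlobStore` and the in-memory stand-in are illustrative names only, not an agreed flytekit API; a real implementation would wrap boto3 or fsspec instead:

```python
# Hedged sketch of "lookup per key": every cache hit is a single GET against
# a blob store, addressed by the cache key. No db file to sync.
import json
from typing import Optional, Protocol


class BlobStore(Protocol):
    def get(self, path: str) -> Optional[bytes]: ...
    def put(self, path: str, data: bytes) -> None: ...


class InMemoryStore:
    """Test double standing in for S3 (hypothetical, for illustration)."""

    def __init__(self):
        self._objects: dict[str, bytes] = {}

    def get(self, path: str) -> Optional[bytes]:
        return self._objects.get(path)

    def put(self, path: str, data: bytes) -> None:
        self._objects[path] = data


def cache_lookup(store: BlobStore, prefix: str, cache_key: str):
    """Return the cached output for cache_key, or None on a miss."""
    raw = store.get(f"{prefix}/{cache_key}")
    return json.loads(raw) if raw is not None else None


def cache_write(store: BlobStore, prefix: str, cache_key: str, output) -> None:
    store.put(f"{prefix}/{cache_key}", json.dumps(output).encode())
```

The second option k mentions (uploading the whole cache db) would instead sync the SQLite file itself on startup/shutdown; the per-key lookup avoids moving the entire db on every run.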
m
ok i'd love to collaborate
databricks is the standard notebooking env we are going with, so we'd like to have caching functionality there
k
Ok I have never used it so would love to understand
Why databricks
Let’s have a chat sometime
m
databricks notebooks have great UX and it's been worth the $$ for the productivity gain for our engs
let me write something up and share it with you to get a first round of feedback
we actually want to make local/remote execution more seamless.. often when folks do remote execution, they are hoping they can reuse results from remote execution to iterate locally as well
k
@Mick Jermsurawong using remote cache locally is dangerous
But you can fetch all the data
Checkout the new Flyte data uri
m
it is dangerous in the sense that you are concerned about data corruption right?
i think read-only secondary cache is sufficient for us
anyways that's a secondary ask.. I think the first ask is just to be able to have external durable storage for local execution, as described in the issue above
pls let me know further thoughts, and will be happy to contribute
k
Let me discuss today
m
hi ketan! any further thoughts on this?
k
i read it briefly, i have some comments, i guess i think if we set an s3 path you do not even need a prefix / context
but also we won't have time to work on this at the moment
m
if the s3 path can be an env var as well, that would work.. i'm happy to implement the work here, but want to make sure that directionally it's something OSS will accept
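The env-var idea could look like the sketch below. The variable name `FLYTE_LOCAL_CACHE_PATH` is hypothetical -- nothing in this thread settles on a name -- it just illustrates letting transient notebook envs point the cache at durable storage:

```python
# Hypothetical knob, for illustration only: an env var overrides the
# local-cache location; otherwise fall back to the current default.
import os


def cache_root() -> str:
    return os.environ.get("FLYTE_LOCAL_CACHE_PATH", "~/.flyte/local-cache")
```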
k
yes i think we should, @Yee is out but he will be back week after (he got married)
m
sounds good. will work with Yee on this then
hi @Eduardo Apolinario (eapolinario)! thanks for the input here https://github.com/flyteorg/flyte/issues/4580#issuecomment-1864541011 also happy to chat here if it's more helpful.
e
awesome, let's keep chatting here. It wouldn't be too hard to lean on flytekit's existing infra to support loading/writing to a blob store. I just wanted to separate the two ideas: (1) the local scope, and (2) a remote cache. If you want to throw up a PR I'd be more than happy to review.
m
gotcha.. 1/ local scope here simply means the cache's local disk path is configurable, right? 2/ remote cache will also reuse that cache path? 2.1/ do you have a preference: should we simply sync the whole DB files that python diskcache writes, or should we try to encode the cache key in individual remote blob store paths (closer to how on-cluster execution works)?
e
1/ correct. 2/ Yeah, the local scope is optional, its purpose is just to help you segregate local caches. 2.1/ That's a good question. It'd be simpler to sync all DB files, but I fear that this might make the local cache very slow after multiple runs (imagine the case of a few thousand objects of different sizes being stored there). I also dislike the fact that if we go that route the local cache becomes slower and slower... so my vote goes for making each entry its own separate entry in the blob store. wdyt?
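The per-entry layout being voted for here could address each entry by a deterministic path built from the task identity plus a hash of its inputs, so lookups stay a single GET no matter how many entries accumulate (unlike a monolithic db file that grows across runs). The exact path layout below is an assumption for illustration, not a decided scheme:

```python
# Illustrative sketch: one blob-store object per cache entry, addressed by
# task name, cache version, and a hash of canonicalized inputs.
import hashlib
import json


def entry_path(prefix: str, task_name: str, cache_version: str, inputs: dict) -> str:
    # sort_keys makes the digest independent of input-dict ordering.
    digest = hashlib.sha256(
        json.dumps(inputs, sort_keys=True).encode()
    ).hexdigest()
    return f"{prefix}/{task_name}/{cache_version}/{digest}"
```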
m
Sorry eduardo for late response. And happy new year! 2.1/ Yup each cache entry can have its own entry in the blob store. That makes sense to me