Yes @freezing-airport-6809, that's exactly it. In our stack, we use MLflow to track experiments and save models. Integrating it with Flyte would give us a single place to analyse executions and, consequently, model performance.
If I understood correctly, our current approach is very similar to @jolly-whale-9142's. Does your MLflow server have some sort of authorization layer? If so, how do you make it work? Right now we're passing Secrets to tasks and setting the required environment variables with a `configure_mlflow()` helper function.
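For context, here is a minimal sketch of what that kind of helper could look like. The secret group/keys and the tracking URI are placeholders rather than our actual setup, and the task would still have to request the secrets via `secret_requests` in `@task`:

```python
import os

import mlflow
from flytekit import current_context


def configure_mlflow() -> None:
    """Pull MLflow credentials from Flyte Secrets and point the client at the server.

    The secret group/keys and the fallback tracking URI below are placeholders;
    adjust them to however the secrets are mounted in your deployment.
    """
    ctx = current_context()
    os.environ["MLFLOW_TRACKING_USERNAME"] = ctx.secrets.get("mlflow", "username")
    os.environ["MLFLOW_TRACKING_PASSWORD"] = ctx.secrets.get("mlflow", "password")
    mlflow.set_tracking_uri(
        os.environ.get("MLFLOW_TRACKING_URI", "https://mlflow.internal.example.com")
    )
```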
I think this decorator could work in two ways: (a) simply starting and ending a run, and configuring the connection to the server if needed (rough sketch below); or (b) the same as (a), but also capturing all logging calls made inside the function and batch-logging everything after the run is over. Option (a) is very simple, maybe too simple to justify an integration; option (b) would be awesome, but I can't think of a way to implement it without creating an extra layer of work for the end user. Thoughts?
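To make option (a) concrete, here is a hedged sketch of what such a decorator might look like. The decorator name is hypothetical, and the connection setup step just points back at the `configure_mlflow()`-style helper discussed above:

```python
import functools
from typing import Optional

import mlflow


def with_mlflow_run(func=None, *, experiment_name: Optional[str] = None):
    """Hypothetical decorator for option (a): configure the server connection if
    needed, open an MLflow run around the task body, and close it on return."""

    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            # Connection setup (tracking URI, credentials) would go here,
            # e.g. by calling a configure_mlflow()-style helper as above.
            if experiment_name is not None:
                mlflow.set_experiment(experiment_name)
            with mlflow.start_run():  # the context manager ends the run even if the task raises
                return fn(*args, **kwargs)

        return wrapper

    return decorator if func is None else decorator(func)
```

A task author would stack it under `@task` and keep calling `mlflow.log_param` / `mlflow.log_metric` as usual inside the function body, so (a) really only handles run lifecycle plus connection setup.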