# ask-the-community
s
Hello, is there a way to handle "workflow termination from UI" from Python task plugins? Our plugin spins up a `dask` cluster, and we would like to ensure that the cluster gets cleaned up in every case that the workflow finishes (successful completion, failure, or user termination). We are currently using the dask-kubernetes `KubeCluster` operator to create the cluster in `pre_execute`, then close everything in `post_execute` (pseudo-code). However, `post_execute` doesn't get called when a user terminates from the UI, so we are seeing the `dask` cluster consistently hang. Thanks for your help!
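A minimal sketch of the lifecycle described above, with a stub standing in for `dask_kubernetes` `KubeCluster` (the class and hook names mirror the thread's description, not any specific flytekit API). The point is that `post_execute` is only reached on the normal code path; a UI-driven abort kills the pod before it runs, so `close()` is skipped:

```python
class StubKubeCluster:
    """Stand-in for dask_kubernetes KubeCluster, so the sketch is self-contained."""

    def __init__(self) -> None:
        self.closed = False

    def close(self) -> None:
        # In the real plugin this tears down the dask scheduler/workers.
        self.closed = True


class DaskTaskPlugin:
    """Illustrative task plugin with pre/post hooks as described in the thread."""

    def pre_execute(self) -> None:
        # Spin up the cluster before the task body runs.
        self.cluster = StubKubeCluster()

    def run(self, fn):
        return fn()

    def post_execute(self) -> None:
        # Only reached on normal completion; a termination from the UI
        # kills the pod before this hook fires, leaving the cluster up.
        self.cluster.close()


plugin = DaskTaskPlugin()
plugin.pre_execute()
plugin.run(lambda: None)
plugin.post_execute()
assert plugin.cluster.closed  # cleaned up on the happy path only
```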
k
This is not currently supported in flytekit, but we have an issue tracking it: the idea is that flytekit would execute some cleanup code (deleting the dask cluster, in this case) upon termination of the task.
s
Ah, thank you! Is this on your roadmap, and what do you suggest as a workaround? We have been looking into the k8s Owners and Dependents mechanism (i.e. set the dask cluster as a dependent of the `flyteworkflow` pod so that it gets cleaned up automatically when the flyte pod does) but haven't figured out how to access the uid of the `flyteworkflow` from the plugin code.
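For context, the Owners and Dependents mechanism works by adding an `ownerReferences` entry to the dependent resource's metadata; the k8s garbage collector then deletes the dependent when the owner is deleted. A hedged sketch of what that entry would look like here (the `apiVersion`/`kind` values and resource names are illustrative, and the `uid` is exactly the piece this thread says is hard to obtain):

```python
def owner_reference(workflow_name: str, workflow_uid: str) -> dict:
    """Build a metadata.ownerReferences entry pointing at the flyteworkflow
    custom resource, so the GC deletes the dependent when it is deleted."""
    return {
        # Assumed group/version/kind of the flyteworkflow CRD; verify in your cluster.
        "apiVersion": "flyte.lyft.com/v1alpha1",
        "kind": "FlyteWorkflow",
        "name": workflow_name,
        "uid": workflow_uid,  # must match the live object's uid for GC to apply
    }


# Illustrative DaskCluster manifest carrying the owner reference.
cluster_manifest = {
    "apiVersion": "kubernetes.dask.org/v1",  # dask-kubernetes operator CRD (assumed)
    "kind": "DaskCluster",
    "metadata": {
        "name": "my-dask-cluster",
        "ownerReferences": [owner_reference("wf-abc123", "0000-uid")],
    },
}
```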
t
@Kevin Su revisit this please!
s
FYI we unblocked ourselves on the k8s Owners and Dependents solution by exposing the flyteworkflow uid as an environment variable, @Derek Yu can fill in more details!
k
yes, this is on our roadmap. we may implement this in the next flytekit release. nice, good to know that.