# runllm-flyte-trial
New RunLLM response generated:

# Question
Hey team! In our current Flyte -> Spark setup, we use our internal infra to run Spark jobs. This infra is owned by another team. Currently there is no shutdown hook implemented, i.e. if a user terminates the Flyte job, the Spark job keeps running. Question: what is the recommended way to implement shutdown? I can think of a couple of approaches:
1. A shutdown handler that stops the Spark job when the Flyte workflow is being terminated.
2. A Flyte Agent that terminates any orphaned Spark jobs.

# Answer
It seems likely that implementing a shutdown handler to stop the Spark job when a Flyte workflow is terminated is a viable approach (a minimal sketch follows at the end of this message). However, there are known challenges with signal handling in Flyte tasks, as discussed in the GitHub issue listed under Sources: the flytekit PythonTask interface does not currently support a termination hook, although there is interest in adding this feature.

Another approach is to use a Flyte Agent to manage and terminate orphaned Spark jobs. This method is suggested in this discussion, where agents are recommended for integrating with external services. A Flyte Agent can be configured to clean up resources when a workflow is terminated (see the second sketch below).

Would you like more details on either of these approaches?

Sources:
- GitHub issue: Flytekit plugin termination hook
- this discussion

Slack link: https://flyte.slack.com/archives/CP2HDHKE1/p1732037296.543429
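As a minimal sketch of the first approach: a signal handler registered inside a flytekit task that forwards the cancellation to the external infra. Here `internal_spark_infra` and its `SparkClient` (with `submit`, `cancel`, and `wait`) are hypothetical stand-ins for the internal team's API; the assumption is that Kubernetes delivers SIGTERM to the task pod when the workflow is aborted. Per the GitHub issue above, flytekit's own signal handling may interfere, so treat this as best-effort rather than a guaranteed hook.

```python
import signal
import sys

from flytekit import task

# Hypothetical client for the internal Spark infra owned by the other team.
# Replace SparkClient / submit / cancel / wait with the real API.
from internal_spark_infra import SparkClient


@task
def run_spark_job(job_conf: str) -> str:
    client = SparkClient()
    handle = client.submit(job_conf)  # start the external Spark job

    def _terminate(signum, frame):
        # Best-effort cleanup: when the Flyte workflow is aborted,
        # Kubernetes sends SIGTERM to the pod; forward the cancellation
        # to the external infra before exiting.
        client.cancel(handle)
        sys.exit(1)

    signal.signal(signal.SIGTERM, _terminate)
    signal.signal(signal.SIGINT, _terminate)

    # Block until the external job finishes and return its result.
    return client.wait(handle)
```

Note that the handler only has the pod's termination grace period (Kubernetes' `terminationGracePeriodSeconds`, 30s by default) to complete the cancel call before the container is killed.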
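And a sketch of the second approach, assuming flytekit's agent API (`flytekit.extend.backend.base_agent`, available in recent flytekit releases). The task type name `internal_spark` and the helpers `submit_job`, `get_job_status`, and `cancel_job` are hypothetical placeholders for the internal infra. The relevant piece is `delete`: flytepropeller invokes it when an execution is terminated or aborted, which is what prevents orphaned Spark jobs.

```python
from dataclasses import dataclass
from typing import Optional

from flyteidl.core.execution_pb2 import TaskExecution
from flytekit.extend.backend.base_agent import (
    AgentRegistry,
    AsyncAgentBase,
    Resource,
    ResourceMeta,
)
from flytekit.models.literals import LiteralMap
from flytekit.models.task import TaskTemplate

# Hypothetical client functions for the internal Spark infra.
from internal_spark_infra import submit_job, get_job_status, cancel_job


@dataclass
class SparkJobMetadata(ResourceMeta):
    # Persisted by flytepropeller between create/get/delete calls.
    job_id: str


class InternalSparkAgent(AsyncAgentBase):
    name = "Internal Spark Agent"

    def __init__(self):
        # "internal_spark" is an assumed task type name; it must match the
        # task_type of the custom task that routes work to this agent.
        super().__init__(
            task_type_name="internal_spark", metadata_type=SparkJobMetadata
        )

    def create(
        self, task_template: TaskTemplate, inputs: Optional[LiteralMap] = None, **kwargs
    ) -> SparkJobMetadata:
        # Submit the Spark job to the external infra and remember its id.
        job_id = submit_job(task_template.custom)
        return SparkJobMetadata(job_id=job_id)

    def get(self, resource_meta: SparkJobMetadata, **kwargs) -> Resource:
        # Poll the external infra and map its status to a Flyte phase.
        status = get_job_status(resource_meta.job_id)
        phase = TaskExecution.SUCCEEDED if status == "done" else TaskExecution.RUNNING
        return Resource(phase=phase)

    def delete(self, resource_meta: SparkJobMetadata, **kwargs):
        # Called when the workflow is terminated/aborted: cancel the job on
        # the external infra so nothing is left running.
        cancel_job(resource_meta.job_id)


AgentRegistry.register(InternalSparkAgent())
```

The design advantage over the in-task signal handler is that cancellation is driven by the Flyte control plane rather than by a signal racing a pod shutdown, so cleanup still happens even if the task container dies uncleanly.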