# ask-the-community
Hi, I am trying to set up the Databricks integration from Flyte via the plugin in our enterprise setting and have a few questions. Could someone please help answer them?
  1. We have an existing Databricks cluster up and running with flytekit and flytekitplugins-spark installed. Can I still provide the Docker image I want to use for running my task in the cluster config, or should Docker image paths be provided only for new clusters created via the plugin's config? (A rough sketch of what I mean is included after this list.)
  2. We are using pyflyte package + flytectl register to package and register the workflows, since we have to do this via CI/CD when we are ready to productionize them. (The commands we run are sketched after this list.)
     a. Can you please help me understand how the code gets injected into the Databricks cluster path when using pyflyte package + flytectl register? My understanding is that flytectl register pushes only the protobuf objects, so I believe I should always provide a custom Docker image with the code copied to the Databricks path inside the image.
     b. When we run pyflyte package with the --image parameter, I am assuming this is the image that is actually used to launch the task in Databricks. Is that right?
     c. What is the purpose of the destinationDir parameter in flytectl register? The documentation says it is the path where the code resides in the container. Does it relate to the image specified during pyflyte package?
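
To make question 1 concrete, here is roughly the shape of task I have in mind. This is only a sketch: the cluster ID, workspace URL, image path, and function are placeholders for our setup, and I am assuming that whatever goes into databricks_conf is passed through to the Databricks Jobs API, so existing_cluster_id can point at the cluster that is already running.

```python
from flytekit import task
from flytekitplugins.spark import Databricks


@task(
    task_config=Databricks(
        spark_conf={"spark.driver.memory": "2g"},
        databricks_conf={
            # Assumption: this dict is forwarded to the Databricks Jobs API,
            # so existing_cluster_id reuses the cluster that already has
            # flytekit and flytekitplugins-spark installed.
            "existing_cluster_id": "1234-567890-abcde123",
        },
        databricks_instance="dbc-placeholder.cloud.databricks.com",
    ),
    # Question 1: is this image honored when the task runs on the existing
    # cluster, or only when the plugin creates a new cluster from the config?
    container_image="ghcr.io/myorg/flyte-databricks:latest",
)
def my_spark_task(n: int) -> int:
    return n * 2
```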
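
And for question 2, this is roughly the CI/CD step we run today. Again only a sketch: the image name, package, project, and version are placeholders, and the exact spelling of the destinationDir flag may differ by flytectl version.

```bash
# Build and push the image that has our workflow code copied into it
# (Dockerfile contents and registry path are placeholders).
docker build -t ghcr.io/myorg/flyte-databricks:latest .
docker push ghcr.io/myorg/flyte-databricks:latest

# Serialize the tasks/workflows to protobuf, stamping each task with the
# image given here (question 2b: is this the image Databricks actually runs?).
pyflyte --pkgs workflows package \
  --image ghcr.io/myorg/flyte-databricks:latest \
  --output flyte-package.tgz \
  --force

# Upload only the serialized protobufs to FlyteAdmin (question 2a: no user
# code travels in this step, right?). destinationDir (question 2c) is the
# path where the code is expected to live inside the container.
flytectl register files \
  --project myproject --domain development \
  --archive flyte-package.tgz \
  --version "${CI_COMMIT_SHA}" \
  --destinationDir /root
```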