# ask-the-community
r
Hi @Evan Sadler @Kevin Su I'm trying to follow this guide and run a Flyte workflow on Databricks. I configured the dbx plugin based on this guide: https://docs.flyte.org/en/latest/deployment/plugins/webapi/databricks.html#deployment-plugin-setup-webapi-databricks To take the easy way out first, I'm running the workflow on an existing cluster. I installed `pyflyte` on the cluster and triggered the run like this:

```
pyflyte run --remote dbx_example_existing_cluster.py my_databricks_job
```
The Databricks job starts, but fails with this error:

```
TypeError: loader must define exec_module()
```
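For reference, a minimal `dbx_example_existing_cluster.py` might look something like the sketch below. This is an assumption based on the linked guide, not the actual example file: the `Databricks` task config comes from `flytekitplugins-spark`, and the cluster ID and conf values are placeholders.

```python
# Sketch only: assumes flytekitplugins-spark is installed and the
# Databricks plugin is enabled in the Flyte deployment.
from flytekit import task, workflow
from flytekitplugins.spark import Databricks


@task(
    task_config=Databricks(
        spark_conf={"spark.driver.memory": "1g"},
        databricks_conf={
            # Keys follow the Databricks Jobs API; the cluster ID is a placeholder.
            "run_name": "flyte-databricks-example",
            "existing_cluster_id": "1234-567890-abcdefgh",
        },
    )
)
def hello_spark() -> str:
    return "hello from Databricks"


@workflow
def my_databricks_job() -> str:
    return hello_spark()
```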
Kevin Su
Did you build an image for the Databricks task?
Seems like a module-not-found error
r
No, I assumed it's not needed when running the job on an existing cluster.
the guide says:

> For demonstration purposes, I saved time on setup by employing an existing cluster that already had the necessary Python dependencies installed. In a production environment, you can choose to configure a job cluster using a custom Docker image based on Databricks' base build, ensuring that all necessary dependencies are present.
I presume the Docker image is only needed when you want to create a new cluster and manage the dependencies in the image.
I've just created a cluster, installed the `pyflyte` package, and copied the actual code to `/databricks/driver/databricks/`.
So, even if I use an existing cluster, I need to create a Docker image and use that image in the cluster config, right?
Evan Sadler
I was able to avoid using a custom Docker image, but I needed to make sure that the code Flyte injected into the cluster was in the correct place, i.e. wherever the Python root was. There is a `--destination-dir` option in pyflyte; I believe I set it to the relative path "." and it worked. I'm not 100% sure this is the issue, but it's worth a try. Flyte packages your code (without dependencies) and then injects it into the Databricks cluster/task. It then runs a command inside the machine with some assumptions. Usually it goes wrong when the code ends up in a different place or the destination Python env is in a different place.
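Concretely, that run might look like this (a sketch reusing the file and workflow names from above; `--destination-dir` is the `pyflyte run` option Evan mentions):

```
pyflyte run --remote --destination-dir . dbx_example_existing_cluster.py my_databricks_job
```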
r
All right, thanks for the hint, will give it a try.
One more quick question: did you install all these packages on your cluster, or just the `pyflyte` package as I did?
Evan Sadler
I had a requirements.txt and just made sure to install that. One of the packages was `flytekit`; I didn't use `pyflyte` on its own.
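For illustration, such a requirements.txt could be as small as this (contents assumed, not Evan's actual file):

```
flytekit
pandas  # plus whatever else the tasks import
```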
r
All right, thanks
Managed to localize the issue. When I create a Databricks cluster, install the `flytekit` package, and create a notebook with these import statements:

```python
from flytekit.core.workflow import ImperativeWorkflow as Workflow
from flytekit.core.workflow import WorkflowFailurePolicy, reference_workflow, workflow
from flytekit.deck import Deck
from flytekit.image_spec import ImageSpec
from flytekit.loggers import logger
```

and run it, I get the same `TypeError: loader must define exec_module()` error.
The interesting thing is that this only happens on the first run of the cell; when I rerun the same cell, it completes successfully.
If I clear the notebook state and re-run the cell, the issue comes back.
I suspect it's a compatibility issue between the DBR version and flytekit.
I used the DBR 12.2 LTS runtime and the latest flytekit package.
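For anyone reproducing this, a small diagnostic cell can record the versions involved (the `DATABRICKS_RUNTIME_VERSION` environment variable is set by Databricks runtimes):

```python
# Print the Python, flytekit, and DBR versions for the bug report.
import os
import sys

import flytekit

print("Python:", sys.version)
print("flytekit:", flytekit.__version__)
print("DBR:", os.environ.get("DATABRICKS_RUNTIME_VERSION"))
```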
Let me try the same versions that were used in the example: DBR 11.3 LTS and `flytekit==1.3.0b4`.
Looks better; the loader error is gone, but now:

```
ModuleNotFoundError: No module named 'flytekit.image_spec'
```

which makes sense, since `flytekit.image_spec` doesn't exist yet in 1.3.0b4.
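If the notebook has to work on both flytekit versions, a guarded import is one possible workaround (a sketch, assuming the code can simply skip `ImageSpec` when it's unavailable):

```python
# flytekit.image_spec only exists in newer flytekit releases.
try:
    from flytekit.image_spec import ImageSpec
except ModuleNotFoundError:
    ImageSpec = None  # e.g. flytekit 1.3.0b4 predates this module
```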
Let me try DBR 11.3 LTS and `flytekit==1.7.0`.
Ah, ok, `TypeError: loader must define exec_module()` again, so this issue might be specific to `flytekit==1.7.0`.
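One way to narrow this down further would be to bisect flytekit versions on the cluster (a sketch; the pinned version is illustrative, and `%pip` and `dbutils.library.restartPython()` are standard Databricks notebook utilities):

```
# Cell 1: pin a candidate flytekit version (illustrative).
%pip install "flytekit==1.6.2"

# Cell 2: restart the interpreter so the next run is a fresh "first run".
dbutils.library.restartPython()
```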