Evan Sadler
12/22/2022, 3:42 PM
@task(
    task_config=Databricks(
        databricks_conf={
            "run_name": "test databricks",
            "existing_cluster_id": "1220-215617-43ri4502",
            "timeout_seconds": 3600,
            "max_retries": 1,
        }
    ),
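[Editor's note: a fuller version of the snippet above might look like the following. This is a hedged sketch assuming flytekit and the flytekitplugins-spark package (which provides the Databricks task config); the databricks_conf values are copied from the message, while the function name and body are illustrative.]

```python
# Sketch only: assumes `flytekit` and `flytekitplugins-spark` are installed.
# The conf values are from the message above; the task itself is hypothetical.
from flytekit import task
from flytekitplugins.spark import Databricks


@task(
    task_config=Databricks(
        databricks_conf={
            "run_name": "test databricks",
            "existing_cluster_id": "1220-215617-43ri4502",
            "timeout_seconds": 3600,
            "max_retries": 1,
        }
    ),
)
def my_databricks_task() -> str:  # hypothetical task body
    return "ran on databricks"
```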
Kevin Su
12/22/2022, 6:45 PM
_execute_task_cmd.callback(test=False, **args)
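[Editor's note: the `.callback(...)` call above works because Click stores the original undecorated function on the command object, so it can be invoked directly without CLI parsing. A minimal sketch of that pattern — the `greet` command here is a made-up stand-in, not flytekit's actual `_execute_task_cmd`:]

```python
# Demonstrates Click's `.callback` attribute: the decorated command object
# keeps the plain Python function, so you can call it directly and skip
# command-line argument parsing. `greet` is a hypothetical stand-in.
import click


@click.command()
@click.option("--name")
@click.option("--test/--no-test", default=True)
def greet(name: str, test: bool) -> str:
    mode = "test" if test else "real"
    return f"hello {name} ({mode})"


# Invoke the underlying function directly, as in the message above:
result = greet.callback(name="flyte", test=False)
```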
Evan Sadler
12/22/2022, 6:59 PM
ModuleNotFoundError: No module named 'flyte_cookiecutter'
This is my folder structure and I have init…
flyte_cookiecutter
├── __init__.py
└── workflows
    ├── __init__.py
    └── databricks.py
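[Editor's note: a ModuleNotFoundError like the one above is typically a path issue — `flyte_cookiecutter` is only importable when its *parent* directory (the project root) is on `sys.path`, e.g. when commands are run from the root rather than from inside the package. A small sketch reproducing that with a throwaway copy of the layout above:]

```python
# Recreate the folder layout from the message in a temp directory, then
# show that the package imports once its parent directory is on sys.path.
import os
import sys
import tempfile

root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, "flyte_cookiecutter")
wf_dir = os.path.join(pkg_dir, "workflows")
os.makedirs(wf_dir)
for d in (pkg_dir, wf_dir):
    open(os.path.join(d, "__init__.py"), "w").close()
open(os.path.join(wf_dir, "databricks.py"), "w").close()

sys.path.insert(0, root)  # equivalent to running from the project root
import flyte_cookiecutter.workflows.databricks  # now resolves cleanly
```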
Kevin Su
12/22/2022, 8:39 PM
"." for dest directory?
Evan Sadler
12/22/2022, 9:00 PM
Kevin Su
12/22/2022, 9:07 PM
Evan Sadler
12/22/2022, 9:10 PM
Kevin Su
12/22/2022, 9:18 PM
Evan Sadler
12/22/2022, 9:28 PM
Tanmay Mathur
12/22/2022, 9:33 PM
Kevin Su
12/22/2022, 9:40 PM
Ketan (kumare3)
Frank Shen
01/05/2023, 7:33 PM
Kevin Su
01/05/2023, 7:39 PM
Frank Shen
01/05/2023, 7:42 PM
Kevin Su
01/05/2023, 8:00 PM
Yee
Evan Sadler
01/05/2023, 8:14 PM
Frank Shen
01/05/2023, 8:26 PM
Kevin Su
01/05/2023, 8:56 PM
> For the spark plugin, it seems like it saves and loads the datasets directly from S3.
That transformer only works for spark.dataframe. Yes, you can directly return a Spark DataFrame in the task as well.
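[Editor's note: a hedged sketch of what "directly return a Spark DataFrame" can look like, assuming flytekit plus flytekitplugins-spark, whose type transformer handles `pyspark.sql.DataFrame` return values; the task name, Spark conf, and data are illustrative, not from the thread.]

```python
# Illustrative only: assumes flytekit + flytekitplugins-spark are installed.
# The plugin's transformer serializes a pyspark.sql.DataFrame result for
# downstream tasks (e.g. writing it out to blob storage when run remotely).
import pyspark
from flytekit import task
from flytekitplugins.spark import Spark


@task(task_config=Spark(spark_conf={"spark.driver.memory": "1g"}))
def make_df() -> pyspark.sql.DataFrame:  # hypothetical task
    sess = pyspark.sql.SparkSession.builder.getOrCreate()
    return sess.createDataFrame([(1, "a"), (2, "b")], ["id", "name"])
```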