abundant-hamburger-66584
12/22/2022, 3:42 PM
@task(
    task_config=Databricks(
        databricks_conf={
            "run_name": "test databricks",
            "existing_cluster_id": "1220-215617-43ri4502",
            "timeout_seconds": 3600,
            "max_retries": 1,
        }
    ),
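For orientation, here is a minimal runnable sketch of what that truncated decorator would attach to, assuming the Databricks task config from flytekitplugins-spark; the imports, function name, and body are illustrative additions, not part of the original message:

from flytekit import task
from flytekitplugins.spark import Databricks

@task(
    task_config=Databricks(
        databricks_conf={
            "run_name": "test databricks",
            "existing_cluster_id": "1220-215617-43ri4502",
            "timeout_seconds": 3600,
            "max_retries": 1,
        }
    ),
)
def hello_databricks(a: int) -> int:
    # The task body is shipped to and executed on the existing Databricks
    # cluster referenced in databricks_conf above.
    return a + 1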
glamorous-carpet-83516
12/22/2022, 6:45 PM
_execute_task_cmd.callback(test=False, **args)
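For readers unfamiliar with that call: flytekit's entrypoint commands are Click commands, and a Click command keeps the undecorated function on its .callback attribute, so it can be invoked in-process without going through the CLI. A generic sketch of that pattern (the option and the args dict below are made up, not flytekit's real signature):

import click

@click.command("pyflyte-execute")
@click.option("--test", is_flag=True)
def _execute_task_cmd(test):
    # Stand-in body; the real flytekit command loads and runs the task here.
    print(f"test={test}")

args = {}  # hypothetical remaining keyword arguments
_execute_task_cmd.callback(test=False, **args)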
abundant-hamburger-66584
12/22/2022, 8:09 PM
ModuleNotFoundError: No module named 'flyte_cookiecutter'
This is my folder structure and I have init…
flyte_cookiecutter
    __init__.py
    workflows
        __init__.py
        databricks.py
glamorous-carpet-83516
12/22/2022, 8:39 PM
. for dest directory?
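(Presumably this refers to the --destination-dir flag on pyflyte register: copying the packaged code to "." inside the container puts the flyte_cookiecutter package on the import path at run time, which is the usual fix for the ModuleNotFoundError above, e.g. something like pyflyte register --destination-dir . flyte_cookiecutter run from the project root. The exact command used in this thread isn't shown, so treat that as a reconstruction.)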
glamorous-carpet-83516
01/05/2023, 8:56 PM
For the spark plugin, it seems like it saves and loads the datasets directly from s3:
That transformer only works for Spark DataFrames. Yes, you can directly return a Spark DataFrame from the task as well.
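To make the last point concrete, a minimal sketch of tasks that return and accept a Spark DataFrame directly, assuming flytekitplugins-spark is installed; the spark_conf values, names, and sample data are made up:

import flytekit
import pyspark.sql
from flytekit import task, workflow
from flytekitplugins.spark import Spark

@task(task_config=Spark(spark_conf={"spark.driver.memory": "1g"}))
def make_df() -> pyspark.sql.DataFrame:
    # The Spark task config exposes a session on the execution context.
    sess = flytekit.current_context().spark_session
    return sess.createDataFrame([(1, "a"), (2, "b")], ["id", "name"])

@task(task_config=Spark(spark_conf={"spark.driver.memory": "1g"}))
def count_rows(df: pyspark.sql.DataFrame) -> int:
    # Between tasks the DataFrame is written to and read back from blob
    # storage (e.g. s3) by the plugin's type transformer.
    return df.count()

@workflow
def wf() -> int:
    return count_rows(df=make_df())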