# flyte-support
b
Hey all, we hit a bug where parallelism drops from unlimited to 1, which extremely slows down the workflow. The Flyte logs show the following (notice the last line):
{"json":{"exec_id":"ahsk8mtk7gjnhp8g2b9j","ns":"flytesnacks-development","res_ver":"221585641","routine":"worker-15","src":"executor.go:364","wf":"flytesnacks:development:brainflyte.etl.exports.workflow_preprocessed_data_for_download.wf_prepare_and_save_group_data"},"level":"info","msg":"Handling Workflow [ahsk8mtk7gjnhp8g2b9j], id: [project:\"flytesnacks\" domain:\"development\" name:\"ahsk8mtk7gjnhp8g2b9j\" ], p [Running]","ts":"2025-01-13T16:29:37Z"}
{"json":{"exec_id":"ahsk8mtk7gjnhp8g2b9j","node":"n0","ns":"flytesnacks-development","res_ver":"221585641","routine":"worker-15","src":"handler.go:180","wf":"flytesnacks:development:brainflyte.etl.exports.workflow_preprocessed_data_for_download.wf_prepare_and_save_group_data"},"level":"info","msg":"Dynamic handler.Handle's called with phase 1.","ts":"2025-01-13T16:29:37Z"}
{"json":{"exec_id":"ahsk8mtk7gjnhp8g2b9j","node":"n0/dn3/n12","ns":"flytesnacks-development","res_ver":"221585641","routine":"worker-15","src":"handler.go:180","wf":"flytesnacks:development:brainflyte.etl.exports.workflow_preprocessed_data_for_download.wf_prepare_and_save_group_data"},"level":"info","msg":"Dynamic handler.Handle's called with phase 0.","ts":"2025-01-13T16:29:37Z"}
{"json":{"exec_id":"ahsk8mtk7gjnhp8g2b9j","node":"n0/dn3/n12","ns":"flytesnacks-development","res_ver":"221585641","routine":"worker-15","src":"handler.go:670","tasktype":"python-task","wf":"flytesnacks:development:brainflyte.etl.exports.workflow_preprocessed_data_for_download.wf_prepare_and_save_group_data"},"level":"info","msg":"Parallelism now set to [1].","ts":"2025-01-13T16:29:37Z"}
We are using flytekit 1.13.5. We are also using map_task inside a dynamic workflow (which creates duplicate runs in the UI, as shown in the attached pic). I don't know if this is related, but up until now it looked like it didn't really have any effect on our workflows.
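For reference, the setup looks roughly like this - a minimal sketch with placeholder names and bodies (the real per-group task does our export logic), just to show the map_task-inside-dynamic structure:

```python
# Minimal sketch of the setup described above (placeholder task/function
# bodies; structure only: map_task fanned out inside a @dynamic workflow).
from typing import List

from flytekit import Resources, dynamic, map_task, task, workflow


@task(requests=Resources(cpu="4", mem="16Gi"))
def process_group(group_id: str) -> str:
    # stand-in for the real per-group export logic
    return f"processed-{group_id}"


@dynamic
def prepare_and_save(group_ids: List[str]) -> List[str]:
    # map_task used inside a dynamic workflow, fanning out over ~800 items
    return map_task(process_group)(group_id=group_ids)


@workflow
def wf_prepare_and_save_group_data(group_ids: List[str]) -> List[str]:
    return prepare_and_save(group_ids=group_ids)
```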
t
could you elaborate a bit when you get a chance @best-oil-18906 - are you able to repro this without the dynamic task? the dynamic task shouldn’t really be affecting anything. also roughly what’s the length of the map task & resources requested?
i think the log line might be a red herring btw
b
hey, the length of the map task is around 800, resources are 16Gi / 4 CPU. I'll try to reproduce it tomorrow
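when I do, I might also try setting the parallelism cap explicitly on a launch plan to rule out an execution-level limit - rough sketch, assuming LaunchPlan.get_or_create accepts max_parallelism (import path and names below are placeholders):

```python
# Rough sketch: set an explicit parallelism cap on a launch plan
# (assumes flytekit's LaunchPlan.get_or_create supports max_parallelism;
# the workflow import path below is a placeholder).
from flytekit import LaunchPlan

from my_module import wf_prepare_and_save_group_data  # placeholder import

lp = LaunchPlan.get_or_create(
    workflow=wf_prepare_and_save_group_data,
    name="wf_prepare_and_save_group_data_parallel",
    max_parallelism=100,  # explicit cap on concurrently running task nodes
)
```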