agreeable-kitchen-44189
07/27/2022, 10:26 AM
We are setting up Flyte with three domains (development, staging, and production). We do have a POC running and are currently testing.
One of the hurdles we're seeing at the moment is related to reference_tasks (and workflows, etc.). The rough outline is that some of our teams publish workflows which are then consumed by downstream workflows from other teams. To keep the teams' Docker environments separate (and thus allow different Python environments, etc.), we're using reference tasks to connect the workflows.
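Roughly what the wiring looks like on our side (a minimal sketch; the project/domain/name/version values below are made up for illustration, not our actual registrations):
```python
from flytekit import reference_task, workflow


# Hypothetical names/versions: the other team's task is only referenced by its
# registered identifier, so our workflow never imports their code or Docker image.
@reference_task(
    project="ingestion-team",
    domain="development",
    name="ingest.tasks.import_data",
    version="abc123",
)
def import_data(source: str) -> str:
    """Signature must match the registered task; the body is never executed locally."""
    ...


@workflow
def downstream_wf(source: str) -> str:
    # Consume the other team's task like any local task.
    return import_data(source=source)
```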
The problem now is that whenever a reference task is started, it will run in the same domain (i.e. k8s-cluster/k8s-namespace) as the workflow that started it.
Imagine a workflow which creates some output based on data in the platform. This workflow uses a launch plan from another team to ensure the required data is ingested into the platform. For separation reasons, the service account in the development domain only has read access to the data. The import workflow (from the other team) would need write access, though (which it would have if it were running in the production domain/cluster).
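To make that concrete, a minimal sketch of the scenario (again, all project/domain/name/version values are placeholders):
```python
from flytekit import reference_launch_plan, workflow


# Hypothetical sketch: the ingestion launch plan is registered by the other
# team and needs write access to the platform, while our development service
# account only has read access.
@reference_launch_plan(
    project="ingestion-team",
    domain="production",
    name="ingest.workflows.import_data_lp",
    version="v1.2.0",
)
def import_data_lp(dataset: str) -> bool:
    ...


@workflow
def create_output(dataset: str) -> bool:
    # When this workflow runs in the development domain, the referenced
    # ingestion work also ends up executing in development (same cluster,
    # namespace, and service account), which is the problem we're hitting.
    return import_data_lp(dataset=dataset)
```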
I would think this is a limitation of Propeller (since it receives the whole workflow CRD) and thus it can't schedule a sub-workflow to a different domain or cluster, as that would mean leaving the data plane? Or are we missing something here?