Got it - even in this case, I can see a few ways to do this that avoid actually performing the container builds in the orchestration system. Two slightly different approaches would be:
1. The Flyte pipeline dynamically fans out to train N models and saves them under a common S3 prefix. A downstream GitHub Actions job then reads all the model artifacts from that prefix and builds/pushes the container images. (Even better: if the bulk of your container contents is the same across models, the only layer that differs can be the model itself, so everything else stays cached.)
2. The Flyte pipeline triggers GitHub Actions (or some other CI system) to run a job, passing an input argument that points at the saved model artifact (via the `workflow_dispatch` API in the GitHub Actions case)
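To illustrate the layer-caching point in option 1, here's a minimal Dockerfile sketch - the base image, paths, and `MODEL_FILE` build arg are all hypothetical, but the structure shows how keeping the model copy as the final layer means only that layer changes per model:

```dockerfile
# Shared serving base (hypothetical image/paths): identical for every
# model, so these layers are built once and reused from cache.
FROM python:3.11-slim
COPY serving/ /app/
RUN pip install --no-cache-dir -r /app/requirements.txt

# Only this layer differs per model. The CI job downloads each artifact
# from the common S3 prefix and passes its local path as a build arg.
ARG MODEL_FILE
COPY ${MODEL_FILE} /app/model/model.tar.gz
CMD ["python", "/app/serve.py"]
```

The CI job would then loop over the artifacts and run one `docker build --build-arg MODEL_FILE=...` per model, with all the shared layers hitting the cache.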
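For option 2, here's a sketch of constructing the `workflow_dispatch` call from a Flyte task. The endpoint shape and headers are the real GitHub REST API, but the repo (`my-org/model-images`), workflow file, and `model_s3_uri` input name are hypothetical stand-ins for whatever your CI job defines:

```python
import json
import urllib.request


def build_dispatch_request(owner: str, repo: str, workflow_file: str,
                           ref: str, inputs: dict) -> urllib.request.Request:
    """Build (but don't send) a GitHub Actions workflow_dispatch request."""
    url = (f"https://api.github.com/repos/{owner}/{repo}"
           f"/actions/workflows/{workflow_file}/dispatches")
    body = json.dumps({"ref": ref, "inputs": inputs}).encode()
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Accept": "application/vnd.github+json",
            # In practice, inject a real token from a secret in the
            # Flyte task's environment.
            "Authorization": "Bearer <GITHUB_TOKEN>",
        },
    )


# Point the CI job at the model artifact the pipeline just wrote.
req = build_dispatch_request(
    "my-org", "model-images", "build-model-image.yml", "main",
    {"model_s3_uri": "s3://my-bucket/models/run-123/model.tar.gz"},
)
# Sending is then just: urllib.request.urlopen(req)
```

The workflow on the GitHub side declares a matching `model_s3_uri` input under `on: workflow_dispatch: inputs:` and pulls the artifact down before building.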
Also, you may be able to get away with `docker buildx` to produce multi-arch images even if your build host isn't ARM-based - buildx can cross-build for other platforms via QEMU emulation.
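A typical multi-arch invocation looks like this (the image name is hypothetical; this assumes Docker with buildx available and QEMU binfmt handlers installed, e.g. via `docker run --privileged --rm tonistiigi/binfmt --install arm64`):

```shell
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/model-server:latest \
  --push .
```

Note that emulated ARM builds are noticeably slower than native ones, which is fine for a thin model-layer build but worth measuring if your image compiles anything heavy.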