# flyte-deployment
m
Is there a way to rebuild the database that flyteadmin uses from the S3 bucket? Today we attempted to migrate to a new S3 bucket: we took a backup of the old bucket, restored it into the new one, and updated our values.yaml to point at the new bucket. The backend then started erroring with:
```
Unable to read WorkflowClosure from location s3://[redacted-old-bucket-name]/metadata/admin/flytesnacks/production/plaster.genv2.generators.sigproc_v2.sigproc_v2_analyze_workflow/23.5.19 : path:s3://[redacted-old-bucket-name]/metadata/admin/flytesnacks/production/plaster.genv2.generators.sigproc_v2.sigproc_v2_analyze_workflow/23.5.19: Conf container:[redacted-new-bucket-name] != Passed Container:[redacted-old-bucket-name]. Dynamic loading is disabled: not found" debug_error_string = "UNKNOWN:Error received from peer {grpc_message:"...
```
(I've elided the rest of the message because it just repeats the same error.)
A couple of things jump out at me:
• What is "dynamic loading"? Is what we're trying to do possible with it enabled?
• If, in some catastrophic event, one lost the RDS instance, it feels like the S3 bucket has all the information necessary to rebuild it, right?
This story ended with us rolling back to the original S3 bucket, but we'd still like to make the move!
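For reference, the bucket-to-bucket copy described above could be done roughly like the sketch below. This is a minimal boto3 example, not the tooling actually used in the thread; the bucket names are placeholders for the redacted ones, and the `metadata/` prefix is taken from the error path.

```python
# Minimal sketch of copying Flyte metadata objects from one bucket to another.
# Bucket names are placeholders; the actual migration may have used other tooling.
import boto3

OLD_BUCKET = "redacted-old-bucket-name"  # placeholder
NEW_BUCKET = "redacted-new-bucket-name"  # placeholder

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

# Copy every object under the Flyte metadata prefix into the new bucket.
for page in paginator.paginate(Bucket=OLD_BUCKET, Prefix="metadata/"):
    for obj in page.get("Contents", []):
        s3.copy_object(
            Bucket=NEW_BUCKET,
            Key=obj["Key"],
            CopySource={"Bucket": OLD_BUCKET, "Key": obj["Key"]},
        )
```

Note that copying the objects alone does not change the absolute s3:// URIs flyteadmin has already stored, which is presumably why the error above still points at the old bucket.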
k
No, you cannot today.
But you are right.
We'd rather rely on DB backups as the source of truth.
m
@Ketan (kumare3) thanks for confirming that. Is it safe to update the RDS tables that reference the S3 bucket name? These tables explicitly have the S3 bucket in one of their columns:
• artifact_data
• node_executions
• task_executions
• workflows
I also see a lot of binary blobs / "closures" / hashes (perhaps?) in various tables, and I can't be sure whether they reference the bucket name as well.
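One way to scope the plain-text references is sketched below: scan every text column in the database for the old bucket name. This is a minimal sketch assuming a Postgres flyteadmin database reachable via psycopg2; the connection string and bucket name are placeholders, and it will not catch bucket references embedded inside the serialized closure blobs mentioned above.

```python
# Scan all text-like columns in the flyteadmin Postgres DB for the old bucket name.
# Connection parameters and the bucket name are placeholders.
import psycopg2
from psycopg2 import sql

OLD_BUCKET = "redacted-old-bucket-name"  # placeholder

conn = psycopg2.connect("dbname=flyteadmin user=flyteadmin host=localhost")

with conn, conn.cursor() as cur:
    # Find every text-like column in the public schema.
    cur.execute(
        """
        SELECT table_name, column_name
        FROM information_schema.columns
        WHERE table_schema = 'public'
          AND data_type IN ('text', 'character varying')
        """
    )
    columns = cur.fetchall()

    # Count rows per column that still reference the old bucket.
    for table, column in columns:
        query = sql.SQL("SELECT count(*) FROM {t} WHERE {c} LIKE %s").format(
            t=sql.Identifier(table), c=sql.Identifier(column)
        )
        cur.execute(query, (f"%{OLD_BUCKET}%",))
        count = cur.fetchone()[0]
        if count:
            print(f"{table}.{column}: {count} rows reference the old bucket")
```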