Thanks @average-finland-92144. I've had a custom domain and kubernetes-ingress controller for my MinIO/minio-console services since my first iteration of the Flyte deployment, and the Helm config files point Flyte at the custom MinIO domain without issues. What's confounding me is that I only have problems with the MinIO server when uploading files through the Kubernetes Ingress I created for it, i.e., when Flyte uploads the .tar file of the packaged code. The issue doesn't occur for the Flyte admin, Flyte console, or even minio-console ingresses; only the MinIO server ingress that Flyte uses.
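For reference, the storage block in my Helm values points Flyte at that domain, roughly like this (endpoint and credentials are placeholders, and the exact nesting of this block varies by chart version):

storage:
  type: stow
  stow:
    kind: s3
    config:
      endpoint: https://minio.example.com    # custom domain served by the ingress
      auth_type: accesskey
      access_key_id: <minio-access-key>      # placeholder
      secret_access_key: <minio-secret-key>  # placeholder
      region: us-east-1
      disable_ssl: false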
I first thought this might be a bug in Flyte, since most people don't go the on-prem deployment route and I'm sure even fewer try to productionize their blob storage on-prem. But I quickly figured it must be an ingress issue, since Flyte doesn't throw checksum errors when uploading to my on-prem MinIO/S3 storage over port-forwarding. However, even after adding the following nginx annotations to ensure my ingress controller doesn't modify the file being uploaded, I still get the MD5 checksum error:
nginx.ingress.kubernetes.io/proxy-ssl-verify: "off"
nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
nginx.ingress.kubernetes.io/proxy-body-size: "0"
nginx.ingress.kubernetes.io/proxy-pass-headers: "Content-MD5, Authorization"
nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
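For context, those annotations sit on an Ingress shaped roughly like this (host, service name, and port are placeholders for my setup):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minio
  annotations:
    # ...the eight nginx annotations listed above...
spec:
  ingressClassName: nginx
  rules:
    - host: minio.example.com        # placeholder for my custom MinIO domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: minio          # placeholder MinIO service name
                port:
                  number: 9000       # MinIO S3 API port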
Let me know if you or anyone else has run into this with on-prem MinIO/S3 storage and how you got around it. At least I have a workaround in the meantime: port-forwarding the storage service to my local environment, sketched below.
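Concretely, the workaround is just something like this (namespace and service name are placeholders for whatever your MinIO release uses):

# Forward the MinIO S3 API port to localhost
kubectl -n flyte port-forward svc/minio 9000:9000

# ...then make the storage endpoint resolve to localhost:9000 (e.g., via the
# storage config or an /etc/hosts entry for the MinIO domain) while registering.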