# ask-the-community
a
Hi, Coming from https://docs.flyte.org/en/latest/deployment/aws/opta.html. I would like to customize the resources made by Opta (for example the parameters of the AWS ASG worker nodes, like the EC2 key pair name, auto-assigned public IPs, or VPC ACLs), but I can't see how to do that or where they are defined. Any ideas/pointers?
k
Hi @Attila Nagy, welcome. I think these are indeed changeable, but let me point you to the experts. Cc @Yuvraj @JD Palomino
🙏 1
y
@Attila Nagy You can check the opta terraform module they are using; you can only configure these values through opta: https://docs.opta.dev/reference/aws/modules/aws-base/#fields. I think a few inputs are not configurable, what do you say @JD Palomino?
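Something like this in the environment yaml, for example (a rough sketch; the module types and field names below are from memory of the opta docs, so verify them against the page above):

```yaml
# opta environment file (sketch) -- check module/field names against docs.opta.dev
name: flyte-playground          # hypothetical environment name
org_name: myorg                 # hypothetical org name
providers:
  aws:
    region: us-east-1
    account_id: "123456789012"  # placeholder
modules:
  - type: base                  # the aws-base module linked above
    total_ipv4_cidr_block: "10.0.0.0/16"   # example field; confirm the exact name in the docs
  - type: k8s-cluster           # the EKS module
  - type: k8s-base              # ingress nginx, linkerd, etc.
```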
a
So the infrastructure made by an `opta apply` from the flyte repo actually just creates resources defined in opta itself, with a very limited ability to customize them? Is the preferred way to modify them to generate the terraform files, edit them, and apply those (with terraform itself, or with opta)?
k
Really an opta question, but you can use the flyte helm charts directly.
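Rough idea of what that looks like (the chart and repo names may have changed, so check the charts directory in the flyte repo; the value keys below are only illustrative):

```yaml
# Rough flow, assuming the hosted chart repo (verify against the flyte repo's README):
#   helm repo add flyteorg https://flyteorg.github.io/flyte
#   helm install flyte flyteorg/flyte -n flyte --create-namespace -f my-values.yaml
#
# my-values.yaml (sketch) -- keys are illustrative, check the chart's values.yaml:
flyteadmin:
  replicaCount: 2          # illustrative override
  resources:
    requests:
      cpu: 500m
      memory: 512Mi
```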
j
1. EKS node groups do not support EC2 key pairs for ssh access, last I checked
2. What do you want public IPs for? We already give you a load balancer
3. VPC ACLs -- you mean security groups?
What exactly do you need to customize?
a
I would like to have a playground where the machines are (ssh) accessible without any further configuration; that's what the key pairs and public IPs would be needed for. The load balancer only exposes HTTP/HTTPS ports (BTW, after running `opta apply`, three instances are created, but the LB reports two of them as unhealthy, I'm not sure if that's normal), and I guess it wouldn't be useful for accessing the instances directly. Nope, I mean VPC ACLs. I don't like that the deployment is open to the public by default, and a VPC ACL could quickly solve that (but instance security groups are fine as well). Thanks.
j
EKS does not allow you to ssh into its nodegroup EC2s
Why would you want to do that in the first place?
a
The three nodes behind the LB are the EKS nodes?
j
yup
a
And is it normal for two of them to be unhealthy?
j
That’s a misnomer, but yes that should be fine
If you want I can go into detail about it
a
That would be awesome, yes
j
very well
the load balancer works by forwarding requests to different target groups depending on listener configurations
for our setup there’s only one target group which everything gets forwarded to
this target group references a specific port in each ec2 serving as a node which we hold as reserved for ingress nginx pods
Opta’s ingress nginx by default does not run in HA mode, to make it easy for folks to start using it without too many resource issues
Instead it runs as just one pod, on one EC2
that is the one which shows as healthy
the others are unhealthy because there is currently no nginx container running on them to handle traffic since, again, we’re not running HA
and that’s the explanation
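To picture it as a sketch (names and ports here are illustrative, not the exact Opta manifests): the nginx controller is exposed on a node port, and the LB target group points at that port on every node, so only the node actually running the single nginx pod passes the health check:

```yaml
# Illustrative only -- roughly how a NodePort-style ingress service maps to the
# LB target group's reserved port on each node.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # illustrative name
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: https
      port: 443
      targetPort: 443
      nodePort: 32443              # the "reserved" port the target group forwards to
```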
So why do you want to ssh into the ec2s?
a
Oh! So is deploying with Opta recommended for prod setups (where HA would be great)? If so, how?
Because the above wasn't clear to me, I wanted to look around to understand it better, but now I get it, thanks!
j
If you’re ready for prod and the extra load of HA, sure
a
Is that nginx used only for the web UI or all flyte-related communications?
j
everything is exposed through the LB
👍 1
a
So what is the recommended way of:
• deploying prod clusters with Opta
• changing the cluster size/instance types
• adding GPU nodes?
j
1. Other than the HA settings (see https://docs.opta.dev/reference/aws/modules/aws-k8s-base/), you should be good to go
2. The k8s-cluster module comes with a default nodegroup which you can configure (again, see our docs https://docs.opta.dev/reference/aws/modules/aws-eks/)
3. You can have multiple nodegroups for different node instance configurations (e.g. one with GPUs -- that can be set via the `use_gpu` bool input). See our nodegroup docs; something like the sketch below
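Putting those together, an environment file might look roughly like this (only `use_gpu` is confirmed above; the other module and field names are from memory of the opta docs, so check the linked pages):

```yaml
# Sketch only -- verify module types and field names against docs.opta.dev
modules:
  - type: base
  - type: k8s-cluster               # the default nodegroup lives here
    max_nodes: 10                   # illustrative field name
    node_instance_type: m5.xlarge   # illustrative field name
  - type: nodegroup                 # extra nodegroup for GPU workloads (module name from memory)
    name: gpu-pool
    instance_type: g4dn.xlarge      # illustrative
    use_gpu: true                   # the bool input mentioned above
  - type: k8s-base
    nginx_high_availability: true   # HA setting; field name from memory, see the aws-k8s-base docs
```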
a
OK, I hope I'm starting to get it. Thanks a lot for your help!
j
Anytime
k
thank you @JD Palomino