# ask-the-community
s
Hi all, can anyone please tell me how I can control the type of instance I get as part of the Spark cluster when running a Spark job through Flyte? Is there such a provision? I see that we can specify spark_conf; does that control which instance types are brought into the Kubernetes cluster for the Spark job? My Kubernetes cluster is on AWS. In Databricks, I can control what type of EC2 instances the master and worker nodes are; I'm looking for a similar feature in Flyte.
k
The instance type depends on your node pool; Flyte will not magically get a different instance type. Your autoscaler and ASG determine that.
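For concreteness, a minimal sketch of how this can be wired from the Flyte side with flytekit's Spark plugin: spark_conf can carry Spark-on-Kubernetes node selectors so the driver/executor pods land on an existing node group with the instance type you want. The label key/value (`eks.amazonaws.com/nodegroup: spark-r5-xlarge`) and the resource settings are assumptions; use whatever labels your node groups actually carry.
```python
# Sketch only: pin Spark pods to a specific EKS node group via node selectors.
# The node group (and therefore the EC2 instance type) is defined in your
# cluster / ASG, not by Flyte itself.
from flytekit import task
from flytekitplugins.spark import Spark


@task(
    task_config=Spark(
        spark_conf={
            "spark.executor.instances": "4",
            "spark.executor.memory": "8g",
            # Assumed label; route all Spark pods to nodes carrying it.
            "spark.kubernetes.node.selector.eks.amazonaws.com/nodegroup": "spark-r5-xlarge",
        }
    )
)
def my_spark_job() -> int:
    # Placeholder body; a real task would typically obtain a SparkSession
    # (e.g. via flytekit.current_context().spark_session) and run the job.
    return 0
```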
r
You may also want to take a look at Karpenter (https://karpenter.sh/) -- rather than having static managed node groups, you can leverage various provisioners and annotations/namespaces to spin up different instance types on demand
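A hedged sketch of the Karpenter-style approach: rather than pinning to a static node group, the Spark pods can request an instance type via the well-known `node.kubernetes.io/instance-type` label, and a Karpenter provisioner that permits that type can launch matching EC2 nodes on demand and scale them back down afterwards. The instance type and the provisioner behavior below are assumptions about the cluster setup.
```python
# Sketch only: request a specific instance type and let Karpenter (if its
# provisioner allows r5.2xlarge) bring up matching nodes just for these pods.
from flytekit import task
from flytekitplugins.spark import Spark


@task(
    task_config=Spark(
        spark_conf={
            "spark.executor.instances": "8",
            # Assumed instance type; must be allowed by the Karpenter provisioner.
            "spark.kubernetes.node.selector.node.kubernetes.io/instance-type": "r5.2xlarge",
        }
    )
)
def burst_spark_job() -> None:
    ...
```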
s
thanks @Rahul Mehta, will check it out!