Hi All, can anyone please tell me how I can control the type of instance I get as part of the Spark cluster when running a Spark job through Flyte? Is there such a provision? I see that we can specify spark_conf; does that take care of what type of instances are brought into the Kubernetes cluster for the Spark job?
I have my Kubernetes cluster on AWS
In Databricks, I can control what type of EC2 instances the master and worker nodes are. I'm looking for a similar feature in Flyte
freezing-airport-6809
11/02/2022, 1:41 PM
The type of instance depends on your node pool; Flyte will not magically get a different instance type. Your autoscaler and ASG determine that
👍 1
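As a minimal sketch of how the two pieces fit together: spark_conf by itself does not create instance types, but it can carry Spark-on-Kubernetes `spark.kubernetes.node.selector.*` settings so the driver/executor pods only schedule onto a node group whose ASG (or Karpenter provisioner) launches the instance type you want. This assumes the `flytekitplugins-spark` plugin, and the label key/value (`eks.amazonaws.com/nodegroup: spark-r5-xlarge`) is a hypothetical EKS managed node group name; substitute whatever labels your nodes actually carry.

```python
# requires: flytekit, flytekitplugins-spark
import random

import flytekit
from flytekit import task
from flytekitplugins.spark import Spark


@task(
    task_config=Spark(
        spark_conf={
            "spark.driver.memory": "2g",
            "spark.executor.memory": "8g",
            "spark.executor.instances": "4",
            # Spark-on-K8s node selector: constrain driver/executor pods to nodes
            # carrying this label. "spark-r5-xlarge" is a hypothetical node group
            # name; the ASG / Karpenter provisioner behind that group decides which
            # EC2 instance type actually gets launched.
            "spark.kubernetes.node.selector.eks.amazonaws.com/nodegroup": "spark-r5-xlarge",
        }
    ),
)
def estimate_pi(partitions: int = 10) -> float:
    """Toy Monte Carlo job whose pods only schedule onto the labelled node group."""
    sess = flytekit.current_context().spark_session
    n = 100_000 * partitions

    def hit(_: int) -> int:
        x, y = random.random(), random.random()
        return 1 if x * x + y * y <= 1.0 else 0

    count = sess.sparkContext.parallelize(range(n), partitions).map(hit).sum()
    return 4.0 * count / n
```

The autoscaler still has the final say: if no node group matches the selector, the pods simply stay pending, which is where Karpenter (mentioned below) can help by provisioning matching capacity on demand.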
elegant-australia-91422
11/02/2022, 6:09 PM
You may also want to take a look at Karpenter (https://karpenter.sh/) -- rather than having static managed node groups, you can leverage various provisioners and annotations/namespaces to spin up different instance types on demand
❤️ 1
worried-winter-16424
11/02/2022, 7:15 PM
thanks @elegant-australia-91422 will check it out!