Hi all, can anyone please tell me how I can control the type of instances that make up the Spark cluster when running a Spark job through Flyte? Is there such a provision? I see that we can specify spark_conf; does that control which instance types are brought into the Kubernetes cluster for the Spark job?
My Kubernetes cluster runs on AWS.
In Databricks, I can control which EC2 instance types the master and worker nodes use. I'm looking for a similar feature in Flyte.
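For context, here's a sketch of the kind of thing I'm imagining (instance types are hypothetical): a spark_conf dict that uses Spark-on-Kubernetes node-selector properties to pin driver and executor pods to specific EC2 instance types, assuming those properties are passed through by Flyte's Spark plugin. The per-role driver/executor node-selector keys need Spark 3.3+.

```python
# Hypothetical sketch: pin Spark driver/executor pods to EC2 instance types
# via Kubernetes node selectors, using the well-known node label
# "node.kubernetes.io/instance-type". This dict would be passed as
# Spark(spark_conf=...) in a Flyte task_config.
spark_conf = {
    # Driver pod lands only on m5.xlarge nodes (hypothetical choice)
    "spark.kubernetes.driver.node.selector.node.kubernetes.io/instance-type": "m5.xlarge",
    # Executor pods land only on r5.2xlarge nodes (hypothetical choice)
    "spark.kubernetes.executor.node.selector.node.kubernetes.io/instance-type": "r5.2xlarge",
    "spark.executor.instances": "4",
}

print(spark_conf["spark.executor.instances"])
```

Is this roughly the right approach, or does Flyte expect instance-type control to happen at the node-pool/autoscaler level instead?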