# ray-integration
Hi, I have a question about node scaling. Is there an option to make each worker pod run on a different node, so that the cluster spawns n smaller-memory nodes instead of one large one? For example, if I request 8 GB memory and 4 CPUs with 4 replicas, the autoscaler currently spawns a single high-memory instance and tries to fit all the worker pods onto that one node. Instead, I'd like each of the 4 worker pods to be scheduled on its own smaller node. Is there any way to achieve this?
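(Not an official answer, just a sketch.) Since KubeRay worker groups expose a full Kubernetes pod template, one common way to force one worker pod per node is `podAntiAffinity` on the worker template. The group name `small-workers`, the label `app: ray-worker`, and the resource values below are illustrative assumptions, not taken from this thread:

```yaml
# Sketch: RayCluster worker group where each replica repels pods
# carrying the same (assumed) label, so no two workers share a node.
workerGroupSpecs:
  - groupName: small-workers      # hypothetical group name
    replicas: 4
    template:
      metadata:
        labels:
          app: ray-worker         # assumed label used by the anti-affinity rule
      spec:
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              - topologyKey: kubernetes.io/hostname
                labelSelector:
                  matchLabels:
                    app: ray-worker
        containers:
          - name: ray-worker
            image: rayproject/ray:latest
            resources:
              requests:
                cpu: "4"
                memory: 8Gi
```

With a required anti-affinity rule like this, the cluster autoscaler cannot pack the 4 replicas onto one large instance and should instead bring up 4 separate nodes sized for a single 8 GB / 4 CPU pod. A softer alternative is `topologySpreadConstraints` or `preferredDuringScheduling...`, which spreads pods without failing scheduling when nodes are scarce.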
Could you revive that thread? I'll ping my team to respond.