Cluster load balancing
The Kubernetes cluster is designed to accept traffic on any of its nodes. Even if no instance of the target service is running on the node that receives the traffic, the Kubernetes ingress controller automatically forwards the connection to a node capable of servicing the request.
System administrators have several options to choose from when deciding how to route traffic into their ICE Server. Consider a hypothetical cluster with three nodes, n1.cluster.com, n2.cluster.com and n3.cluster.com:
Choose any one physical node in the cluster, say n1.cluster.com, and route all traffic to it by using its hostname as the server address when logging in. Doing so allows this node to function, implicitly, as the cluster’s load balancer. This is the simplest configuration, but it introduces a single point of failure: should n1.cluster.com fail, the ICE Server software will keep running on the remaining nodes, yet clients will have no way to reach it, because they only know the failed node's address.
Configure each physical node in your Kubernetes cluster as an 'ingress address' on the 'Organization' settings screen in ICE Desktop. In this example, you’d provision n1.cluster.com, n2.cluster.com, and n3.cluster.com as ingress addresses. When logging into ICE Server, users may enter any one of those hostnames as the server address. Should the node a client is using fail, the client will automatically try to reconnect through the other ingress addresses (a sketch of this failover behavior appears after these options). This configuration solves the single-point-of-failure problem described above, but it may not distribute ingress load evenly across the cluster, nor does it give an administrator a way to forcibly migrate users to or from a specific node.
Front the cluster with an external load balancer. This configuration can be as simple or as complex as your needs demand, up to and including global routing of traffic to georedundant clusters distributed around the world. A minimal sketch of the idea appears at the end of this section.
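To make the failover behavior of the second option concrete, the sketch below tries each configured ingress address in turn and connects to the first node that answers. It is purely illustrative: the port number (8443) is an assumption, and it does not reflect the actual reconnection logic inside the ICE client.

    import socket

    # Ingress addresses configured on the 'Organization' settings screen
    # (hostnames taken from the hypothetical cluster above).
    INGRESS_ADDRESSES = ["n1.cluster.com", "n2.cluster.com", "n3.cluster.com"]
    PORT = 8443  # hypothetical ICE Server port; not a documented value

    def connect_with_failover(addresses, port, timeout=5):
        """Return a socket to the first reachable node, trying each address in order."""
        last_error = None
        for host in addresses:
            try:
                return socket.create_connection((host, port), timeout=timeout)
            except OSError as exc:
                last_error = exc  # node is down or unreachable; try the next one
        raise ConnectionError(f"no ingress address is reachable: {last_error}")

    conn = connect_with_failover(INGRESS_ADDRESSES, PORT)
    print("connected via", conn.getpeername()[0])
    conn.close()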
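For the third option, the following sketch shows the simplest possible external load balancer: a TCP proxy that accepts client connections and forwards each one to the next cluster node in round-robin order. In practice you would typically use a purpose-built load balancer (HAProxy, NGINX, a cloud provider's offering, or DNS-based global routing); this toy version, again with an assumed port number, only illustrates the idea.

    import asyncio
    import itertools

    # Cluster nodes from the example above; 8443 is an assumed port.
    BACKENDS = [("n1.cluster.com", 8443),
                ("n2.cluster.com", 8443),
                ("n3.cluster.com", 8443)]
    LISTEN_PORT = 8443
    backend_cycle = itertools.cycle(BACKENDS)

    async def pipe(reader, writer):
        # Copy bytes in one direction until the sender closes its side.
        try:
            while data := await reader.read(65536):
                writer.write(data)
                await writer.drain()
        finally:
            writer.close()

    async def handle_client(client_reader, client_writer):
        # Pick the next node in round-robin order and splice the two connections.
        host, port = next(backend_cycle)
        backend_reader, backend_writer = await asyncio.open_connection(host, port)
        await asyncio.gather(
            pipe(client_reader, backend_writer),
            pipe(backend_reader, client_writer),
        )

    async def main():
        server = await asyncio.start_server(handle_client, "0.0.0.0", LISTEN_PORT)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())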