Stateful Versus Stateless Selection of Edge or Cloud Servers Under Latency Constraints
Date
2022-06-14

Abstract
We consider a radio access network slice serving mobile users whose requests carry computing requirements. Service is virtualized over either a powerful but distant cloud infrastructure or an edge computing host. The latter provides less computing and storage capacity than the cloud, but can be reached with much lower delay. A tradeoff thus naturally arises between computing capacity and data transfer latency. We investigate the performance of this service model, discussing how service requests should be routed to edge or cloud servers. We study the performance of several classes of online algorithms that rely on different levels of information about the system state. Our investigation is based on analytical models, simulations in OMNeT++, and a prototype implementation over operational cellular networks. First, we observe that distributing the load of service requests over edge and cloud is in general beneficial for performance, and simple to implement with a stateless online server selection policy that can be easily configured for near-optimal performance. Second, we shed light on the limited improvements that stateful policies can offer, even though they base their decisions on knowledge of server congestion levels or round-trip latency conditions. Third, we show that stateful policies are dangerously prone to errors, which may make stateless policies preferable.