Efficiency of Distributed Selection of Edge or Cloud Servers Under Latency Constraints
Date
2023-06-13
Abstract
We consider a set of network nodes that generate latency-constrained service requests requiring the execution of computing tasks on servers located either in the cloud or at the network edge. We explore the efficiency of distributed server selection performed strategically by individual nodes. In an earlier analysis, we argued for a stateless centralized allocation based on a probabilistic selection between edge and cloud servers. In that proposal, the optimal share of edge and cloud tasks was computed from static network characteristics, with no knowledge of the actual network state. In this new study, we perform a game-theoretic analysis in which we compare the globally optimal allocation computed at a central level against a distributed server selection driven by the selfish objectives of individual nodes. The inefficiency of the selfish allocation is quantified by the price of anarchy, which is shown to be very small, thus justifying a distributed strategic implementation of stateless policies. This insight is valuable for designing server selection algorithms and quantitatively demonstrates the efficiency of distributed selfish approaches.
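For reference, the price of anarchy mentioned above is, in its standard game-theoretic formulation (the notation below is generic, not the paper's own), the ratio between the social cost of the worst Nash equilibrium and the cost of the centrally optimized allocation:

\[ \mathrm{PoA} \;=\; \frac{\max_{s \in \mathcal{NE}} C(s)}{\min_{s} C(s)} \;\ge\; 1, \]

where \(C(s)\) denotes the aggregate (social) cost of an allocation \(s\) and \(\mathcal{NE}\) is the set of equilibria of the selfish selection game; values close to one indicate that the distributed, selfish selection loses little with respect to the centralized optimum.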