Towards Native Explainable and Robust AI in 6G Networks
Date
2022-06-08

Abstract
6G networks are expected to face the daunting task of supporting a set of
extremely diverse services, each more demanding than those of previous
network generations (e.g., holographic communications, unmanned mobility,
etc.), while at the same time integrating non-terrestrial networks, incorporating new technologies, and supporting joint communication and sensing. The
resulting network architecture, component interactions, and system dynamics
are unprecedentedly complex, making human-only operation impossible, and
thus calling for AI-based automation and configuration support. For this to
happen, AI solutions need to be robust and interpretable, i.e., network engineers
should trust the way AI operates and understand the logic behind its decisions.
In this paper, we review the current state of tools and methods that can make
AI robust and explainable, shed light on challenges and open problems, and
indicate potential future research directions.