Sustainable Operation of 5G Virtualized RAN Computing Infrastructure

Files
APOSTOLAKIS_NIKOLAOS_PHD_THESIS_FINAL_v3.pdf (4.002Mb)
Identifiers
URI: https://hdl.handle.net/20.500.12761/1995
Author(s)
Apostolakis, Nikolaos
Supervisor(s)/Director(s)
Gramaglia, Marco; Banchs, Albert
Date
2025-10-31
Abstract
Virtualized Radio Access Networks (vRANs) are undergoing a significant paradigm shift, moving from specialized, vendor-specific hardware platforms to commercial off-the-shelf (COTS) servers for baseband processing and upper-layer 5G functions. This transition enables operators to avoid vendor lock-in and promotes interoperability across diverse software and hardware ecosystems. Moreover, shared COTS infrastructure allows for efficient multi-tenant resource usage, simplified orchestration, and ultimately reduced capital (CapEx) and operational (OpEx) expenditures. However, typical vRAN implementations, particularly the physical (PHY) layer running on COTS servers, struggle to meet the computational demands and strict timing requirements of 5G. While modern CPUs support both data- and pipeline-level parallelism, they are often unable to sustain the millisecond-scale latencies required for efficient 5G operation. To overcome these limitations, vRAN deployments increasingly rely on specialized hardware accelerators such as Graphics Processing Units (GPUs), Application-Specific Integrated Circuits (ASICs), and Field Programmable Gate Arrays (FPGAs), which are better equipped to meet these real-time constraints and deliver the necessary quality of service. Yet these solutions introduce their own challenges in terms of sustainability, flexibility, and manageability. GPUs, while easier to orchestrate in cloud-native environments, require substantial upfront investment and consume significantly more power than COTS servers. Although vendors aim to amortize GPU costs by colocating high-revenue workloads such as Artificial Intelligence (AI) inference and training, this strategy remains largely theoretical and unproven in practice. ASICs and FPGAs, on the other hand, are often the default choice for many vRAN vendors due to their lower energy consumption and cost-effectiveness; however, their lack of reprogrammability and limited support for multi-tenant sharing hinder efficient management and orchestration by end-to-end network controllers.

These limitations of external hardware in supporting the 5G vRAN vision highlight the need for financially sustainable solutions based on CPU-only COTS servers, resources that are inherently easier to share, manage, and maintain as mobile networks evolve. At the same time, there is a growing demand for new computing paradigms that can improve hardware resource utilization and reduce the energy footprint of telecom operators. This thesis contributes to both directions. First, it introduces the design and implementation of a CPU-aware Medium Access Control (MAC) scheduler that intelligently allocates computational resources to users, significantly enhancing the reliability of time-critical task execution. Second, it reformulates Forward Error Correction (FEC), an essential component of 5G and 6G baseband processing, as an optimization problem that can be efficiently solved on a quantum computer. This positions quantum hardware as a promising addition to the data-center resource pool, offering a potentially lower energy footprint than conventional accelerators.

The first part of this thesis introduces ATHENA, a Machine Learning (ML)-based radio resource controller that intelligently allocates uplink grants to users based on the CPU quota available to the virtualized Distributed Unit (vDU). The vDU typically operates alongside third-party applications on conventional shared COTS infrastructure, eliminating the need for external accelerator units. ATHENA is deployed as a real-time application (dApp) within the O-RAN Real-Time RAN Intelligent Controller (RIC) and leverages instantaneous CPU congestion metrics provided by the Service Management and Orchestration (SMO) framework. These metrics capture the effects of resource contention (commonly referred to as the noisy-neighbor effect) caused by non-RAN applications. By dynamically controlling radio resource allocation in response to CPU load, ATHENA improves the determinism of vDU processing timelines, enabling them to meet the strict latency requirements imposed by 5G technologies and use cases. ATHENA is built on a Reinforcement Learning (RL) framework and is platform-agnostic, as it does not assume any specific CPU architecture or configuration. Compared to baseline schedulers, it delivers higher reliability under both low and high levels of CPU contention from neighboring applications. The development and evaluation of ATHENA were based on the design and implementation of a fully cloud-native, end-to-end 5G deployment encompassing both RAN and core network components. This system is described in detail in this thesis and, to the best of the author's knowledge, was at the time of its implementation the first open-source initiative to provide such a complete system without requiring expensive RF front-end hardware. This approach democratizes access to state-of-the-art 5G deployments, supporting researchers and practitioners in developing algorithms and identifying challenges associated with CPU-only vDU architectures.

In a direction orthogonal to conventional processor-based approaches, the second part of this thesis introduces Qu4Fec, a novel Low-Density Parity-Check (LDPC) decoder implemented on quantum computers. Qu4Fec reformulates LDPC decoding as an optimization problem that can be natively solved on quantum hardware. In simulation, Qu4Fec outperforms existing state-of-the-art approaches by 1.82 dB and underscores the importance of carefully designing LDPC codes to preserve their error-correcting performance within the quantum paradigm. Moreover, Qu4Fec achieves decoding performance comparable to conventional LDPC decoding algorithms, suggesting the feasibility of offloading 5G FEC tasks to quantum computers, which, according to recent studies, may offer a lower energy footprint than conventional computing platforms, thereby contributing to the sustainability of mobile systems. However, experimental evaluations on actual quantum hardware revealed significant discrepancies compared to simulation results, primarily attributed to the high noise levels in current quantum systems as well as limitations introduced by scaling and quantization during quantum program compilation. Finally, this thesis offers a forward-looking perspective on the use of quantum machines for mobile network workloads and proposes hardware-level enhancements to future quantum architectures that could better support LDPC decoding tasks.
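The abstract describes ATHENA's CPU-aware scheduling only at a high level. As a rough illustration of the underlying idea, the Python sketch below trains a toy reinforcement-learning policy that picks an uplink grant size from a discretized CPU-headroom observation and is rewarded only when the simulated PHY processing finishes within the slot deadline. The action set, the latency model, and the tabular contextual-bandit learner are assumptions of this sketch, not the ATHENA design, which runs as an O-RAN dApp on live SMO congestion metrics.

# Toy sketch of CPU-aware uplink grant selection with reinforcement learning.
# Hypothetical model, NOT the ATHENA implementation: processing time grows
# with the granted PRBs and shrinks with the CPU headroom left to the vDU.
import random

PRB_ACTIONS = [20, 50, 100, 160, 273]   # candidate uplink grant sizes (PRBs)
CPU_STATES = 10                          # discretized CPU-headroom levels
SLOT_DEADLINE_US = 500.0                 # toy per-slot processing deadline

def simulate_slot(cpu_headroom, prbs):
    """Toy vDU model: processing time in microseconds for a given grant."""
    base_us = 2.0 * prbs                 # work scales with grant size
    noise = random.uniform(0.9, 1.3)     # noisy-neighbor jitter
    return base_us * noise / max(cpu_headroom, 0.05)

def reward(proc_us, prbs):
    # Throughput-shaped reward; deadline misses are heavily penalized.
    return prbs / 273.0 if proc_us <= SLOT_DEADLINE_US else -1.0

q = [[0.0] * len(PRB_ACTIONS) for _ in range(CPU_STATES)]
alpha, eps = 0.1, 0.1

for _ in range(50_000):
    headroom = random.random()                        # CPU share left to the vDU
    state = min(int(headroom * CPU_STATES), CPU_STATES - 1)
    if random.random() < eps:
        a = random.randrange(len(PRB_ACTIONS))        # explore
    else:
        a = max(range(len(PRB_ACTIONS)), key=lambda i: q[state][i])
    proc_us = simulate_slot(headroom, PRB_ACTIONS[a])
    q[state][a] += alpha * (reward(proc_us, PRB_ACTIONS[a]) - q[state][a])

for s in range(CPU_STATES):
    best = PRB_ACTIONS[max(range(len(PRB_ACTIONS)), key=lambda i: q[s][i])]
    print(f"CPU headroom {s / CPU_STATES:.1f}-{(s + 1) / CPU_STATES:.1f}: grant {best} PRBs")

Under this toy model the learned policy shrinks grants as CPU headroom drops, which is the qualitative behavior the abstract attributes to ATHENA.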
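Likewise, the Qu4Fec formulation is not spelled out in the abstract. The sketch below only illustrates the general idea of casting LDPC decoding as energy minimization, the kind of objective an annealing-style quantum optimizer can search natively. The tiny parity-check matrix, the penalty weight, and the classical brute-force search (standing in for the quantum solver) are assumptions of this illustration, not the Qu4Fec design.

# Toy illustration: LDPC decoding as energy minimization.
# A quantum optimizer would search this energy landscape natively; here a
# classical brute-force scan over a tiny code stands in for it.
import itertools
import numpy as np

# Hypothetical tiny parity-check matrix H (rows = checks, columns = code bits).
H = np.array([
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])

def energy(x, y, lam=1.0):
    """Count unsatisfied parity checks plus lam * Hamming distance to the received word y."""
    unsatisfied = int(np.sum((H @ x) % 2))
    distance = int(np.sum(x != y))
    return unsatisfied + lam * distance

sent = np.array([1, 0, 1, 1, 0, 1, 0])      # a codeword of H (H @ sent mod 2 == 0)
received = np.array([1, 1, 1, 1, 0, 1, 0])  # one bit flipped by the "channel"

best = min((np.array(bits) for bits in itertools.product([0, 1], repeat=H.shape[1])),
           key=lambda x: energy(x, received))
print("received:", received)
print("decoded :", best, "(matches sent:", bool((best == sent).all()), ")")

On quantum hardware the same objective would typically be reduced to a quadratic (QUBO/Ising) form, introducing auxiliary variables for the multi-bit parity terms; that mapping and compilation step is where, per the abstract, noise, scaling, and quantization currently erode the gains observed in simulation.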