dc.description.abstract | The evolution towards self-configuring communication networks necessitates anticipatory approaches to network management, aiming for zero-touch orchestration. Intent-Based Networking (IBN) represents a transformative perspective by automating the translation of user-defined intents into effective network operations. Rather than specifying how tasks should be executed, users only define desired outcomes through high-level human intents, such as secure data transfer between two locations, and the IBN system adapts itself accordingly. The system continuously monitors and adjusts settings to ensure these goals are met, simplifying network management, enhancing agility, and speeding up service deployment. By abstracting complex configurations, IBN offers a more intuitive, efficient, and business-aligned way to manage networks, leveraging automation, artificial intelligence, and machine learning to align network configurations with user-specified outcomes.
To achieve this, network orchestrators must rely on automated decision models that accurately realize intent-based objectives, especially in anticipatory networking, where predicting the relationship between management decisions and performance is often impossible. Traditional prediction models, restricted to generic and manually set objectives, require perfect knowledge of this relationship, which is not feasible for many tasks. For instance, while the resulting performance of a resource allocation decision can be measured a posteriori, determining the outcome in advance is extremely difficult. This highlights the need for advanced predictive models that autonomously understand and adapt to the interplay between predictions and the targeted network management objectives. This thesis introduces a model designed to bridge the gap between loss functions and performance metrics in regression tasks, representing a significant advancement in automated machine learning. Real-world examples underscore the model's effectiveness, aligning predictions with the detailed objectives of IBN and showcasing its adaptability and capability to meet the demands of anticipatory networking tasks.
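To make this gap concrete, the following minimal Python sketch contrasts a standard training loss (MSE) with a hypothetical a-posteriori operational cost for a capacity-allocation decision. The cost function, its penalty weights, and the traffic distribution are illustrative assumptions, not the actual MANO objectives studied in the thesis; the point is only that a predictor with low symmetric error can still incur a high operational cost when the deployed metric penalizes under- and over-provisioning unequally.

```python
import numpy as np

# Hypothetical a-posteriori cost of a capacity-allocation decision
# (illustrative assumption, not the thesis's actual objective).
# Under-provisioning violates the SLA and is penalized more heavily
# than over-provisioning, which merely wastes idle capacity.
def allocation_cost(allocated, demand, sla_penalty=10.0, waste_cost=1.0):
    under = np.maximum(demand - allocated, 0.0)  # unmet demand
    over = np.maximum(allocated - demand, 0.0)   # idle capacity
    return sla_penalty * under.sum() + waste_cost * over.sum()

rng = np.random.default_rng(0)
demand = rng.gamma(shape=2.0, scale=5.0, size=100)   # observed traffic
forecast = demand + rng.normal(0.0, 2.0, size=100)   # symmetric-error predictor

# MSE weighs both error directions identically, so minimizing it says
# little about the asymmetric cost actually paid after deployment.
mse = np.mean((forecast - demand) ** 2)
cost = allocation_cost(forecast, demand)
print(f"MSE = {mse:.2f}, a-posteriori cost = {cost:.2f}")
```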
Central to this is the goal of addressing the "loss-metric mismatch" problem in machine learning, where the loss function used for training deep neural networks (DNNs) does not align with the actual performance metric or, in the context of IBN, with the translated anticipatory Management and Network Orchestration (MANO) objective. The first part of the thesis delves into this issue, proposing a model that aligns training objectives with performance metrics and thereby enhances the practical application of DNNs in network management. However, in the rapidly evolving landscape of network management, another critical aspect is aligning disparate entities towards a common objective. Traditional methodologies, which prioritize individual optimization goals, often lead to competitive behaviors that degrade overall system performance. This is particularly problematic in complex scenarios requiring a collaborative and holistic approach. To address this, the second part of the thesis builds on the initial model, introducing an improved version that coordinates multiple concurrent predictions towards a common goal with minimal assumptions. This model preserves the solution to the loss-metric mismatch while extending it to multiple predictors, significantly widening the range of potential network management applications. Exhaustive testing in controlled environments and real-world applications demonstrates the superior performance of this approach, meeting the intricate demands of anticipatory IBN.
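One generic way to narrow such a mismatch, shown below as a sketch rather than as the thesis's specific model, is to train directly on a differentiable surrogate of the operational metric and to couple several predictors through a shared objective term. The two linear models, the shared budget value, and the penalty weights are all hypothetical choices made for illustration.

```python
import torch
import torch.nn as nn

# Differentiable surrogate of the asymmetric allocation cost sketched
# earlier, usable directly as a training loss. A generic technique for
# narrowing the loss-metric mismatch, not the thesis's proposed model.
def surrogate_cost(pred, demand, sla_penalty=10.0, waste_cost=1.0):
    under = torch.relu(demand - pred)
    over = torch.relu(pred - demand)
    return (sla_penalty * under + waste_cost * over).mean()

# Two concurrent predictors (e.g., two network slices) coordinated by a
# shared capacity budget, illustrating how multiple predictions can be
# aligned with one common goal. The budget value is purely illustrative.
model_a, model_b = nn.Linear(8, 1), nn.Linear(8, 1)
opt = torch.optim.Adam(
    list(model_a.parameters()) + list(model_b.parameters()), lr=1e-2
)

x_a, d_a = torch.randn(64, 8), torch.rand(64, 1) * 10  # synthetic data
x_b, d_b = torch.randn(64, 8), torch.rand(64, 1) * 10
budget = 15.0  # hypothetical total capacity shared by both predictors

for _ in range(200):
    opt.zero_grad()
    pred_a, pred_b = model_a(x_a), model_b(x_b)
    loss = surrogate_cost(pred_a, d_a) + surrogate_cost(pred_b, d_b)
    # Joint penalty when the combined allocation exceeds the shared budget,
    # discouraging the competitive behavior described above.
    loss = loss + torch.relu(pred_a + pred_b - budget).mean()
    loss.backward()
    opt.step()
```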
The final part of the thesis is dedicated to exploring the explainability of DNN models. In a landscape where ethical considerations and a deeper understanding of automated decisions are paramount, explainability is crucial for comprehending a model's behavior and for distinguishing normal functioning from potential malfunctions, whether caused by inherent flaws or by malicious data attacks. The ability to explain a model's decisions and processes is not just a technical requirement but also an ethical imperative, ensuring transparency and trust in automated systems. Special attention is given to the significance of explainability in spatio-temporal models used for detecting potential attacks. The thesis examines the explainability of the models proposed above, demonstrating that their architectures make them inherently more transparent than alternative techniques such as Reinforcement Learning (RL). This exploration is vital for providing insight into the models' decision-making processes, enhancing trustworthiness, and facilitating the identification and mitigation of vulnerabilities. By focusing on explainability and interpretability, the thesis aims to contribute to the development of more transparent, reliable, and ethically sound machine learning models, particularly in network management and security.
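As a point of comparison with architecture-level transparency, the brief sketch below shows gradient saliency, one standard post-hoc explainability technique for DNNs. The toy network and input shape are assumptions for illustration; the thesis's own models are argued to be transparent by construction rather than through this kind of post-hoc attribution.

```python
import torch
import torch.nn as nn

# Minimal gradient-saliency sketch: a generic post-hoc way to inspect
# which inputs drive a trained regressor's output. Shown only for
# contrast with models that are transparent by architecture.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
x = torch.randn(1, 8, requires_grad=True)  # e.g., one spatio-temporal sample
model(x).backward()                        # gradient of output w.r.t. input
saliency = x.grad.abs().squeeze()          # per-feature influence estimate
print(saliency)
```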
Overall, this thesis presents a comprehensive exploration of advanced network management strategies. It identifies and addresses critical challenges in aligning the predictive capabilities of machine learning models, particularly DNNs, with the nuanced objectives of IBN. The proposed models, which resolve the "loss-metric mismatch" and ensure unified goal achievement, mark significant advancements in automated network management. These models are not only theoretically relevant but also validated through extensive real-world applications, demonstrating their practical efficacy and adaptability in dynamic networking environments. Furthermore, the thesis emphasizes the importance of explainability in machine learning, highlighting its crucial role in ensuring the reliability, transparency, and ethical integrity of automated systems. By delving into the explainability of DNN models, the research contributes significantly to the development of more transparent and trustworthy machine-learning solutions, offering a robust framework for the future of network management. | es