Explainability Analysis: An In-Depth Comparison between Fuzzy Cognitive Maps and LAMDA
Date
2024-07-15

Abstract
It has become increasingly important for machine learning techniques to explain the results they generate, particularly in critical domains such as health and energy. For this reason, several explainability methods have been proposed in the literature, among them Local Interpretable Model-Agnostic Explanations (LIME) and feature importance, which have been applied to a wide range of problems. The objective of this work is to analyze the explainability capacity of the Learning Algorithm for Multivariate Data Analysis (LAMDA) and Fuzzy Cognitive Maps (FCM), techniques that have attracted great interest due to their interpretability, their handling of uncertainty during the inference process, and their ease of use, among other qualities. For this study, data from two critical domains were considered, health (COVID-19 and dengue datasets) and energy (an energy price dataset), for which prediction/classification models were built using the LAMDA and FCM techniques. Two explainability techniques were then applied to analyze the explanations provided by each model: one based on the LIME method and the other on feature importance, the latter adapted to our work through the permutation of values. Finally, this work proposes two new explainability methods, one based on causal inference and the other on the degrees of membership of the variables to the classes; the latter, in particular, enables an explainability analysis by class. The new methods reproduce the results of well-known explainability methods such as LIME and feature importance, at a lower execution cost, while additionally providing a per-class explainability analysis. This work opens the door to new research on class-level explainability. Furthermore, we observe that machine learning approaches based on causal or fuzzy relationships are largely self-explanatory, but specific explainability methods such as those proposed in this work allow us to study particular aspects, such as highly important variables, that general explainability methods like LIME and feature importance do not.
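As an illustrative aside, the permutation-based variant of feature importance mentioned in the abstract can be sketched as follows. This is a minimal, generic sketch, not the authors' adapted implementation: the function name, the assumption that the model exposes a predict() method, and the metric(y_true, y_pred) signature are all hypothetical choices made for the example.

```python
import numpy as np

def permutation_feature_importance(model, X, y, metric, n_repeats=10, seed=0):
    """Score drop after shuffling each feature column, averaged over
    n_repeats permutations; larger drops suggest more important features."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))      # score with intact features
    drops = np.zeros((X.shape[1], n_repeats))
    for j in range(X.shape[1]):                 # permute one feature at a time
        for r in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            drops[j, r] = baseline - metric(y, model.predict(X_perm))
    return drops.mean(axis=1)

# Hypothetical usage with any fitted classifier exposing .predict(), e.g.:
# from sklearn.metrics import accuracy_score
# importances = permutation_feature_importance(clf, X_test, y_test, accuracy_score)
```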