Show simple item record

dc.contributor.author: Benito Gutiérrez, Diego Javier
dc.contributor.author: Quintero, Carlos
dc.contributor.author: Aguilar, Jose
dc.contributor.author: Ramirez, Juan Marcos
dc.contributor.author: Fernández Anta, Antonio
dc.date.accessioned: 2024-07-17T10:38:45Z
dc.date.available: 2024-07-17T10:38:45Z
dc.date.issued: 2024-07-15
dc.identifier.issn: 1568-4946
dc.identifier.uri: https://hdl.handle.net/20.500.12761/1837
dc.description.abstract: It has become increasingly important for machine learning techniques to explain the results they generate, especially in critical domains such as health and energy. For this reason, several explainability methods have been proposed in the literature, among them Local Interpretable Model-Agnostic Explanations (LIME) and Feature Importance, which have been applied to a wide range of problems. The objective of this work is to analyze the explainability of the Learning Algorithm for Multivariate Data Analysis (LAMDA) and Fuzzy Cognitive Maps (FCM), two techniques that have attracted interest for their interpretability, their handling of uncertainty during inference, and their ease of use, among other qualities. Data from two critical domains are considered, health (COVID-19 and Dengue datasets) and energy (an energy-price dataset), for which prediction/classification models are built with LAMDA and FCM. Two explainability techniques are then used to analyze the explanations each model provides: one based on LIME and the other on feature importance, the latter adapted to this work by relying on the permutation of values. Finally, the work proposes two new explainability methods, one based on causal inference and the other on the degrees of membership of the variables to the classes; the latter, in particular, enables an explainability analysis by class. The new methods reproduce the results of well-known methods such as LIME and feature importance at a lower execution cost, while additionally providing a per-class analysis. This work opens the door to further research on class explainability. Moreover, machine learning approaches based on causal or fuzzy relationships are largely self-explanatory, but specific explainability methods such as those proposed here make it possible to study particular aspects, such as highly important variables, that general methods like LIME and feature importance do not.
dc.description.sponsorship: SocialProbing
dc.language.iso: eng
dc.publisher: Elsevier
dc.title: Explainability Analysis: An In-Depth Comparison between Fuzzy Cognitive Maps and LAMDA
dc.type: journal article
dc.journal.title: Applied Soft Computing
dc.type.hasVersion: AO
dc.rights.accessRights: embargoed access
dc.volume.number: 164
dc.identifier.doi: 10.1016/j.asoc.2024.111940
dc.relation.projectID: COMODIN-CM
dc.relation.projectName: PredCov-CM
dc.subject.keyword: Explainability analysis
dc.subject.keyword: LAMDA
dc.subject.keyword: Fuzzy Cognitive Maps
dc.subject.keyword: Classification models
dc.subject.keyword: Supervised learning
dc.description.refereed: TRUE
dc.description.status: pub
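
The permutation-based feature importance mentioned in the abstract is a standard, model-agnostic technique. Below is a minimal sketch of the generic idea in Python (not the paper's specific adaptation), assuming a classifier with a scikit-learn-style predict method, a labeled validation set (X, y) as NumPy arrays, and a score function such as accuracy; the function and parameter names (permutation_importance, n_repeats) are illustrative.

import numpy as np

def permutation_importance(model, X, y, score, n_repeats=10, seed=None):
    # Baseline score of the already-fitted model on intact validation data.
    rng = np.random.default_rng(seed)
    baseline = score(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            # Shuffle one column to break its association with the target.
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            drops.append(baseline - score(y, model.predict(X_perm)))
        # Importance = average score drop when this feature is permuted.
        importances[j] = np.mean(drops)
    return importances

For example, permutation_importance(model, X_val, y_val, accuracy_score) returns one value per feature; a larger average score drop indicates a feature the model relies on more heavily.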

