Study of Explainability Analysis Methods for the LAMDA Family Algorithms in Classification and Clustering Tasks
Date
2024-09-01
Abstract
Explainability analysis is a highly relevant topic today, driven by the interest in making machine learning models interpretable. In this work, we carry out an in-depth study of explainability analysis for the algorithms of the LAMDA (Learning Algorithm for Multivariate Data Analysis) family that have been used in the context of supervised and unsupervised learning: the LAMDA-HAD algorithm for classification and the LAMDA-RD algorithm for clustering. For the explainability analysis, two classic methods from the explainability area were considered, LIME (Local Interpretable Model-agnostic Explanations) and Feature Importance, together with a third method that we developed specifically for the LAMDA family. In particular, our explainability method for LAMDA measures the importance of each feature both globally and for each cluster. Overall, the results obtained in both tasks (classification and clustering) are satisfactory, especially because our explainability method for LAMDA yields explanations similar to those of the traditional methods while additionally providing them per cluster.
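To make the setup concrete, the following is a minimal sketch (in Python) of how the two classic methods named in the abstract could be applied to a LAMDA-style classifier. The function lamda_predict_proba is a hypothetical stand-in for the LAMDA-HAD model, which is not included in this abstract; only the lime package's public API is used as-is, and the permutation importance is written out by hand.

    # A minimal sketch, assuming a hypothetical LAMDA-HAD classifier exposed
    # through lamda_predict_proba (not part of the paper's public material).
    import numpy as np
    from lime.lime_tabular import LimeTabularExplainer

    def lamda_predict_proba(X):
        # Hypothetical stand-in: a real LAMDA-HAD model would compute
        # adequacy/membership degrees per class; here we fake probabilities.
        scores = np.column_stack([X.mean(axis=1), 1 - X.mean(axis=1)])
        return scores / scores.sum(axis=1, keepdims=True)

    X_train = np.random.rand(200, 4)   # toy data in [0, 1] (LAMDA assumes normalized inputs)
    y_train = (X_train.mean(axis=1) > 0.5).astype(int)  # toy labels consistent with the fake model
    feature_names = [f"x{i}" for i in range(4)]

    # --- LIME: local explanation of a single prediction ---
    explainer = LimeTabularExplainer(
        X_train, mode="classification", feature_names=feature_names
    )
    exp = explainer.explain_instance(X_train[0], lamda_predict_proba, num_features=4)
    print(exp.as_list())               # (feature, weight) pairs for this one instance

    # --- Permutation feature importance: global explanation ---
    def permutation_importance(predict_proba, X, y, n_repeats=10, seed=0):
        rng = np.random.default_rng(seed)
        base_acc = np.mean(predict_proba(X).argmax(axis=1) == y)
        importances = []
        for j in range(X.shape[1]):
            drops = []
            for _ in range(n_repeats):
                Xp = X.copy()
                rng.shuffle(Xp[:, j])  # break the feature/label relationship
                drops.append(base_acc - np.mean(predict_proba(Xp).argmax(axis=1) == y))
            importances.append(float(np.mean(drops)))
        return importances

    print(dict(zip(feature_names, permutation_importance(lamda_predict_proba, X_train, y_train))))

Note the contrast this sketch illustrates: LIME explains one prediction at a time, and permutation importance gives a single global ranking, whereas the LAMDA-specific method described in the abstract additionally reports feature importance per cluster.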