Explainability Analysis in Predictive Models Based on Machine Learning Techniques on the Risk of Hospital Readmissions
Purpose: Analyzing the risk of re-hospitalization of patients with chronic diseases allows healthcare institutions to deliver accurate preventive care that reduces hospital admissions and supports the planning of medical spaces and resources. Thus, the research question is: is it possible to use artificial intelligence to study the risk of re-hospitalization of patients? Methods: This article presents several models to predict whether a patient will be hospitalized again after discharge. In addition, an explainability analysis is carried out on the predictive models to determine the degree of importance of the predictors/descriptors. In particular, the article makes a comparative analysis of different explainability techniques in the study context. Results: The best model is a classifier based on decision trees, with an F1-score of 83%, followed by LGBM with an F1-score of 67%. For these models, Shapley values were calculated as the explainability method. The quality of the explanations of the predictive models was assessed with the stability metric. According to this metric, the explanations of the decision-tree model show more variability: only 4 attributes are very stable (21%) and 1 attribute is unstable. In the LGBM-based model, 12 attributes are stable (63%) and none are unstable. Thus, in terms of explainability, the LGBM-based model is better. Conclusions: According to the explanations generated by the best predictive models, the LGBM-based model presents more stable variables and therefore generates greater confidence in the explanations it provides.
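To illustrate the explainability method named in the abstract, the sketch below computes exact Shapley values for a toy predictor by enumerating feature coalitions. This is a minimal, self-contained illustration of the general Shapley-value definition, not the authors' pipeline: the `risk` scoring function, the feature values, and the all-zeros baseline are hypothetical, and in practice libraries such as SHAP approximate these values efficiently for tree ensembles like the decision-tree and LGBM models studied here.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for predict(x), measured against a baseline input.
    Features outside a coalition S are replaced by their baseline value."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (predict(with_i) - predict(without_i))
    return phi

# Hypothetical linear "readmission risk" score over two features.
risk = lambda v: 0.5 * v[0] + 0.2 * v[1]
phi = shapley_values(risk, x=[4.0, 10.0], baseline=[0.0, 0.0])
# For a linear model, phi_i = coefficient_i * (x_i - baseline_i),
# and the attributions sum to risk(x) - risk(baseline).
```

The exact computation is exponential in the number of features, which is why tree-specific algorithms (e.g., TreeSHAP) are used for real models such as the ones compared in this study.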