Explainability Analysis in Predictive Models Based on Machine Learning Techniques on the Risk of Hospital Readmissions

Files
VersionEnglish.pdf (982.2 KB)
Identifiers
URI: https://hdl.handle.net/20.500.12761/1768
ISSN: 2190-7196
DOI: 10.1007/s12553-023-00794-8
Author(s)
Lopera, Juan; Aguilar, Jose
Date
2023-12-01
Abstract
Purpose: Analyzing the risk of re-hospitalization of patients with chronic diseases allows healthcare institutions to deliver accurate preventive care that reduces hospital admissions, and to plan medical spaces and resources. Thus, the research question is: Is it possible to use artificial intelligence to study the risk of re-hospitalization of patients? Methods: This article presents several models to predict when a patient may be hospitalized again after discharge. In addition, an explainability analysis is carried out on the predictive models to determine the degree of importance of the predictors/descriptors. In particular, the article makes a comparative analysis of different explainability techniques in the study context. Results: The best model is a classifier based on decision trees, with an F1-score of 83%, followed by LGBM with an F1-score of 67%. For these models, Shapley values were calculated as an explainability method, and the stability metric was used to assess the quality of the explanations. According to this metric, the explanations of the decision-tree model show more variability: only 4 attributes are very stable (21%) and 1 attribute is unstable. In the LGBM-based model, 12 attributes are stable (63%) and none are unstable. Thus, in terms of explainability, the LGBM-based model is better. Conclusions: According to the explanations generated by the best predictive models, the LGBM-based model presents more stable variables and therefore generates greater confidence in the explanations it provides.
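
The techniques the abstract names (an LGBM classifier, Shapley values, and a stability check on the resulting explanations) can be illustrated with a short sketch. The following Python example is not the authors' code: the synthetic data, the 19-attribute layout, and the bootstrap rank-spread stability proxy are assumptions made for illustration, and the paper's exact stability metric may differ.

    # Hypothetical sketch: train an LGBM readmission classifier, explain it
    # with Shapley values, and score each attribute's explanation stability.
    import numpy as np
    import lightgbm as lgb
    import shap
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import f1_score

    rng = np.random.default_rng(0)

    # Synthetic patient descriptors; 19 attributes to mirror the attribute
    # counts in the abstract. y = 1 means the patient was readmitted.
    X = rng.normal(size=(1000, 19))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0
    )

    model = lgb.LGBMClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    print("F1-score:", f1_score(y_test, model.predict(X_test)))

    # Shapley values via TreeExplainer (exact for tree ensembles). Depending
    # on the shap version, binary classifiers may return per-class outputs.
    sv = shap.TreeExplainer(model).shap_values(X_test)
    sv = sv[1] if isinstance(sv, list) else sv
    sv = sv[..., 1] if sv.ndim == 3 else sv  # newer shap: (n, features, classes)

    # Stability proxy (an assumption, not the paper's exact metric):
    # recompute each attribute's mean-|SHAP| rank over bootstrap resamples
    # of the test set; a small rank spread means a stable explanation.
    n_boot, n = 30, sv.shape[0]
    ranks = np.empty((n_boot, sv.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)
        importance = np.abs(sv[idx]).mean(axis=0)
        ranks[b] = np.argsort(np.argsort(-importance))  # rank 0 = most important
    rank_spread = ranks.std(axis=0)
    print("stable attributes (rank std < 1):", int((rank_spread < 1.0).sum()))

Under this proxy, an attribute whose importance rank barely moves across resamples counts as stable, which matches the abstract's observation that the LGBM explanations vary less than those of the decision tree.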