The primary objective of this study is to externally validate EPIC's Readmission Risk model and to compare it with the LACE+ index and the SQLape readmission model. As a secondary objective, EPIC's Readmission Risk model will be adjusted based on the validation sample, and finally its performance will be compared with that of machine learning algorithms.
Introduction: Readmissions after an acute care hospitalization are relatively common, costly to the health care system, and associated with a significant burden for patients. As one way to reduce costs and simultaneously improve quality of care, hospital readmissions are receiving increasing interest from policy makers. Only relatively recently have strategies been developed with the specific aim of reducing unplanned readmissions by applying prediction models. EPIC's Readmission Risk model, developed in 2015 for the U.S. acute care hospital setting, promises superior calibration and discriminatory ability. However, its routine application in the Swiss hospital setting requires external validation first. Therefore, the primary objective of this study is to externally validate EPIC's Readmission Risk model and to compare it with the LACE+ index (Length of stay, Acuity, Comorbidities, Emergency Room visits index) and the SQLape (Striving for Quality Level and analysing of patient expenditures) readmission model. Methods: To this end, a monocentric, retrospective, diagnostic cohort study will be conducted. The study will include all inpatients hospitalized between the 1st of January 2018 and the 31st of January 2019 at the Lucerne Cantonal Hospital in Switzerland. Cases will be inpatients who experienced an unplanned (all-cause) readmission within 18 or 30 days after the index discharge. The control group will consist of individuals who had no unscheduled readmission. For external validation, discrimination of the scores under investigation will be assessed by calculating the area under the receiver operating characteristic curve (AUC). Calibration will be assessed with the Hosmer-Lemeshow goodness-of-fit test and illustrated graphically by plotting predicted probabilities by decile against the observed outcomes. Other performance measures to be estimated include the Brier score, the Net Reclassification Improvement (NRI), and the Net Benefit (NB).
All patient data will be retrieved from clinical data warehouses.
Study Type
OBSERVATIONAL
Enrollment
23,116
A logistic regression model that predicts the risk of all-cause unplanned readmission, developed by the privately held healthcare software company EPIC.
The LACE+ score is a point score that can be used to predict the risk of post-discharge death or urgent readmission. It was developed based on administrative data in Ontario, Canada.
The SQLape readmission risk model (Striving for Quality Level and analysing of patient expenditures) is a validated, computerized algorithm developed in 2002 to identify potentially avoidable readmissions.
Cantonal Hospital of Lucerne
Lucerne, Canton Lucerne, Switzerland
Discrimination at 18 days
For discrimination of the scores under investigation, the area under the receiver operating characteristic curve (AUC) will be calculated.
Time frame: 18 days after index discharge date
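As a sketch of how this discrimination analysis could be implemented: the AUC equals the probability that a randomly chosen readmitted patient receives a higher predicted risk than a randomly chosen non-readmitted patient (the Mann-Whitney U statistic). The data below are hypothetical, purely for illustration:

```python
def auc(y_true, y_score):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (readmitted, non-readmitted) pairs in which the
    readmitted patient has the higher score (ties count one half)."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical readmission outcomes (1 = readmitted) and model risk scores:
print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

An AUC of 0.5 corresponds to chance-level discrimination, 1.0 to perfect separation of readmitted from non-readmitted patients.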
Discrimination at 30 days
For discrimination of the scores under investigation, the area under the receiver operating characteristic curve (AUC) will be calculated.
Time frame: 30 days after index discharge date
Calibration at 18 days
Calibration will be assessed with the Hosmer-Lemeshow goodness-of-fit test and illustrated graphically by plotting predicted probabilities by decile against the observed outcomes.
Time frame: 18 days after index discharge date
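The decile-based calibration plot can be sketched as follows; the grouping into ten equal-sized risk groups ordered by predicted probability is an assumption for illustration, and the inputs are hypothetical:

```python
import numpy as np

def calibration_by_decile(y_true, p_pred, n_bins=10):
    """Return (mean predicted probability, observed event rate) per risk
    decile, i.e. the point pairs that would appear in a calibration plot."""
    order = np.argsort(p_pred)
    y = np.asarray(y_true, dtype=float)[order]
    p = np.asarray(p_pred, dtype=float)[order]
    return [(p[idx].mean(), y[idx].mean())
            for idx in np.array_split(np.arange(len(p)), n_bins)]
```

A well-calibrated model yields points close to the diagonal, i.e. predicted risk approximately equal to the observed readmission rate in every decile.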
Calibration at 30 days
Calibration will be assessed with the Hosmer-Lemeshow goodness-of-fit test and illustrated graphically by plotting predicted probabilities by decile against the observed outcomes.
Time frame: 30 days after index discharge date
Overall Performance at 18 days
Brier Score (The Brier score is a quadratic scoring rule, computed as the mean squared difference between the actual binary outcomes Y and the predicted probabilities p. It ranges from 0 for a perfect model to 0.25 for a non-informative model with a 50% incidence of the outcome.)
Time frame: 18 days after index discharge date
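The Brier score definition above translates directly into code; a minimal sketch with hypothetical outcomes and predictions:

```python
def brier_score(y_true, p_pred):
    """Brier score: mean squared difference between the predicted
    probability and the observed binary outcome (0 = perfect model)."""
    return sum((p - y) ** 2 for y, p in zip(y_true, p_pred)) / len(y_true)

# A non-informative model predicting 0.5 for everyone, with 50% incidence:
print(brier_score([0, 1, 0, 1], [0.5, 0.5, 0.5, 0.5]))  # 0.25
```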
Overall Performance at 30 days
Brier Score (The Brier score is a quadratic scoring rule, computed as the mean squared difference between the actual binary outcomes Y and the predicted probabilities p. It ranges from 0 for a perfect model to 0.25 for a non-informative model with a 50% incidence of the outcome.)
Time frame: 30 days after index discharge date
Clinical usefulness (NRI) at 18 days
Net Reclassification Improvement (NRI): The NRI sums the improvement in sensitivity and the improvement in specificity. It ranges from 0 for no improvement to 1 for perfect improvement.
Time frame: 18 days after index discharge date
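A sketch of the two-category NRI described above, computed at a single risk cutoff; the cutoff value and the patient data are hypothetical, for illustration only:

```python
def nri(y_true, risk_old, risk_new, cutoff):
    """Two-category Net Reclassification Improvement: the change in
    sensitivity plus the change in specificity when classifying patients
    as high risk at the given cutoff under the old vs. the new model."""
    def sens_spec(risk):
        tp = sum(r >= cutoff and y == 1 for y, r in zip(y_true, risk))
        tn = sum(r < cutoff and y == 0 for y, r in zip(y_true, risk))
        events = sum(y_true)
        return tp / events, tn / (len(y_true) - events)

    se_old, sp_old = sens_spec(risk_old)
    se_new, sp_new = sens_spec(risk_new)
    return (se_new - se_old) + (sp_new - sp_old)
```

For example, a new model that reclassifies one previously missed readmission as high risk and one false alarm as low risk improves both sensitivity and specificity, and both gains are summed.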
Clinical usefulness (NRI) at 30 days
Net Reclassification Improvement (NRI): The NRI sums the improvement in sensitivity and the improvement in specificity. It ranges from 0 for no improvement to 1 for perfect improvement.
Time frame: 30 days after index discharge date
Clinical usefulness (NB) at 18 days
Net Benefit (NB): NB = (TP - w × FP) / N, where TP is the number of true-positive decisions, FP the number of false-positive decisions, N the total number of patients, and w a weight equal to the odds of the cut-off probability (pt / (1 - pt)), i.e., the ratio of harm to benefit.
Time frame: 18 days after index discharge date
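The Net Benefit formula above can be written out directly; a minimal sketch, with the decision threshold pt and the patient data as hypothetical inputs:

```python
def net_benefit(y_true, p_pred, pt):
    """Net benefit at threshold pt: NB = (TP - w * FP) / N, where
    w = pt / (1 - pt) is the odds of the cutoff (harm-to-benefit ratio)."""
    tp = sum(p >= pt and y == 1 for y, p in zip(y_true, p_pred))
    fp = sum(p >= pt and y == 0 for y, p in zip(y_true, p_pred))
    w = pt / (1 - pt)
    return (tp - w * fp) / len(y_true)
```

At pt = 0.5 the weight w equals 1, so each false positive cancels one true positive; lower thresholds penalize false positives less, reflecting a lower assumed harm of unnecessary intervention.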
Clinical usefulness (NB) at 30 days
Net Benefit (NB): NB = (TP - w × FP) / N, where TP is the number of true-positive decisions, FP the number of false-positive decisions, N the total number of patients, and w a weight equal to the odds of the cut-off probability (pt / (1 - pt)), i.e., the ratio of harm to benefit.
Time frame: 30 days after index discharge date