In 2019, Google's AI subsidiary DeepMind used a large dataset of patient records from the Veterans Health Administration (VA) to develop a predictive model for acute kidney injury (AKI), a potentially fatal condition whose prognosis improves the earlier treatment is administered. The DeepMind model purported to predict AKI 48 hours in advance, giving clinicians ample lead time to intervene and administer treatment.
The study team reviewed electronic health record data over a five-year period from more than 700,000 individuals.
“This is a phenomenal model because it can predict AKI up to 48 hours in advance, in a continuous manner, and has the best model performance compared to all previously published models,” said Jie Cao, a Ph.D. student in the Department of Computational Medicine and Bioinformatics. Cao is a researcher in the ML4LHS Lab, run by Karandeep Singh, M.D., M.M.Sc., assistant professor in the Department of Learning Health Sciences, and in the Biomedical & Clinical Informatics Lab, run by Kayvan Najarian, Ph.D., professor in the Department of Computational Medicine and Bioinformatics. All are at the University of Michigan. However, “concerns were raised about the generalizability of a model like this given the predominantly male [VA] population that it was trained on,” Cao added.
This led Cao and her colleagues to evaluate the model’s generalizability in a non-VA, more sex-balanced population. Their findings have been published in Nature Machine Intelligence.
The researchers reconstructed aspects of the DeepMind model, then trained and validated this model on two cohorts: one comprising 278,813 VA hospitalizations (from 118 VA hospitals) and the other comprising 165,359 University of Michigan (U-M) hospitalizations. Unsurprisingly, given that the original model was developed on a 94% male population, the reconstructed model performed worse for female patients in both cohorts.
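The paper's evaluation code isn't reproduced here, but the core check, scoring the model separately within each sex in each cohort, can be sketched in a few lines of Python. The `model`, `X`, `y`, and `sex` names below are hypothetical placeholders for a fitted classifier, feature matrix, AKI labels, and a per-patient sex array, not the study's actual objects.

```python
# Minimal sketch (not the authors' code): quantify a sex-based performance gap
# by computing AUROC overall and within each sex subgroup.
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_by_sex(model, X, y, sex):
    """Return AUROC overall and within each sex subgroup."""
    scores = model.predict_proba(X)[:, 1]            # predicted AKI risk
    results = {"overall": roc_auc_score(y, scores)}
    for group in np.unique(sex):
        mask = sex == group
        results[group] = roc_auc_score(y[mask], scores[mask])
    return results
```

Running the same check side by side on the VA and U-M cohorts is what exposes the female-patient gap the study describes.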
To mitigate the model’s sex-based discrepancies, researchers updated the model with data from U-M’s more sex-balanced patient population, which extended the original model from 160 decision trees to 170. This small extension improved performance in the U-M cohort both overall and between sexes.
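The article describes this extension only at a high level. As one illustration of how a tree ensemble can be grown on data from a new health system without discarding the original trees, xgboost allows training to continue from an existing booster. The `dtrain_va` and `dtrain_um` objects below are hypothetical DMatrix inputs, and the hyperparameters are placeholders rather than the study's settings.

```python
# Illustrative sketch only; the study's training pipeline is not shown in the
# article. It demonstrates one common way to "extend" a gradient-boosted tree
# model: keep the original trees and fit additional boosting rounds on data
# from the new health system.
import xgboost as xgb

params = {"objective": "binary:logistic", "eval_metric": "auc", "max_depth": 4}

# Backbone: 160 boosting rounds fit to the original (VA-like) cohort.
backbone = xgb.train(params, dtrain_va, num_boost_round=160)

# Extension: 10 more trees fit to the new (U-M-like) cohort, leaving the
# original 160 trees untouched -- 160 trees grow to 170.
extended = xgb.train(params, dtrain_um, num_boost_round=10, xgb_model=backbone)
```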
“The extended model was successful at U-M. It used the VA model as the backbone, added information from U-M, and the final product worked well for the U-M patient population,” said Cao, lead author of the paper. “When researchers would like to benefit from the rich information contained in the original model and do not want to build a new local model from scratch, our study is a good example of how ‘fine-tuning’ could work when the original model was not trained on a diverse population,” she explained.
When the extended model was applied to VA patients, however, the discrepancies in model performance between males and females actually worsened.
“This finding surprised us to some extent,” said Cao, “but it is also reasonable and helps us understand the problem better. Difference in patient characteristics is one common factor contributing to model performance discrepancy. By matching the female patients at two different health systems and still finding discrepant model performance, we actually show that difference in patient characteristics wasn’t the only reason contributing to model performance discrepancy.”
The extended model's lower performance in female VA patients, then, was not driven by patient characteristics or small sample size, but was likely attributable to factors such as differences in practice patterns between male and female patients at the VA.
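One way to make that comparison concrete, though not necessarily the authors' procedure, is to match female patients across the two systems on a handful of characteristics and then score the same model in each matched sample; if performance still diverges, characteristics alone cannot explain the gap. A minimal sketch, with hypothetical numpy-array inputs:

```python
# Hedged sketch (not the authors' matching procedure): pair patients from one
# system with their nearest counterparts in another on a few characteristics,
# then score the same model in both matched samples.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.metrics import roc_auc_score

def matched_auroc(model, X_a, y_a, X_b, y_b, chars_a, chars_b):
    """AUROC in cohort A and in the characteristic-matched subset of cohort B."""
    nn = NearestNeighbors(n_neighbors=1).fit(chars_b)
    _, idx = nn.kneighbors(chars_a)    # one cohort-B match per cohort-A patient
    idx = idx.ravel()
    auc_a = roc_auc_score(y_a, model.predict_proba(X_a)[:, 1])
    auc_b = roc_auc_score(y_b[idx], model.predict_proba(X_b[idx])[:, 1])
    return auc_a, auc_b
```

If the two matched samples look alike but the model still scores differently, the remaining gap must come from something other than patient characteristics, which is the logic behind the practice-pattern explanation above.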
Overall, the study demonstrated the value of updating existing models with data from the population to which the model will be applied.
“If a predictive model is to be taken out of one healthcare system and applied to another, the population the model was trained on is often different from the population it is going to be applied to. Even if the training population is diverse, we could observe a drop in model performance if nothing is done,” said Cao. “Our ‘extended model’ approach is to provide a solution to partially address this issue.”
To achieve peak performance, a model would, in theory, be applied only to a population matching the population it was trained on. But this is often not the case in practice.
“In the real world,” said Cao, this approach is “infeasible due to limited resources, time, expertise, etc. Our ‘extended model’ strategy is a workaround in these scenarios.”
This research is significant, said Cao, because it shows “the complexity of discrepancies in model performance in subgroups that cannot be explained simply on the basis of sample size.” It also offers “a potential strategy to mitigate the generalizability issue,” she said, and, finally, it demonstrates “the importance of reproducing and evaluating artificial intelligence studies.”