A crash course in AI

A broad overview of Michigan Medicine’s approach to artificial intelligence


[Illustration by Andrea Levy: a doctor crossing a bridge to a patient; the bridge rests on two giant letters spelling AI.]

By Katie Whitney and Lauren Talley, with additional reporting by Kelly Malcom and Elisabeth Paymal. Illustrations by Andrea Levy.

Is it the end of civilization as we know it? Or is it the dawn of utopia? With the rollout last year of OpenAI’s massively popular generative artificial intelligence tool ChatGPT, fears and hopes have abounded for the future of our species.

After speaking with more than a dozen experts at Michigan Medicine about AI’s potential to improve medicine, we learned that clinicians and researchers here come down on the side of hope. At the same time, they express a healthy dose of caution and a deep desire to make sure that whatever we do with this new power is done safely and ethically.

In this primer on Michigan Medicine’s approach to AI, we hope you’ll get a clear picture of what our major priorities are and a taste of the exciting possibilities for using AI to improve patient care, research, and education.

Jump to:
Glossary of AI terms
What is our highest priority?
How do we make AI safe and ethical?
How can AI alleviate burnout?
What cool projects are we doing with AI?
What is AI not good at? 
AI incubators at U-M

 
Glossary of AI terms

Artificial intelligence (AI). Technology that mimics human intelligence, enabling computers to think and learn from new inputs and perform human-like tasks.
Artificial general intelligence (AGI). AI with a level of intelligence equal to that of a human. Although not yet realized, some researchers believe it is achievable, though predictions of when vary widely.
Chatbot software. AI designed to simulate conversation through text or voice interactions.
Generative AI. A form of machine learning that can produce text, video, images, and other types of content. ChatGPT, DALL-E, Bing Chat, and Bard are examples of generative AI applications that produce text or images based on user-given prompts or dialogue.
Large language model (LLM). A deep learning algorithm that can perform a variety of natural language processing (NLP) tasks. LLMs use transformer models and are trained using massive datasets — hence, large. This enables them to recognize, translate, predict, or generate text or other content.
Machine learning (ML). A branch of AI and computer science that uses data and algorithms to imitate the way humans learn, gradually improving in accuracy.
Natural language processing (NLP). A field of AI focused on understanding and processing language. LLMs are one type of NLP model.
Predictive AI technology. AI that collects and analyzes existing data to predict future occurrences.

 

[Illustration: giant block letters spelling AI, topped by a stethoscope and a microscope, covered with computer parts, and casting a long shadow.]
What is our highest priority for the use of AI right now?

Several experts we spoke with believe the new generative AI tools are ready to realize one of the original promises of electronic health records (EHRs): that they would make everyone’s lives easier. EHRs have improved medicine in many ways, but they have also added immense administrative burden for health care providers. Now, health systems are very close to being able to use large language models (LLMs) to record interactions between patients and providers and generate accurate notes for a patient’s EHR. Generative AI tools could also read patient emails, compare them with the patient’s medical history, and create a draft for health care providers to review and incorporate into their responses.
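To make that concrete, here is a minimal sketch in Python of how an LLM might be asked to draft a reply to a patient portal message for a clinician to review. It is an illustration only, not Michigan Medicine’s implementation: the model name, prompts, and function are placeholders, and a real deployment would run inside a secure, HIPAA-compliant platform.

from openai import OpenAI  # assumes the OpenAI Python SDK (openai>=1.0)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_portal_reply(patient_message: str, history_summary: str) -> str:
    """Draft a reply for clinician review; nothing is sent automatically."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You draft replies to patient portal messages for a "
                        "clinician to review. Be accurate and plain-spoken, "
                        "and flag anything that sounds urgent."},
            {"role": "user",
             "content": f"Relevant history:\n{history_summary}\n\n"
                        f"Patient message:\n{patient_message}"},
        ],
    )
    return response.choices[0].message.content

The essential design choice is the human in the loop: the model produces only a draft, and nothing reaches the patient without a provider’s review.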

“The opportunity to augment patient care is here and now. This is really low-hanging fruit,” says Gil Omenn, M.D., Ph.D., the Harold T. Shapiro Distinguished University Professor of Medicine, as well as professor of computational medicine and bioinformatics, of internal medicine, and of human genetics at the Medical School, and professor of environmental health at the School of Public Health. “It’s very responsive to some of the biggest complaints of patients, doctors, and nurses.” 

Leaders at Michigan Medicine are also tempering their hopes with caution. 

“AI is very eager to give you an answer, and it will make one up if it can’t find one,” says Ulysses Balis, M.D., the A. James French Professor of Pathology Informatics and professor of pathology. “In medicine, that would be catastrophic. You need to keep an adult in the room, and the adult is the health care provider.” 

Used properly, AI gives the medical field an opportunity to greatly reduce the administrative burden on health care providers. That, in turn, could improve patient care by freeing providers to spend more time face to face with patients and families.

 

[Illustration: a butterfly with green wings overlaid on abstract computer parts; a stethoscope forms the butterfly’s body and antennae.]
How do we make sure AI is safe and ethical?

There are several initiatives underway at Michigan Medicine that are aimed at making sure the implementation of AI tools in medicine will be safe and ethical.

1. Set up ground rules for using AI in clinical spaces. In 2018, Michigan Medicine created the Clinical Intelligence Committee, making it one of the first academic medical centers to establish AI governance in health care. The group collaborates with the U-M Law School to create policy around liability and governance.

2. Ensure that educators, learners, researchers, and staff use AI responsibly. Michigan Medicine leaders helped draft the U-M Generative AI Committee’s report, published last June (see genai.umich.edu). “This report established the University of Michigan guidelines for the safe, transparent, and ethical use of AI for our faculty, trainees, and staff at U-M and Michigan Medicine, and was one of the first of its kind in U.S. higher education,” says Brian D. Athey, Ph.D., the Michael Savageau Collegiate Professor and Chair of the Department of Computational Medicine and Bioinformatics, who was a member of the committee. “The Committee also outlined the establishment of secure enterprise Generative AI platforms and services now available to all in the U-M community.”

3. Train AI on unbiased data. AI is completely dependent on data, and if those data are biased, then AI is biased. The work of making medical data less biased has been going on for decades, and Michigan Medicine has been a leader in this space. For example, U-M researchers made headlines in 2022 when they discovered that pulse oximeters did not produce accurate data for people with darker skin tones. At the U-M Center for Bioethics and Social Sciences in Medicine (CBSSM), this work continues in the Ethical, Legal, and Social Implications of AI and Datasharing (ELSIAID) Lab. “AI is not going to cure the ills of our society. Any bigotry, biases, and blind spots will only be reflected and compounded, and hid beneath the guise of ‘algorithmic neutrality,’” says Kayte Spector-Bagdady, J.D., M.Be., interim co-director of CBSSM and assistant professor of obstetrics and gynecology, who also recently participated in the White House Roundtable on AI and Health.

4. Test AI tools that are being used for patient care. Michigan Medicine is helping to protect patient safety by making sure AI tools function as advertised. For example, Michigan Medicine researchers were the first to evaluate Epic’s sepsis model. In 2021, they published their results in JAMA Internal Medicine, showing that the Epic sepsis model was not as effective as the company had claimed. The White House’s “Blueprint for an AI Bill of Rights” cited the research, and Epic ultimately overhauled its product. “While a lot of organizations were implementing AI tools that were being handed to them from vendors a few years ago, we were one of the first to start evaluating those tools fully independently from the vendor,” says Karandeep Singh (M.D. 2008), who is an author of the study. Singh is associate chief medical information officer of artificial intelligence, as well as associate professor of learning health sciences, of internal medicine, and of urology.

5. Understand patient concerns. Patients want to know if AI is being used in their health care and what the implications are, says Jodyn Platt, Ph.D., MPH, associate professor of learning health sciences, who is researching patient and public perspectives on AI in medicine. Patients are worried there could be increased costs with new AI technologies, and they want to know whether AI will make it easier to see a doctor. Her work aims to understand patients’ questions about AI in health care so that providers can better meet their needs.

6. Make sure new tools are accessible to everyone. Charles Friedman, Ph.D., the Josiah Macy Jr. Professor of Medical Education and chair of the learning health sciences department, believes the full promise of AI to improve medicine can’t be realized if these marvelous technologies are poorly understood, unevenly distributed, or locked behind a paywall. “We want to create an ecosystem of AI models that could be stored in something akin to a public library,” he says. In 2017, he and researchers at the U-M Knowledge Systems Laboratory launched a movement called Mobilizing Computable Biomedical Knowledge (MCBK). AI models are one form of computable knowledge, and MCBK aims to make them FAIR: findable, accessible, interoperable, and reusable. The movement is now global and aims to enable widespread use of AI resources.

 

[Illustration: giant block letters spelling AI support a bridge, which a doctor crosses to reach a patient.]
How are we using AI to start alleviating administrative burden and burnout?

Michigan Medicine is working hard to develop a comprehensive strategy for implementing AI across all of its missions. As a first step, clinicians and researchers are test-driving and creating tools to alleviate administrative burden:

1. Epic In Basket pilot. Michigan Medicine is part of a pilot program to test Epic’s In Basket summarization and messaging tool, which will be embedded within Epic’s EHR to suggest responses to patient queries coming into the MyChart portal. It also has the potential to identify billing codes for administrators. The project is being implemented in the U-M Office of Clinical Informatics in conjunction with Health Information Technology & Services.

2. Azure OpenAI Service. In collaboration with Microsoft, Michigan Medicine has received early approval to test this service, a platform that applies generative AI to the analysis of patient records. The health system is completing the legal and compliance work required to use it in a health care setting while keeping patient data safe, secure, and HIPAA-compliant. Once ready, it will offer a more general platform for analyzing patient records safely and securely for clinical delivery, education, and research.

3. Decimal Code is a tool that reads medical free text (such as clinician notes) and predicts which billing codes a patient visit is likely to need, along with how confident it is in each prediction (a toy sketch of this kind of prediction appears after this list). That streamlines the workflow so administrators can focus their time and attention on complicated cases. Created by Michael Burns, M.D., Ph.D. (Residency 2016), assistant professor of anesthesiology, and data scientist John Vandervest, Decimal Code has been in development for three years and is now patented.

4. Dragon Ambient eXperience (DAX) is a tool created by Nuance that can “listen” to patient visits and draft notes for the EHR based on the interaction between patient and provider. In 2021, U-M Health West piloted the tool and found that providers were happy with it; in fact, 77% said they would be disappointed if they could no longer use DAX. Similar systems are being evaluated for use throughout the health system.
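As promised in item 3 above, here is a toy sketch in Python of the general technique behind a tool like Decimal Code: predicting a billing code from free-text notes and reporting the model’s confidence. It is illustrative only, not the actual tool; the training notes and codes below are invented placeholders, and a real system would learn from large volumes of curated clinical data.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: free-text notes paired with billing codes.
notes = [
    "general anesthesia for laparoscopic appendectomy",
    "epidural placement for labor analgesia",
    "monitored anesthesia care for screening colonoscopy",
]
codes = ["00840", "01967", "00810"]  # illustrative anesthesia-style codes

# Turn text into term-frequency features, then fit a simple classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, codes)

new_note = "laparoscopic appendectomy under general anesthesia"
probs = model.predict_proba([new_note])[0]
best = probs.argmax()
print(model.classes_[best], f"(confidence: {probs[best]:.0%})")

In a real workflow, the confidence score is what matters: high-confidence predictions can flow through, while low-confidence cases are routed to human coders, which is how such a tool frees administrators to focus on complicated visits.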

[Illustration: a teal hand releasing a teal butterfly overlaid with computer parts, against a gray backdrop of old computer parts.]

 

What cool projects are we doing with AI?

As of last fall, more than 160 AI research projects were underway at Michigan Medicine. Here are just a few of the exciting AI applications researchers, clinicians, and educators are engaged in.

1. Predicting patient deterioration. Researchers at the Max Harry Weil Institute for Critical Care Research and Innovation have created a suite of machine learning algorithms that use EHR data to predict patient deterioration in hospital settings. When tested on COVID-19 patients, the tool, called PICTURE, provided 30 hours of advance warning and produced 17% fewer false alarms for every true alarm, compared with the Epic Deterioration Index.

2. Predicting adverse cardiac events. In partnership with Toyota, Kayvan Najarian, Ph.D., professor of computational medicine and bioinformatics and of emergency medicine, is developing a mathematical algorithm that analyzes cardiac signals to predict adverse cardiac events. The algorithm is trained on the driver’s medical records and can be installed in cars: electrodes placed on the driver’s chest record electrocardiogram signals, which are sent to a computer that analyzes the data and delivers timely, potentially life-saving warnings on the road. (A simplified sketch of this kind of signal monitoring appears after this list.)

3. Diagnosing brain tumors in 150 seconds. A team of neurosurgeons and engineers, led by assistant professor of neurosurgery Todd Hollon, M.D. (Residency 2020), has developed an AI-based diagnostic screening tool that analyzes brain tumor specimens taken during an operation and predicts a diagnosis in the operating room in 150 seconds. That’s much faster than conventional histopathology techniques, which can take 30 minutes or more. In a study using a novel laser Raman microscope in the operating room, the team showed that results from the model, called DeepGlioma, were as good as results from a pathology lab.

4. Stimulating the immune system to fight viruses. Geoffrey Siwo, Ph.D., is using AI to generate new antiviral molecules. Siwo, who is assistant professor of learning health sciences, is leveraging AI tools to create molecules that take advantage of the immune system’s own antiviral capabilities. Up to now, antiviral efforts have largely focused on studying the biology of new viruses to find potential protein targets, and then trying to make or find drugs that could target those proteins. Siwo says this is painstaking work that takes too long in an age of increasing pandemics. Instead, he says, “AI can help you design new compounds that can help stimulate the innate immune system.”

5. Re-imagining cell morphology. Joshua Welch, Ph.D., associate professor of computational medicine and bioinformatics, is currently working on an AI model that he compares to DALL-E 3, OpenAI’s generative AI tool that creates realistic images and artwork. “But instead of asking it to draw pictures of cats or avocados, we train it on pictures of cells. Then, when we give it the molecules from a cell that we haven’t seen a picture of, it can draw a picture of what the cell looks like under the microscope.” He says his team hopes “to do these types of single-cell measurements on cells from a person with a disease and then come up with personalized predictions for how their cells will respond to various drugs or genetic perturbations.”

6. Educating future doctors in AI. As AI becomes a critical component of health care, it’s imperative that medical education prepare physicians to use the tools of this burgeoning field. “The end goal is not for every clinician to be a computer programmer,” says Cornelius James, M.D., assistant professor of learning health sciences, of internal medicine, and of pediatrics. He directs the evidence-based medicine curriculum at the Medical School and is researching digital health tools to understand how they will be incorporated into medical practice in the future. “Instead, clinicians should be able to apply outputs of AI models in an effective way. They need to be able to use AI as a tool and, in some cases, as a teammate.” An ad hoc curriculum committee at the Medical School, led by assistant professor of internal medicine Virginia Sheffield, M.D., is conducting an in-depth analysis of AI curricula and plans to share its findings early this year.

7. Improving health in Africa. “People underestimate the value of doing AI globally,” says Akbar Waljee, M.D. (Residency 2005), the Lyle C. Roll Professor, as well as professor of learning health sciences and of internal medicine. “In places with low resources and fewer regulatory barriers for entry in terms of AI, there are opportunities to augment health in a more expedited way.” Waljee, who is from Kenya, is helping to lead UZIMA Data Science, a collaboration between U-M and Aga Khan University that aims to leverage AI and machine learning-based tools to address maternal and child health, as well as the mental health of young people. Princess Zahra Aga Khan met with leaders at U-M in October to discuss the initiative.
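Returning to item 2 above, here is a deliberately simplified Python sketch of the general idea behind in-vehicle cardiac monitoring: detect heartbeats in an ECG-like signal and flag an abnormal rate or rhythm. This is not the Toyota/U-M algorithm; the sampling rate, thresholds, and synthetic signal are all invented for illustration.

import numpy as np
from scipy.signal import find_peaks

FS = 250  # sampling rate in Hz (invented for this example)

def check_ecg_window(ecg: np.ndarray) -> bool:
    """Return True if this window of ECG samples looks concerning."""
    # R-peaks are the tall spikes marking each heartbeat.
    peaks, _ = find_peaks(ecg, height=0.6, distance=FS * 0.4)
    if len(peaks) < 2:
        return True  # too few beats detected to assess the rhythm
    rr = np.diff(peaks) / FS             # beat-to-beat intervals, in seconds
    heart_rate = 60.0 / rr.mean()        # mean heart rate, beats per minute
    irregularity = rr.std() / rr.mean()  # variability of the rhythm
    return heart_rate < 40 or heart_rate > 140 or irregularity > 0.25

# Synthetic "ECG": one spike per second (60 bpm) on top of mild noise.
rng = np.random.default_rng(0)
ecg = 0.05 * rng.standard_normal(10 * FS)
ecg[::FS] = 1.0
print("warning!" if check_ecg_window(ecg) else "signal looks normal")

A production system would personalize such thresholds to the individual driver rather than rely on fixed cutoffs, which is why, as described above, the U-M algorithm is trained on the driver’s own medical records.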

 

[Illustration: a large mannequin-like face with a butterfly over one eye; in front of it, a table holds a beaker, stethoscope, microscope, computer parts, and a magnifying glass.]
What is AI not good at?

We asked AI experts to put on their most skeptical hats and tell us where AI is weakest right now in the field of medicine. Here's what they had to say:

“I think AI is still not very good at being able to describe its own uncertainty. AI engines are built on different data sets and different versions, with different architectures and by different teams with different objectives. There are lots of models with billions of parameters, and each parameter could have its own uncertainty, with consequences for data quality, model veracity, and inferential credibility.” 
Arvind Rao, Ph.D., associate professor of computational medicine and bioinformatics and of radiation oncology

“AI is not good at knowing whether the way we allocate our care is fair or not. A lot of what we’re using AI tools for is to decide who we reach out to for a potential intervention. For example, with Epic’s AI sepsis model, you’re going to check on a patient who is flagged to see if they need treatment, but the model can’t tell you whether the way we allocate that care upholds our values.” 
Karandeep Singh (M.D. 2008), associate chief medical information officer of artificial intelligence, as well as associate professor of learning health sciences, of internal medicine, and of urology

“A weakness of AI is unawareness of complete patient context. It is often assumed that a decision for next steps to treat a disease or condition requires only one or two pieces of information, such as a CT scan or biopsy. In reality, an expert’s medical decision for an individual patient considers multiple interacting considerations, often beyond the medical data. A machine knows nothing about how a patient is feeling about their diagnosis, their concerns about using one treatment versus another, or their knowledge of a family member’s experience with a similar condition.” 
Ryan Stidham, M.D., MS (Fellowship 2011), associate professor of internal medicine and of computational medicine and bioinformatics, who is using AI to help better describe gastrointestinal diseases

“Overblown claims about AI are made all the time. A researcher’s incentive is to publish a paper, to make a claim that they have something new, and a clinician’s incentive is to treat a patient better, whether it’s faster, more accurately, cheaper, or with better outcomes. I think AI fails when those incentives are not aligned. Making AI work every time, on an individual level, not just statistically, is a challenge.” 
Vikas Gulani, M.D. (Residency 2005, Fellowship 2006), the Fred Jenner Hodges Professor of Radiology and chair of the department

“Recent AI models that many people are now hearing about are not at all mature enough to perform many of the tasks health care workers currently do — disease and accident prevention, acute care, post-acute care, chronic disease management, having empathy and cultural sensitivity, detecting and addressing uncertainty, and appreciating irony and sarcasm. Without careful modifications, current AI models would do all the above quite poorly compared to humans. Still, AI-augmented work will likely get much better very quickly, and we are preparing for that future.” 
Andrew Rosenberg, M.D., chief information officer and associate professor of anesthesiology and of internal medicine

 

AI incubators

There are several organizations that help AI projects get off the ground at Michigan Medicine. Here are a few:

MIDAS (Michigan Institute for Data Science), a university-wide collaboration hub for research involving “Big Data,” statistical analysis, machine learning, and AI. More than 460 MIDAS affiliate faculty members come from all schools and colleges on the U-M Ann Arbor campus, as well as from the U-M Dearborn and Flint campuses, making MIDAS one of the largest and most scientifically diverse data science institutes at a U.S. university. MIDAS focuses on five key areas: responsible research, data, analytics, AI, and emerging initiatives.

U-M Precision Health, a university-wide program with a work group focused on implementing AI in health care and public health and on ensuring that the AI models researchers build can run inside the electronic health record system. Precision Health brings together researchers from 19 colleges with three goals: developing fundamental social, medical, computational, and engineering science; translating these basic science discoveries into promising treatments evaluated in partnership with Michigan Medicine patients and regional health systems; and evaluating and increasing the public health impact of effective therapies by working with community health systems, policy makers, and payers to implement them nationally.

e-HAIL (e-Health and Artificial Intelligence), a joint venture between Michigan Medicine and the College of Engineering that supports a multidisciplinary approach to innovations in health care and in AI and machine learning methodologies. e-HAIL aims to make U-M a premier hub for innovative health research using AI, focusing on collaboration, grant development, and infrastructure. It provides opportunities for researchers to connect and learn from others operating at the intersection of AI and health, and it helps researchers prepare grant applications, including finding collaborators, assisting with grant writing, and hiring student support.

 

What is the take-home message?

At Michigan Medicine, the main goal of implementing AI tools is to augment the work of clinicians, researchers, educators, and learners so that we can improve clinical care, research, and education. “AI will serve as our future personalized digital assistant,” says Athey. “AI won’t replace clinicians and researchers; if used responsibly, it will enhance our work as well as our ability to teach and learn. We are currently working collaboratively to develop a future strategy to do so safely, securely, ethically, and inclusively.”
