
I don’t have to remind you that AI is ubiquitous.
Nor that it is increasingly being used in healthcare. Doctors are using AI to help with note-taking. AI-based tools are scanning patient records to flag people who might benefit from particular support or treatments. They're also being used to analyze lab results and x-rays.
A growing body of research suggests that many of these tools can deliver accurate results. But a bigger question remains: Does using them actually improve patients' health?
We don't really know yet.
That's the argument made by Jenna Wiens, a computer scientist at the University of Michigan, and Anna Goldenberg of the University of Toronto, in a paper published this week in Nature Medicine.
Wiens says she has spent years studying how AI might improve healthcare. For the first decade of her work, she was trying to make the case for the technology to clinicians. In the last couple of years, she says, it feels as though "a switch flipped." Not only do clinicians seem far more interested in what these technologies can do, but they have also started adopting them, fast.
The trouble is that many of them aren't properly assessing how well these tools work.
Take "ambient AI" systems, for example. Also known as AI scribes, these tools "listen in" on conversations between clinicians and patients, then transcribe and summarize them. A range of such tools is available, and clinicians are already adopting them widely.
A few months ago, an employee at a major New York medical center who builds AI tools for doctors told me that, anecdotally, clinicians are "thrilled" with the technology: it lets them focus fully on their patients during appointments, and it saves them a lot of tedious paperwork. Early studies back up these anecdotes and suggest the tools could reduce clinician burnout.
That's all well and good. But what about health outcomes for patients? "[Researchers] have looked at provider or clinician and patient satisfaction, but not really at how these tools influence clinical decision-making," says Wiens. "We lack that knowledge."
The same goes for other AI-based technologies being used in healthcare settings. Some are used to predict patients' health trajectories; others recommend treatments. They're designed to make care more effective and efficient.
But even a tool that's "accurate" won't necessarily improve patient health. Yes, AI might speed up the reading of a chest x-ray. But how much will a doctor trust its assessment? How will that tool change the way the doctor interacts with patients or recommends treatments? And, ultimately, what will all of this mean for the patients involved?
The answers to these questions might differ between hospitals or departments, and they may depend on clinical workflows, says Wiens. They might also vary among doctors at different stages of their careers.
Take AI scribes again. Some research on AI in education suggests that tools like these can change the way people think about information. Might they affect the way a doctor considers a patient's data? Will the tools change how medical students think about patient information in a way that affects their care? These questions need to be investigated, says Wiens. "We appreciate tools that conserve our time, but we must consider the unanticipated effects of this," she says.
In a study published in January 2025, Paige Nong at the University of Minnesota and her colleagues found that around 65% of hospitals in the US were using AI-assisted predictive tools. Only two-thirds of those hospitals assessed the tools for accuracy. Even fewer evaluated them for bias.
The share of hospitals using these tools has probably climbed since then, says Wiens. These hospitals, or organizations other than the ones developing the tools, need to evaluate how well the tools work in specific settings. It's possible they could leave patients worse off, though it's more likely that AI tools simply aren't as beneficial as clinicians might believe, says Wiens.
"I genuinely believe in the ability of AI to significantly enhance clinical care," says Wiens, stressing that she is not trying to halt the rollout of AI tools in healthcare. She just wants more insight into how they affect patients. "I have to be optimistic that in the future it's not all AI or none AI," she says. "It resides somewhere in between."
This article first appeared in The Checkup, MIT Technology Review's weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.