Exploring the role of AI in the ICU


Clinicians in an intensive care unit need to make complex decisions quickly and precisely, monitoring critically ill or unstable patients around the clock.

Researchers from Carnegie Mellon University's Human-Computer Interaction Institute (HCII) collaborated with physicians and researchers from the University of Pittsburgh and UPMC to determine whether artificial intelligence could help with these decisions, and whether clinicians would even trust such assistance.

The team gave 24 ICU physicians access to an AI-based tool designed to help make decisions and found that most incorporated the assistance into some of their decisions.

"It feels like clinicians are excited about the potential for AI to help them, but they might not be familiar with how these AI tools would work. So it's really interesting to bring these systems to them," said Venkatesh Sivaraman, a Ph.D. student in the HCII and member of the research team.

Using the AI Clinician model introduced in Nature Medicine by a group of researchers in 2018, the team designed an interactive clinical decision support (CDS) interface, called the AI Clinician Explorer, that provides recommendations for treating sepsis. The model was trained on a data set of more than 18,000 patients who met standard diagnostic criteria for sepsis at some point during their ICU stays. The system enables clinical experts to filter and search for patients in the data set, visualize their disease trajectories, and compare the model's predictions to the actual treatment decisions delivered at the bedside.
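To make that comparison concrete, the minimal sketch below shows one way such a retrospective review could be framed: filtering a sepsis cohort by illness severity and measuring how often a model's recommended treatments agree with what was actually given. This is a hypothetical illustration only, not the AI Clinician Explorer's actual code; the file name, column names, and agreement thresholds are all assumptions.

```python
# Hypothetical sketch: comparing model-recommended treatments with bedside care.
# Assumed columns (not from the real system): patient_id, sofa_score,
# actual_fluid_dose, actual_vasopressor_dose, recommended_fluid_dose,
# recommended_vasopressor_dose. "sepsis_cohort.csv" is a placeholder file.
import pandas as pd

cohort = pd.read_csv("sepsis_cohort.csv")

# Filter to patients whose illness severity (SOFA score) ever reached a chosen threshold.
severe = cohort[cohort.groupby("patient_id")["sofa_score"].transform("max") >= 10]

# Compute the gap between what the model recommends and what was actually delivered.
severe = severe.assign(
    fluid_gap=severe["recommended_fluid_dose"] - severe["actual_fluid_dose"],
    vaso_gap=severe["recommended_vasopressor_dose"] - severe["actual_vasopressor_dose"],
)

# Summarize per patient how often recommendation and bedside care roughly agree,
# using arbitrary tolerance values chosen only for illustration.
agreement = (
    severe.assign(
        agrees=(severe["fluid_gap"].abs() < 250) & (severe["vaso_gap"].abs() < 0.05)
    )
    .groupby("patient_id")["agrees"]
    .mean()
    .rename("fraction_of_timesteps_in_agreement")
)
print(agreement.head())
```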

"Clinicians are always entering a lot of data about the patients they see into these computer systems and ," Sivaraman said. "The idea is that maybe we can learn from some of that data so we can try to speed up some of their processes, make their lives a little bit easier and also maybe improve the consistency of care."

The team put their system to the test via a think-aloud study with 24 clinicians who practice in the ICU and have experience treating sepsis. During the study, participants used a simplified AI Clinician Explorer interface to assess and make treatment decisions for four simulated patient cases.

"We thought the clinicians would either let the AI make the decision entirely or ignore it completely and make their own decision," Sivaraman said.

But the results were not so binary. The team identified four common behaviors among the clinicians: ignore, rely, consider and negotiate. The "ignore" group did not let the AI influence their decision and mostly made their decisions before even looking at the recommendation.

By contrast, the "rely" group consistently accepted at least part of the AI's input in every decision. In the "consider" group, physicians thought about the AI recommendation in every case and then either accepted or rejected it. Most participants, however, fell into the "negotiate" group, which includes practitioners who accepted individual aspects of the recommendations in at least one of their decisions, but not all.

The team was surprised by these results, which also provided insight into ways to improve the AI Clinician Explorer. Clinicians expressed concerns that the AI did not have access to more holistic data, such as the patient's general appearance, and were skeptical when the AI made recommendations contrary to what they were taught.

"When the CDS deviates from what would normally do or consider to be best practice, there was not a good sense of why," Sivaraman said. "So right now, we're focusing on determining how to provide that data and validate these recommendations, which is a challenging problem that will require machine learning and AI."

The team's research doesn't attempt to replace or replicate clinician decision-making. Instead, it aims to use AI to reveal patterns in past patient outcomes that may have gone unnoticed.

"There are a lot of diseases, like sepsis, that might present very differently for each patient, and the best course of action might be different depending on that," Sivaraman said. "It's impossible for any one human to amass all that knowledge to know how to do things best in every situation. So maybe AI can nudge them in a direction they hadn't considered or help validate what they consider the best course of action."

Sivaraman's collaborators include Adam Perer, an assistant research professor in the HCII; Leigh Bukowski, a senior research manager at the University of Pittsburgh's School of Public Health; Joel Levin, a doctoral candidate in Pitt's Katz Graduate School of Business; and Jeremy Kahn, a physician in UPMC's Department of Critical Care Medicine and associate professor of critical care medicine and health policy in Pitt's School of Medicine and School of Public Health.

Sivaraman presented the team's paper "Ignore, Trust, or Negotiate: Understanding Clinician Acceptance of AI-Based Treatment Recommendations in Health Care" this month at the Association for Computing Machinery's Conference on Human Factors in Computing Systems (CHI 2023) in Hamburg, Germany.

The paper is published on the arXiv preprint server.

More information: Conference: chi2023.acm.org/

Venkatesh Sivaraman et al, Ignore, Trust, or Negotiate: Understanding Clinician Acceptance of AI-Based Treatment Recommendations in Health Care, arXiv (2023). DOI: 10.48550/arxiv.2302.00096

