Physicians' role crucial in using AI in patient care, say experts

Credit: Pixabay/CC0 Public Domain

Artificial intelligence is quickly transforming the health care landscape, from helping to diagnose diseases to assisting in surgery. Its rapid progress has the potential to reshape how health care teams work by streamlining processes and improving patient outcomes.

As AI is used more in health care, researchers stress that the technology should be a tool guided by bioethical principles and safeguarded by human decision-making.

Focusing on ethics from the start, not as an afterthought, is crucial both for the responsible development of AI-driven tools and for ensuring that health care teams feel at ease using AI for patient well-being.

"Often ethics are seen as a 'nice to have' or [brought in] as triage when an AI system has an unintended negative consequence," says Barbara Barry, Ph.D., a health care delivery researcher at the Mayo Clinic Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery. Dr. Barry is also a member of Mayo's Artificial Intelligence Bioethics Advisory Council.

Dr. Barry emphasizes that using AI in health care will augment physicians' work in several ways, including determining prognoses, diagnosing conditions, reducing diagnostic and treatment errors, improving workflow efficiency through chart summarization and order automation, and expanding access for patients by delivering care without an in-person physician visit.

But that does not mean the physician can be left out of the picture.

"One of the that physicians and other clinicians may encounter as AI is integrated into health care is the role of professional authority," says Michelle McGowan, Ph.D., an empirical bioethicist whose research explores the ethical and social implications of the rapid increase in emerging health technologies and policies. She is a senior associate consultant in the Department of Quantitative Health Sciences at Mayo Clinic.

"As machines increase capacity to analyze data, propose diagnoses or predict treatment responses, it will be incumbent upon physicians to ensure that their judgment is not substituted in ways that could jeopardize patient care and introduce potential liabilities," she says.

Another concern is the recent advancement of large language models (LLMs). These powerful AI tools are trained on massive amounts of data, giving the models the ability to analyze and generate human-like language.

In health care research, LLMs are used to sift through vast amounts of medical records and scientific literature. While many benefits can come from applying AI in this way, it is critical to use it responsibly and to always review and evaluate its outputs.

"A big concern is the burden of human oversight and automation bias when we follow a recommendation from an AI system because it has been accurate in the past," says Dr. Barry.

"It's like when we follow our GPS navigation system in a car even when we know a better route or when common sense tells us otherwise."

Evaluating the effectiveness of a large language model

With the promise of what AI can help achieve, the Kern Center is working with Cardiovascular Medicine on a study to evaluate the effectiveness of a Mayo-created LLM that generates a discharge summary from electronic health record (EHR) data for clinicians to review.

Usually, staff must put in considerable time and effort to produce an informative and accurate document summarizing each patient's hospital stay and discharge recommendations.

"If this LLM works as intended, it could save providers a lot of time," says Shannon Dunlay, M.D., a heart failure cardiologist and the Kern Center's associate medical director. However, Dr. Dunlay says the team needs to review how accurate and complete the reports are before recommending it for widespread use.

Researchers emphasize that it is important for providers and patients to have support as AI tools are introduced into health care.

Fundamental to the ethical use of AI is rigorous evaluation.

"Traditional evaluation focuses on technical performance metrics, such as prediction accuracy, which is an essential initial step," says Xiaoxi Yao, Ph.D., the Robert D. and Patricia E. Kern Scientific Director for Pragmatic Trials and Evaluation.

"However, subsequent evaluations should delve deeper, considering usability, acceptability among end users in everyday practice, and the effect on care delivery and ."

In addition to rigorous evaluation, researchers underscore that patients want transparency when AI is used in their care.

"They want to know the performance of the AI tool, if it has been created with data from patients like them, if it has been proven to be safe in clinical trials, and if the tools are equitable and used justly, not only in their care but also in the care of others," says Dr. Barry.

©2024 Mayo Clinic News Network. Distributed by Tribune Content Agency, LLC.

Citation: Physicians' role crucial in using AI in patient care, say experts (2024, July 25) retrieved 25 July 2024 from https://medicalxpress.com/news/2024-07-physicians-role-crucial-ai-patient.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
