
Study shows people place little trust in medical advice if they suspect AI involvement

Average ratings for each dimension (empathy, reliability, comprehensibility) and author label (human, AI, human + AI). Credit: Nature Medicine (2024). DOI: 10.1038/s41591-024-03180-7

People trust medical advice less if they suspect that an artificial intelligence is involved in its creation. This is the key finding of a study by psychologists from the University of Würzburg. Their paper is published in the journal Nature Medicine.

The study shows that people rate medical advice as less reliable and less empathetic whenever they believe an AI was involved. This was the case even when participants could assume that a doctor had made the recommendations with the help of an AI. Consequently, respondents were also less willing to follow AI-supported recommendations than advice based solely on the medical expertise of human doctors.

Moritz Reis and Professor Wilfried Kunde from the Chair of Psychology III at Julius-Maximilians-Universität (JMU) are responsible for this study, which was conducted in collaboration with Florian Reis from Pfizer Pharma GmbH.

"The setting of our study is based on a digital health platform where information on medical issues can be obtained—in other words, a setting that will become increasingly relevant with increasing digitalization," the authors describe their approach.

As part of the study, more than 2,000 participants received identical medical advice and were asked to evaluate it for reliability, comprehensibility and empathy. The only difference: while one group was told that this advice came from a doctor, the second group was told that an AI-supported chatbot was responsible. The third group was led to believe that a doctor had made the recommendations with the help of an AI.

The results are clear: people trust medical recommendations less if they suspect that AI is involved. This also applies if they believe that AI merely contributed to generating the advice. Advice labeled as human-generated also scored better than the two AI variants in the "empathy" category. Only in terms of comprehensibility were there hardly any differences between the three groups; apparently, people have no reservations about the technology in this respect.

"This is an important finding, as trust in medical diagnoses and therapy recommendations is known to be a very important factor for the success of the treatment," the authors of the study say.

These findings are particularly important against the backdrop of AI's potential to reduce bureaucracy and ease doctors' day-to-day workload. In the authors' view, the study therefore provides a starting point for detailed research into the conditions under which AI can be used in diagnostics and therapy without jeopardizing patients' trust and cooperation.

More information: Moritz Reis et al, Influence of believed AI involvement on the perception of digital medical advice, Nature Medicine (2024). DOI: 10.1038/s41591-024-03180-7

Journal information: Nature Medicine
Citation: Study shows people place little trust in medical advice if they suspect AI involvement (2024, July 25) retrieved 25 July 2024 from https://medicalxpress.com/news/2024-07-people-medical-advice-ai-involvement.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
