
Opinion: To be effective, deep learning in medicine needs to be safe, equitable and needs-focused

Oriana Ciani, SDA Associate Professor of Practice, Public Management and Policy Group. Credit: Bocconi University

Research on artificial intelligence (AI) has made great strides in recent years, and applications and positive impacts are also evident in the medical field. To date, probably the most advanced AI-based solutions are in diagnostics, for example for the analysis of radiological images. Other areas of interest with high commercial impact are projects that concern the identification of molecules with therapeutic potential. The uses of AI for the improvement of the quality of life, and more generally of the quality of care and patient experience, are also multiplying.

With respect to the latter, tools like ChatGPT, which have reached millions of users in just a few weeks, may play a role. Some initial experiences document the use of ChatGPT to support bureaucratic activities, such as issuing medical certificates or filing reimbursement claims with medical insurance companies. Other uses involve more complex activities, such as triage, that is, establishing priorities for access to services, on the basis of numerous predefined criteria, in contexts where resources are limited; or providing information about ongoing therapies while collecting, directly from the patient, feedback that could influence the course of care.

However, tools such as ChatGPT, and in general AI solutions designed as consumer products rather than for medical purposes, raise a series of ethical and legal issues. For example, in one of the projects currently underway at the Government, Health and Not-for-Profit Division of SDA Bocconi School of Management and the CERGAS research center, I recently found myself evaluating the possibility of embedding ChatGPT in a medical app, so that it could draft a text providing all the necessary information to a woman preparing to face a course of treatment after a diagnosis of breast cancer.

In a very short time, the chatbot generated a plausible text, which we are now validating with the essential contribution of clinicians ahead of publication. The issue is whether information generated this way is accurate and impartial enough to be used in a context as sensitive as health protection.

Other challenges related to the use of ChatGPT in health care concern possible, and unfortunately probable, privacy violations: sensitive data communicated to the chatbot could later be disclosed to third parties. To date, there is also a lack of informed consent forms suitable for the many possible uses of AI.

Currently at CERGAS, we are working on another solution designed for women patients who are set to undergo mastectomy and subsequent breast reconstruction. CINDERELLA is a four-year European project that aims to improve the satisfaction, and therefore the psychological well-being and quality of life, of patients operated on for breast cancer. It does so by automating the aesthetic evaluation of surgical results and by predicting those results before the intervention, so as to encourage the active and informed participation of the woman in the choice of treatment.

Faced with a multiplicity of possible surgical alternatives, it is difficult today for a woman to judge which type of surgery would provide the best aesthetic results in her own case. To automate these evaluations and make them objective, an algorithm (BCCT.core), developed by INESC TEC of Porto and the Breast Unit of the Champalimaud Foundation of Lisbon, has for some years classified photographs of the chests of patients operated on for breast cancer according to aesthetic outcomes codified on the basis of mostly geometric metrics. For example, the software can autonomously establish breast volume and bra cup size from a photograph. It goes without saying that the images used are anonymized, and were collected with the consent of the women whose chests were photographed.
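To make the idea of a geometric aesthetic metric concrete, the sketch below computes a simple left-right symmetry score from annotated landmark coordinates in a photograph. This is an illustrative assumption only, not the actual BCCT.core method: the landmark names, the scoring formula, and the coordinate values are all hypothetical.

```python
import math


def symmetry_score(left_nipple, right_nipple, sternal_notch):
    """Toy asymmetry measure: compare the nipple-to-sternal-notch
    distance on each side of the chest. Returns a value in (0, 1],
    where 1.0 means the two distances are identical."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    d_left = dist(left_nipple, sternal_notch)
    d_right = dist(right_nipple, sternal_notch)
    # Ratio of the shorter to the longer distance: 1.0 is perfectly
    # symmetric; the score falls toward 0 as asymmetry grows.
    return min(d_left, d_right) / max(d_left, d_right)


# Landmarks in image pixel coordinates (hypothetical values).
score = symmetry_score((120, 300), (280, 310), (200, 200))
```

Real tools combine many such features (distances, angles, areas, color differences) into a composite aesthetic classification; this sketch only shows the flavor of one geometric component.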

To limit any unfairness in the responses provided by the software, the image database used for training the AI in CINDERELLA needed to be expanded to include women who were initially underrepresented in terms of skin tone, skin clarity and shape. Thanks to recent advances in so-called deep learning, which has greatly improved image recognition, this software will be able to provide personalized predictions for individual patients about the results of the various surgical approaches.

Underlying everything, however, there must be one element: an effective and fair implementation of AI solutions, centered on people's needs and on adequate safeguards. This is a fundamental condition for the potential of these innovations to emerge, improving the quality of health care and the well-being of communities.

Provided by Bocconi University
Citation: Opinion: To be effective, deep learning in medicine needs to be safe, equitable and needs-focused (2023, May 16) retrieved 14 July 2024 from https://medicalxpress.com/news/2023-05-opinion-effective-deep-medicine-safe.html
