Health policy analysts call for more stringent rules for AI medical applications


A trio of health policy analysts from the University of Pennsylvania and the University of California has published a Policy Forum piece in the journal Science calling for more stringent rules governing the introduction of AI medical applications. In their paper, Ravi Parikh, Ziad Obermeyer and Amol Navathe outline five standards they believe should be met before AI applications are approved for medical use.

The authors note that the use of AI-based tools to diagnose conditions or to predict outcomes under different treatment approaches is still very new. AI-based algorithms, and the tools built on them, have been developed only in the past few years, so rules for their use are not yet well established. In their paper, the researchers suggest five standards intended to safeguard patients whose medical treatment involves AI applications or devices.

The first standard involves establishing meaningful endpoints: benefits should be clearly identifiable and thus subject to validation by the FDA, just as drugs and other devices have been for many years. The second involves establishing benchmarks appropriate to the area in which a tool is applied, so that its usefulness and quality can be properly evaluated. The third involves making input-variable specifications clear enough that more than one institution can use them when testing a new application or device. The fourth involves specifying the interventions tied to an AI system's findings and evaluating whether those interventions are appropriate and effective. Finally, the fifth standard calls for regular, rigorous audits, a practice long used when introducing new drugs. Auditing AI applications takes on added importance given that the data underlying their predictive abilities can change over time.

The authors also note that because AI applications and the devices that use them are so new, it is not clear how well current regulations are working. They therefore suggest taking a "promise and protection" approach to the technology, to ensure that it truly promotes better health care for patients.

More information: Ravi B. Parikh et al. Regulation of predictive analytics in medicine, Science (2019). DOI: 10.1126/science.aaw0029

Journal information: Science

© 2019 Science X Network

Citation: Health policy analysts call for more stringent rules for AI medical applications (2019, February 22) retrieved 27 April 2024 from https://medicalxpress.com/news/2019-02-health-policy-analysts-stringent-ai.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
