Researchers say use of artificial intelligence in medicine raises ethical questions

March 15, 2018 by Patricia Hannon, Stanford University Medical Center

In a perspective piece, Stanford researchers discuss the ethical implications of using machine-learning tools in making health care decisions for patients.

Artificial intelligence is hard at work crunching health data to improve diagnostics and help doctors make better decisions for their patients. But researchers at the Stanford University School of Medicine say the furious pace of growth in the development of machine-learning tools calls for physicians and scientists to carefully examine the ethical risks of incorporating them into decision-making.

In a perspective piece published March 15 in the New England Journal of Medicine, the authors acknowledged the tremendous benefit that machine learning can have on patient health. But they cautioned that the full benefit of using this type of tool to make predictions and take alternative actions can't be realized without careful consideration of the accompanying ethical pitfalls.

"Because of the many potential benefits, there's a strong desire in society to have these tools piloted and implemented into ," said the lead author, Danton Char, MD, assistant professor of anesthesiology, perioperative and pain medicine. "But we have begun to notice, from implementations in non-health care areas, that there can be ethical problems with algorithmic learning when it's deployed at a large scale."

Among the concerns the authors raised are:

  • Data used to create algorithms can contain bias that is reflected in the algorithms and in the clinical recommendations they generate (a simple sketch of this follows the list). Also, algorithms might be designed to skew results, depending on who's developing them and on the motives of the programmers, companies or health care systems deploying them.
  • Physicians must adequately understand how algorithms are created, critically assess the source of the data used to create the statistical models designed to predict outcomes, understand how the models function and guard against becoming overly dependent on them.
  • Data gathered about patient health, diagnostics and outcomes become part of the "collective knowledge" of published literature and information collected by health care systems and might be used without regard for clinical experience and the human aspect of patient care.
  • Machine-learning-based clinical guidance may introduce a third-party "actor" into the physician-patient relationship, challenging the dynamics of responsibility in the relationship and the expectation of confidentiality.
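To make the first of these concerns concrete, here is a minimal sketch of how sampling bias in training data propagates into a model's outputs. It is our illustration, not code from the NEJM piece: the cohort sizes, the biomarker and the group-specific risk shift are all invented for demonstration.

    # Hypothetical demonstration: a model trained mostly on one patient
    # group performs worse on an under-represented group whose risk
    # profile differs. All data here are simulated.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def simulate(n, risk_shift):
        # One invented biomarker drives the outcome; "risk_shift"
        # models a group whose baseline risk differs.
        x = rng.normal(size=(n, 1))
        p = 1 / (1 + np.exp(-(x[:, 0] + risk_shift)))
        return x, (rng.random(n) < p).astype(int)

    # Group A dominates the training data; group B is barely sampled.
    xa, ya = simulate(5000, risk_shift=0.0)
    xb, yb = simulate(100, risk_shift=1.5)
    model = LogisticRegression().fit(np.vstack([xa, xb]),
                                     np.concatenate([ya, yb]))

    # Fresh test data from each group: the accuracy gap is the bias.
    for name, shift in [("group A", 0.0), ("group B", 1.5)]:
        xt, yt = simulate(2000, risk_shift=shift)
        print(f"{name} accuracy: {model.score(xt, yt):.3f}")

On a typical run the model scores noticeably worse on group B, even though nothing in the code is malicious; the skew comes entirely from what the training data under-represent.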

"We need to be cautious about caring for people based on what algorithms are showing us," Char said. "The one thing people can do that machines can't do is step aside from our ideas and evaluate them critically."

Sources of bias

In discussing designer intent, which is one source of bias, the authors pointed to private-sector examples of algorithms meant to ensure specific outcomes, such as Volkswagen's algorithm that allowed vehicles to pass emissions tests by reducing their nitrogen oxide emissions during the tests.

David Magnus, Ph.D., senior author of the piece and director of the Stanford Center for Biomedical Ethics, said bias can come into play in three ways: human bias; bias that is introduced by design; and bias in the ways health care systems use the data.

"You can easily imagine that the algorithms being built into the health care system might be reflective of different, conflicting interests," said Magnus, who is also the Thomas A. Raffin Professor of Medicine and Biomedical Ethics. "What if the algorithm is designed around the goal of saving money? What if different treatment decisions about patients are made depending on insurance status or their ability to pay?"

The authors called for a national conversation about the "perpetual tension between the goals of improving health and generating profit … since the builders and purchasers of machine-learning systems are unlikely to be the same people delivering bedside care."

They also put the responsibility for finding solutions and setting the agenda on physicians.

"Ethical guidelines can be created to catch up with the age of machine learning and that is already upon us," the authors wrote. "Physicians who use machine-learning systems can become more educated about their construction, the data sets they are built on and their limitations. Remaining ignorant about the construction of machine-learning systems or allowing them to be constructed as black boxes could lead to ethically problematic outcomes."

The authors acknowledge the social pressure to incorporate the latest tools in order to provide better health outcomes for patients.

"Artificial intelligence will be pervasive in health care in a few years," said co-author Nigam Shah, MBBS, Ph.D., associate professor of medicine. But health care systems need to be aware of the pitfalls that have happened in other industries, he added.

Shah noted that models are only as trustworthy as the data being gathered and shared. "Be careful about knowing the data from which you learn," he said.

Could data become the doctor?

The authors wrote that what physicians learn from the data needs to be heavily weighed against what they know from their own clinical experience. Overreliance on machine guidance might lead to self-fulfilling prophecies.

For example, they said, if clinicians always withdraw care in patients with certain diagnoses, such as extreme prematurity or brain injury, machine-learning systems may learn that such diagnoses are always fatal. Conversely, machine-learning systems, properly deployed, may help resolve disparities in health care by compensating for known biases or by identifying where more research is needed to balance the underlying data.
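A minimal sketch of that feedback loop, again our illustration rather than the authors' code (the diagnosis, the survival rates and the treatment policy are all invented):

    # Hypothetical self-fulfilling prophecy: if care is always withdrawn
    # for one diagnosis, the recorded outcome is always death, so a model
    # learns "diagnosis => fatal" even though the condition would be
    # survivable with treatment. All data here are simulated.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 10_000

    has_dx = rng.random(n) < 0.1   # 10% of patients carry the diagnosis
    treated = ~has_dx              # policy: withdraw care for the diagnosis
    # Invented biology: with treatment, the diagnosis is 60% survivable.
    death_risk = np.where(has_dx, 0.4, 0.1)
    died = np.where(treated, rng.random(n) < death_risk, True)

    model = LogisticRegression().fit(has_dx.reshape(-1, 1).astype(float),
                                     died.astype(int))
    print(f"learned P(death | diagnosis):    "
          f"{model.predict_proba([[1.0]])[0, 1]:.2f}")  # near 1.0, not 0.4
    print(f"learned P(death | no diagnosis): "
          f"{model.predict_proba([[0.0]])[0, 1]:.2f}")

The model reports a near-certain death probability for the diagnosis because the records capture the consequences of the care policy, not the underlying biology.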

Magnus said a current pilot study of an algorithm developed at Stanford to predict the need for a palliative care consultation illustrates how careful, collaborative design of an algorithm and of how its data are used can guard against misinterpreting data in making care decisions.

Shah is helping to lead the pilot study. In this case, Magnus said, physicians and designers work closely to ensure that the incorporation of the predictions into the care equation includes guarantees that the physician "has a full understanding that the patient problems are answered and well-understood."

The insertion of an algorithm's predictions into the patient-physician relationship also introduces a third party, turning the relationship into one between the patient and the health care system.

It also means significant changes in terms of a patient's expectation of confidentiality.

"Once machine-learning-based decision support is integrated into clinical care, withholding information from electronic records will become increasingly difficult, since patients whose data aren't recorded can't benefit from machine-learning analyses," the authors wrote.

Magnus said the pressure to turn to data for answers is especially intense in fields that are growing quickly, such as genetic testing and sequencing.

"In a situation where you're looking for any evidence in informing your decision-making that you can get, and now you have all this genetics information and you don't know how to deal with," having clear data can be enormously helpful, he said.

Char, who is doing research funded by the National Institutes of Health on the ethical and social implications of expanded genetic testing of critically ill children, said it's important for care professionals to figure out how to minimize negative outcomes of machine-learning-based decisions in all fields.

"I think society has become very breathless in looking for quick answers," he said. "I think we need to be more thoughtful in implementing machine learning."
