Mirror neuron activity predicts people's decision-making in moral dilemmas, study finds

Researchers found that the brain’s inferior frontal cortex (circled) is more active in people who are more averse to harming others when facing moral dilemmas. Credit: UCLA Health

It is wartime. You and your fellow refugees are hiding from enemy soldiers, when a baby begins to cry. You cover her mouth to block the sound. If you remove your hand, her crying will draw the attention of the soldiers, who will kill everyone. If you smother the child, you'll save yourself and the others.

If you were in that situation, which was dramatized in the final episode of the '70s and '80s TV series "M.A.S.H.," what would you do?

The results of a new UCLA study suggest that scientists could make a good guess based on how the brain responds when people watch someone else experience pain. The study found that those responses predict whether people will be inclined to avoid causing harm to others when facing moral dilemmas.

"The findings give us a glimpse into what is the nature of morality," said Dr. Marco Iacoboni, director of the Neuromodulation Lab at UCLA's Ahmanson-Lovelace Brain Mapping Center and the study's senior author. "This is a foundational question to understand ourselves, and to understand how the brain shapes our own nature."

In the study, which was published in Frontiers in Integrative Neuroscience, Iacoboni and colleagues analyzed mirror neurons, brain cells that respond equally when someone performs an action or simply watches someone else perform the same action. Mirror neurons play a vital role in how people learn through mimicry and feel empathy for others.

When you wince while seeing someone experience pain—a phenomenon called "neural resonance"—mirror neurons are responsible.

Iacoboni wondered if neural resonance might play a role in how people navigate complicated problems that require both conscious deliberation and consideration of another's feelings.

To find out, researchers showed 19 volunteers two videos: one of a hypodermic needle piercing a hand, and another of a hand being gently touched by a cotton swab. During both, the scientists used a functional MRI machine to measure activity in the volunteers' brains.

Researchers later asked the participants how they would behave in a variety of moral dilemmas, including the wartime scenario involving the crying baby, whether to torture another person to prevent a bomb from killing several others, and whether to harm research animals in order to cure AIDS.

Participants also responded to scenarios in which causing harm would make the world worse—inflicting harm on another person in order to avoid two weeks of hard labor, for example—to gauge their willingness to cause harm for moral reasons and for less-noble motives.

Iacoboni and his colleagues hypothesized that people who had greater neural resonance than the other participants while watching the hand-piercing video would also be less likely to choose to silence the baby in the hypothetical dilemma, and that proved to be true. Indeed, people with stronger activity in the inferior frontal cortex, a part of the brain essential for empathy and imitation, were less willing to cause direct harm, such as silencing the baby.

But the researchers found no correlation between people's brain activity and their willingness to hypothetically harm one person in the interest of the greater good—such as silencing the baby to save more lives. Those decisions are thought to stem from more cognitive, deliberative processes.

The study confirms that genuine concern for others' pain plays a causal role in moral dilemma judgments, Iacoboni said. In other words, a person's refusal to silence the baby is due to concern for the baby, not just the person's own discomfort in taking that action.

Iacoboni's next project will explore whether a person's decision-making in moral dilemmas can be influenced by decreasing or enhancing activity in the areas of the brain that were targeted in the current study.

"It would be fascinating to see if we can use brain stimulation to change complex moral decisions through impacting the amount of concern people experience for others' pain," Iacoboni said. "It could provide a new method for increasing concern for others' well-being."

The research could point to a way to help people with mental disorders such as schizophrenia that make interpersonal communication difficult, Iacoboni said.



More information: Leonardo Christov-Moore et al. Deontological Dilemma Response Tendencies and Sensorimotor Representations of Harm to Others, Frontiers in Integrative Neuroscience (2017). DOI: 10.3389/fnint.2017.00034

User comments

Jan 05, 2018
Questions like this are moral false dilemmas, because the question is always set up in a way to force one to act unethically, and if you should find a loophole - like sticking your finger in the baby's mouth for a pacifier - they always find some excuse why you couldn't solve it that way.

So in essence, the people put under the question are not making free decisions - the researchers are deciding for them. By suppressing free thought, the researchers are distorting the decision-making process by forcing the subject to think unethically, when in reality they might be trying to solve the problem in an ethical way. Once primed in that way, no true behaviour can result.

This problem is illustrated by the short fictional story about Mr. Feynman and the Lightbulb: "What would Feynman do?"
https://blogs.msd...nman-do/

Jan 05, 2018
In other words, when one is primed to think unethically, "I must kill one of them", then the question becomes about how to choose, and that question also involves evaluating the social effects of the choice, i.e. "which choice reflects the best on me". People with stronger social affinity are more likely to go for the socially acceptable choice, like "save the baby" as an intuition.

Empathy involves reasoning about other minds and externalizing your own self, so the observation doesn't actually tell whether the people are feeling empathy for the baby, or empathy for themselves in avoiding social shame.

To test that, you'd have to isolate the social aspect by ensuring that nobody will know whether one chose to kill the baby.

Jan 06, 2018
Questions like this are moral false dilemmas, because the question is always set up in a way to force one to act unethically, and if you should find a loophole - like sticking your finger in the baby's mouth for a pacifier - they always find some excuse why you couldn't solve it that way.


I don't understand you, that would defeat the purpose of the study.
It's not meant to be a study in creative problem solving, they explicitly want to force you to choose the least bad (in your opinion) of only two rotten choices. How else can you study morals? Only ask pleasant questions?
And it works.
Would you kill an innocent person to save two other innocents or 5, 10, 100, 1000?

Different people will give different answers; that's how you study these things.

Jan 07, 2018
"Researchers found that the brain's inferior frontal cortex (circled) is more active in people who are more averse to harming others... The study confirms that genuine concern for others' pain plays a causal role in moral dilemma judgments"

-Perhaps they've found the location of the psychopath defect. Next will be to examine psychopath brains for evidence.

Jan 07, 2018
It's not meant to be a study in creative problem solving


Most people are trying to be good, and suppressing that reaction by force has the effect of fudging up the result.

It no longer represents the person's morality, but their behaviour in an entirely artificial situation where they are being primed to act against their moral instincts. In such situations, people's moral compass tends to go hang.

Unfortunately it's no longer ethically permissible to do tests like the Milgram experiment, or the electric shock test.

Jan 08, 2018
Unfortunately it's no longer ethically permissible to do tests like the Milgram experiment, or the electric shock test
Sure it is.

"Haggard and his colleagues wanted to find out what participants were feeling. They designed a study in which volunteers knowingly inflicted real pain on each other...

"In his experiments [2016], the volunteers (all were female, as were the experimenters, to avoid gender effects) were given £20 (US$29). In pairs, they sat facing each other across a table, with a keyboard between them. A participant designated the 'agent' could press one of two keys; one did nothing. But for some pairs, the other key would transfer 5p to the agent from the other participant, designated the 'victim'; for others, the key would also deliver a painful but bearable electric shock to the victim's arm."

Jan 08, 2018
Sure it is.


Not really. In the Milgram experiment, the test subjects were intentionally made to believe they were seriously harming another person, and told to continue. They were subjected to a genuinely traumatic experience.

The Stanford experiment in turn quickly spiraled down to cruelty and physical violence, with one "inmate" suffering a mental breakdown.

Then there were other questionable studies, like the one where they gathered a church full of priests and gave them all LSD without them knowing they'd been drugged - to see what would happen.

Jan 08, 2018
Then there were other questionable studies, like the one where they gathered a church full of priests and gave them all LSD without them knowing they'd been drugged - to see what would happen
- And black soldiers were infected with syphilis. So what?

What do any other immoral experiments have to do with this one?
The Stanford experiment in turn quicky spiraled down to cruelty and physical violence, with one "inmate" suffering a mental breakdown
Just a suggestion... you really ought to post refs and excerpts rather than ad libbing.

Jan 08, 2018
This article is another step up the technological ladder that could have grave consequences for our society. The devices to read human thought are being researched and developed.

Which leads to several issues we, as a society, will need to address within this century.

Compulsory or voluntary? With all the privately owned firearms available... How do you convince the gun-toters that they should obediently allow themselves to be tested? And what if they decide not to abide by the 'scientific' testing results?

How will we confirm that the results are correct? And not a manipulation by some special-interest group?

When it goes wrong and an innocent party comes to harm - whether by failing to detect a criminal psychopath or by persecuting a sane person with accusations of being a threat to society - how will you determine the error and make amends?

Jan 19, 2018
So what?


The point is, after the Stanford experiment, ethical standards were raised to exclude such studies. It is no longer permissible to perform them.

It is not possible, or permissible, anymore to design a test where a person is genuinely convinced that they're in command of another person's life or death - so it is not possible to conduct an experiment that would reveal how they really would react to such a moral dilemma.

So these tests devolve down to social expectations. The mirror neurons activate when people think of what other people would think of them, like trying to reason which choice would reflect best on them from a social point of view, rather than feeling empathy for the potential victims.
