When robots commit wrongdoing, people may incorrectly assign the blame

Last year, a self-driving car struck and killed a pedestrian in Tempe, Arizona. The woman's family is now suing Arizona and the city of Tempe for negligence. But in an article published on April 5 in the journal Trends in Cognitive Sciences, cognitive and computer scientists ask at what point people will begin to hold self-driving vehicles or other robots responsible for their own actions, and whether blaming them for wrongdoing will be justified.

"We're on the verge of a technological and social revolution in which will replace humans in the workplace, on the roads, and in our homes," says Yochanan Bigman of the University of North Carolina, Chapel Hill. "When these robots inevitably do something to harm humans, how will people react? We need to figure this out now, while regulations and laws are still being formed."

The article explores how the human moral mind is likely to make sense of robot responsibility. The authors argue that the presence—or perceived presence—of certain key capacities could make people more likely to hold a machine morally responsible.

Those capacities include autonomy, the ability to act without human input. The appearance of a robot also matters: the more humanlike a robot looks, the more likely people are to ascribe a mind to it. Other factors that can lead people to perceive robots as having "minds of their own" include an awareness of the situations they find themselves in, as well as the ability to act freely and with intention.

Such issues have important implications for how people interact with robots. They're also critical considerations for the people and companies who create and operate autonomous machines, and the authors argue that there could be cases where robots take the blame for harm caused to humans, shielding the people and companies who are ultimately responsible for programming and directing them.

As the technology continues to advance, there will be other intriguing questions to consider, including whether robots should have rights. Already, the authors note, the American Society for the Prevention of Cruelty to Robots and a 2017 European Union report have argued for extending certain moral protections to machines. They explain that such debates often revolve around the impact machine rights would have on people, as expanding the moral circle to include machines might in some cases serve to protect people.

While robot morality might still sound like the stuff of science fiction, the authors say that's exactly why it's critical to ask such questions now.

"We suggest that now—while machines and our intuitions about them are still in flux—is the best time to systematically explore questions of morality," they write. "By understanding how human minds make sense of morality, and how we perceive the mind of machines, we can help society think more clearly about the impending rise of robots and help roboticists understand how their creations are likely to be received."

As the early experience in Tempe highlights, people are already sharing roads, skies, and hospitals with autonomous machines. Inevitably, more people will get hurt. How robots' capacity for moral responsibility is understood will have important implications for real-world public policy decisions. And those decisions will help to shape a future in which people may increasingly coexist with ever more sophisticated, decision-making machines.


More information: Bigman et al., "Holding Robots Responsible: The Elements of Machine Morality," Trends in Cognitive Sciences (2019). DOI: 10.1016/j.tics.2019.02.008. https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(19)30063-4
Journal information: Trends in Cognitive Sciences

Provided by Cell Press

User comments

Apr 05, 2019
I think that the law is pretty clear: the owner and/or the manufacturer of the robot would be held liable.

Apr 05, 2019
MR,
A similar story is happening at Boeing...

Apr 05, 2019
Unless the woman was at fault for stepping suddenly in front of a moving vehicle. Even if it was a robot, most people don't have robot reflexes to stop that fast. Should the robot be held to a higher standard than a typical human driver?
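
To put rough numbers on that, here is a back-of-the-envelope Python sketch of stopping distances; the speed, friction, and reaction-time figures are illustrative assumptions, not data from the Tempe case:

    # Stopping distance = reaction distance + braking distance v^2/(2*mu*g).
    # Assumed values: 20 m/s (~45 mph), mu = 0.7 (dry asphalt),
    # ~1.5 s human reaction time vs. ~0.1 s sensor-to-brake latency.
    g, mu, v = 9.81, 0.7, 20.0
    braking = v**2 / (2 * mu * g)  # ~29 m, identical for human and robot
    for label, t_react in (("human", 1.5), ("robot", 0.1)):
        print(label, round(v * t_react + braking, 1), "m")
    # human 59.1 m, robot 31.1 m: faster reflexes help, but neither
    # driver can stop instantly for someone a few metres ahead.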

Apr 05, 2019
It is the corporation that is responsible for directing the program.
The robot is not sentient or responsible.

Apr 05, 2019
It makes no more sense to blame a self-driving car as an individual for an error it makes than to blame an individual car, fridge, or washing machine for a manufacturing or design fault.

All manufactured goods are the same to within some manufacturing tolerance, they are not individuals. By analogy, we could say that all AI implementations are clones of the original and therefore it is the original and its designer who are responsible for flaws in the design.

Apr 07, 2019
"It makes no more sense to blame a self-driving car as an individual for an error it makes than to blame an individual car, fridge, or washing machine for a manufacturing or design fault."

Depends on whether that particular unit was faulty, rather than the design.

In general, seeing that all AI today consists of statistical inference machines, massively parallel search engines / data mines, or "frozen" neural networks that effectively implement a black-box algorithm fixed during run-time, blaming the robot would be like taking a grandfather clock to court: they're literally just doing what they're programmed to do and cannot do otherwise.

You can start to assign agency to the robot the moment it has the true ability to alter its own programming. That, however, gives you a dilemma: when the programming is altered according to programmed rules, it's still just a program, and the designers are still responsible.
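
To make that concrete, here is a minimal Python sketch of a "self-modifying" agent; the agent, its sensor input, and its meta-rule are invented for illustration:

    # A toy agent that rewrites its own decision threshold. The rewrite
    # follows a fixed meta-rule, so the entire trajectory is determined
    # by the initial state; nothing here "chooses".
    def run(steps, threshold=0.5):
        actions = []
        for t in range(steps):
            signal = (t * 0.37) % 1.0           # deterministic "sensor" input
            actions.append("brake" if signal > threshold else "drive")
            threshold = 0.5 * threshold + 0.25  # the agent "reprograms" itself
        return actions

    # Re-running from the same initial state always yields the same actions.
    assert run(10) == run(10)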

Apr 07, 2019
Then, if the AI changes its programming randomly, in the classical sense of randomness, you wouldn't trust it to behave correctly in the first place, because you could be sure that some of your AIs would inevitably go rogue. You're unlikely to implement a robot that flips a coin to decide what to do.

But if you do, there's another pitfall: how to generate the randomness. A computer's random numbers are pseudo-random, which means it's still just following a program. If you tell it to seed its random number generator with some external variable, like the local time, you're just making a program that says "when the time is X, do Y," because your RNG algorithm is fully deterministic.
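
A quick Python illustration of that point (the seed value is an arbitrary stand-in for "local time"):

    import random

    # Seeding a PRNG from an external variable only selects which
    # deterministic sequence you get: same seed, same "choices".
    seed = 1554654000  # hypothetical timestamp used as a seed
    a = random.Random(seed)
    b = random.Random(seed)
    assert [a.random() for _ in range(5)] == [b.random() for _ in range(5)]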

The point is that a deterministic algorithm doesn't have agency - it can't "choose" in any sense of the word. It's just a tick-tock clock going from one inevitable state to the next. The AI's designer is just choosing not to know what those states are, so they can pretend the machine made the choice.

Apr 07, 2019
I am going to take a more cynical view of this article. Aren't the authors diminishing the meaning, and the implied responsibility, of human free will? If you give a manufactured robot and a human the same legal standing with regard to the decisions each makes, then you are reducing the respect for human life.

Apr 07, 2019
If you blame the manufacturers, what if the robot did it deliberately? This is not James Bond - Licence to ....
Careful what you wish for!

Apr 08, 2019
Robot rights are fine and well, but long before we consider them we must follow through on, and enforce in an absolute manner, all Universal Human Rights and Animal Rights. Everything wrong with the world today can be described in terms of violating one of those principles.

May 10, 2019
Ekka,
In the broadest sense humans are no less programmed by their DNA.

Here is how it works:
DNA provides a heuristic guide that predisposes an individual to acquire knowledge and experience via interaction with the environment. This requires a period of maturation.

Likewise, if a robot had only heuristic programming that compelled it to learn from its environment, then the same conditions would apply to both human and robot.

In crime and punishment, sanctions are given with the following goals in mind:
1) to redress any wrongdoing (as determined by community standards);
2) to modify the individual's behaviour by attaching a cost to it, e.g., jail time or a fine;
3) to educate offenders about their wrongdoing by community standards;
4) to protect society from rogue elements;
5) to warn others, and thus, in a preventative manner, deter the same errant behaviour.

'Blame' only identifies the agent to be modified.
