Self-driving cars may soon be able to make moral and ethical decisions as humans do


Can a self-driving vehicle be moral, act as humans do, or act as humans expect humans to? Contrary to previous thinking, a ground-breaking new study has found for the first time that human morality can be modelled, meaning that machine-based moral decisions are, in principle, possible.

The research, published in Frontiers in Behavioral Neuroscience by a team from the Institute of Cognitive Science at the University of Osnabrück, used immersive virtual reality experiments to investigate human behavior and moral assessments in simulated road traffic scenarios.

The participants were asked to drive a car through a typical suburban neighborhood on a foggy day, where they encountered unexpected, unavoidable dilemma situations involving inanimate objects, animals, and humans and had to decide which was to be spared. The results were conceptualized by statistical models, leading to rules with an associated degree of explanatory power for the observed behavior. The research showed that moral decisions in the confined scope of unavoidable traffic collisions can be explained well, and modeled, by a single value of life for every human, animal, or inanimate object.

Leon Sütfeld, first author of the study, says that until now it had been assumed that moral decisions are strongly context-dependent and therefore cannot be modeled or described algorithmically. "But we found quite the opposite. Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model, with a value attributed by the participant to every human, animal, or inanimate object." This implies that human moral behavior can be well described by algorithms that could be used by machines as well.
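To make the finding concrete, the decision rule implied by such a model can be sketched in a few lines of Python. This is a minimal illustration only, with invented categories and numeric values; it is not the statistical model fitted in the study.

```python
# A minimal sketch of a value-of-life-based decision rule of the kind the
# study describes: every entity carries a single scalar "value of life",
# and in an unavoidable collision the car picks the trajectory that
# destroys the least total value. The categories and numbers below are
# illustrative assumptions, not parameters estimated by the study.

VALUE_OF_LIFE = {
    "human": 100.0,
    "animal": 10.0,
    "object": 1.0,
}

def value_lost(entities_hit):
    """Total value destroyed if the car hits this group of entities."""
    return sum(VALUE_OF_LIFE[e] for e in entities_hit)

def choose_trajectory(options):
    """Among unavoidable-collision options, pick the least harmful one."""
    return min(options, key=value_lost)

# Example dilemma: swerve into an animal, stay on course toward a
# pedestrian, or swerve the other way into a bin and a cat.
options = [
    ["animal"],
    ["human"],
    ["object", "animal"],
]
print(choose_trajectory(options))  # -> ['animal']
```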

The study's findings have major implications for the debate around the behavior of self-driving cars and other machines in unavoidable situations. For example, a leading new initiative from the German Federal Ministry of Transport and Digital Infrastructure (BMVI) has defined 20 ethical principles for self-driving vehicles, including rules on behavior in the case of unavoidable accidents, under the critical assumption that human moral behavior cannot be modeled.

Prof. Gordon Pipa, a senior author of the study, says that since it now seems possible to program machines to make human-like moral decisions, it is crucial that society engages in an urgent and serious debate. "We need to ask whether autonomous systems should adopt moral judgements. If yes, should they imitate moral behavior by imitating human decisions? Should they behave according to ethical theories, and if so, which ones? And, critically, if things go wrong, who or what is at fault?"

As an example, under the new German ethical principles, a child running onto the road would be classified as significantly involved in creating the risk, and thus less qualified to be saved than an adult standing on the footpath as a non-involved party. But is this a moral value held by most people, and how large is the scope for interpretation?

"Now that we know how to implement human ethical decisions into machines we, as a society, are still left with a double dilemma," explains Prof. Peter König, a senior author of the paper. "Firstly, we have to decide whether moral values should be included in guidelines for machine behavior and secondly, if they are, should machines should act just like humans."

The study's authors say that autonomous cars are just the beginning, as robots in hospitals and other artificial intelligence systems become more commonplace. They warn that we are now at the beginning of a new epoch that needs clear rules; otherwise, machines will start making decisions without us.



More information: Frontiers in Behavioral Neuroscience, DOI: 10.3389/fnbeh.2017.00122
Provided by Frontiers
Citation: Self-driving cars may soon be able to make moral and ethical decisions as humans do (2017, July 5) retrieved 23 October 2019 from https://medicalxpress.com/news/2017-07-self-driving-cars-moral-ethical-decisions.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.


User comments

Jul 05, 2017
Human driving morality: fool swerves to miss a frigging squirrel, kills a pedestrian instead. Darwin...

Jul 05, 2017
Humans typically make ethical decisions *wrong*. Bad model for entities as powerful and impervious to pain or imprisonment as a car.

Jul 05, 2017
AI cars are intrinsically more moral and ethical because they are far better at driving than the typical distracted, impaired, emotional, unconcerned and malicious human is.

And their occasional mistakes can be used to improve their behavior in consistent and rational ways. With humans, the best you can do is modify their machines to attempt to compensate for their shortcomings. Which inevitably leads to machines which do all the driving for them anyway.

IOW AI vehicles are the inevitable result of humans' inability to drive safely.

Jul 05, 2017
"Firstly, we have to decide whether moral values should be included in guidelines for machine behavior and secondly, if they are, should machines should act just like humans."

...probably not a good idea. Especially in high-pressure situations (like an unavoidable accident), our ability to judge is severely limited. So I don't think we should hold machines to a higher standard than humans. In particular, we should not open ourselves up to situation-dependent value-of-life judgements...which can be supremely subjective.

And I think the German ethics guidelines formulated by the commission are misrepresented in the above article. The guidelines state that there shall be no differentiation based on age, gender, race, etc. when an accident involving bodily harm is unavoidable (only the number of casualties shall be minimized). So the given example with the child in the street is wrong.

Jul 05, 2017
Morality implies intelligent choice and autonomous cars are not intelligent.

If you program your autonomous vehicle to destroy itself (and its passengers) to avoid causing damage or death to someone outside the vehicle, you will find your autonomous vehicle to be difficult to sell.

Autonomous vehicles should be programmed to drive as safely as programming allows and to protect their passengers, recognizing that accidents will happen.

Jul 05, 2017
I think we should program them as close to what humans would do as possible.
Always save the driver and passengers first.
Save people outside the vehicle only if it's not going to put the driver in harm's way.
Animals come behind that, and inanimate objects behind them; it's not that complicated.

What if you have to crash into 5 people to save the driver from hitting a wall? Yes, it should do that - they have a chance to jump out of the way and might be OK. If the car hits the wall, the driver is dead.
In an emergency situation, people will always save themselves first and not drive into the wall. Also, the car might have a third option that no driver would ever have come up with that saves everyone, since it has much more control than a person.

Jul 05, 2017
creepies,
Actually, a human driver might kill himself to avoid killing, for example, a group of children. But you don't want your car deliberately killing you.

Machines are not moral. Morality is predicated on intelligence.

Jul 05, 2017
The problem with intelligent machines like cars is that humans hold machines to a higher performance standard than we hold other humans to. We're much more forgiving and accepting of failure from other humans. After all, we're using the machines because they're better than humans.

So we're going to have cars that drive better and more safely than humans do. And when the cars damage something or someone, we're going to insist they do better than humans would in the event. So we're going to insist that cars are more ethical than humans are. Even though humans are very inconsistent in ethical conduct, even within a single human, let alone across the diverse billions of us. And we're even worse at stating our ethics articulately, and worse still at applying them to others as we do to ourselves. In short, we're unethical, and even more "a-ethical" (operating in the absence of an ethical regime) when holding ourselves to account.

Making ethical machines impossible.

Jul 05, 2017
"Machines are not moral. Morality is predicated on intelligence"
Morality is biological. It is intrinsic in most of our species.

Internal altruism in conjunction with external animosity. This is the tribal dynamic. Those tribes with a stronger tribal identity consistently prevailed in competition over resources. This is how tribalism became encoded within our genes.

You should know - the mechanism is succinctly described in your book. Of course your book also tells us that we can't be moral unless we believe, which is a lie. Lying is distinctly immoral unless done in defense of your tribe, in which case it is distinctly moral.
"Morality implies intelligent choice and autonomous cars are not intelligent"
Animals behave morally. It is instinctive with them as well.

Jul 05, 2017
Any cultural anthropologist will argue (and provide copious evidence) that there is no single set of "moral" or "ethical" positions to be found out there among the many cultures of the world. So the ethical positions of which culture should be converted to rules and thence to software? Thinking that one set can be chosen is nonsense, but it's pervasive nonsense. The techies of the world are not known for their grasp of world cultures. Cf. ethnocentrism, which is what we're seeing here.

Jul 06, 2017
"Any cultural anthropologist will argue (and provide copious evidence)"
Few 'cultural anthropologists' will acknowledge that there is such a thing as tribalism because it is far too incendiary.
"that there is no single set of 'moral' or 'ethical' positions to be found out there among the many cultures of the world"
There IS one morality and it is called the tribal dynamic.

Internal altruism in conjunction with external animosity.

Your anthros don't like it because it explains too much and restricts too little.

Jul 06, 2017
Otto, you're right about tribalism. But you're wrong about anthropologists. Of course anthropologists acknowledge tribalism, and of course they're the leading experts in facts and analysis of it.

E.g.:
http://onlinelibr...294/full

Jul 07, 2017
Wow. I'm not sure I want to engage TheGhostofOtto1923 in this, because "There IS one morality and it is called the tribal dynamic" is a very odd, meaningless statement. Please explain it, and a few references to go along with the explanation would be welcome. I have a PhD in anthropology, many years of field experience . . . all the usual things that professionals in any field have. I have the feeling I'm dealing with an amateur here. Do you have any training in anthropology? Can you say how it is that you "know" what anthropologists acknowledge or don't?

Jul 07, 2017
"Wow. I'm not sure I want to engage TheGhostofOtto1923 in this, because 'There IS one morality and it is called the tribal dynamic' is a very odd, meaningless statement"
I did explain it. Internal altruism in conjunction with external animosity. Group selection consistently favored tribes with this combination of traits.
"I have the feeling I'm dealing with an amateur here"
Of course you're dealing with an amateur here. An amateur who knows where to look for corroboration.
http://rint.recht...rid2.htm

-The experts quoted in this excellent paper are not amateurs.

Liberal arts soft science philos and anthros et al are in the business of directing human behavior, not explaining it. Which explains the whole tabula rasa fiasco.

Here's an explanation from a contemporary perspective.
http://bigthink.c...angerous

Jul 08, 2017
TheGhostofOtto1923. I'm not really impressed. The first article you linked to is a history-of-thought article. It's interesting, but I fail to see anything from contemporary (or recent) anthropology there. EO Wilson? Hey, I actually know EO. Sociobiology isn't anthropology; there was and is considerable animosity between those fields. Group selection, yes, that's interesting. As early as 1968 I suggested in a paper written for Ernst Mayr's graduate seminar that group selection could have a role in human societies, and why. But it's not nearly as simple as you seem to think, because altruism is a very tricky concept -- definitely not unitary. Bob Trivers and I had many discussions about it. This is not new material to me. I suggest putting current politics aside. Really. It adds nothing to the discussion. "Directing human behavior" criticism is just silly. As for the bigthink link, I'm completely unimpressed.

Jul 09, 2017
TheGhostofOtto1923:
"Few 'cultural anthropologists' will acknowledge that there is such a thing as tribalism because it is far too incendiary"


I'd like to see you admit you were wrong when you asserted that in this thread. You immediately faced a CA disagreeing with you but of course they acknowledge such a thing as tribalism. You're posting arguments about tribalism from anthropology yourself.

Jul 09, 2017
"but I fail to see anything from contemporary (or recent) anthropology there"
That's what I said. Tribalism contradicts the current sociopolitical narrative. Among other things it says that street gangs and bigotry are normal. Society is busy shaming people to resist these things. Real explanations are counterproductive.
"altruism is a very tricky concept -- definitely not unitary"
There was a recent case involving welfare fraud in Lakewood, NJ:
http://www.nj.com...obe.html

- and there was a telling quote in one of the articles I can't find now. A rabbi was explaining that nowhere is there a community more willing to help its residents: 'If anyone needs something we give it to them.'

I haven't seen a clearer example of the tribal dynamic. Crimes committed against competing tribes are not considered crimes.
Cont>

Jul 09, 2017
Lakewood is also a revealing example of the tribalist mandate to outgrow and overrun.
http://www.dailym...mes.html

- See, the acceptance of tribalism would even allow people to revisit the root causes of pogrom and holocaust in Europe.

Even so, this acceptance is the only way to produce a lasting peace in places like Palestine where a one-state solution is the only feasible one.

As long as religions exist there which have survived to the present expressly because they were better at acting on this dynamic than the religions they drove to extinction, there will be no peace.

IOW there will be no peace until the influence of these tribalist institutions is thoroughly mitigated.

Jul 09, 2017
wailuku1943:
"The first article you linked to is a history-of-thought article. It's interesting, but I fail to see anything from contemporary (or recent) anthropology there."


TheGhostofOtto1923:
"but I fail to see anything from contemporary (or recent) anthropology there"
"That's what I said."


No, wailuku1943 the anthropologist said only that that single article had no recent anthro in it. You said:
"Few 'cultural anthropologists' will acknowledge that there is such a thing as tribalism"


Do I have to explain the difference between a single article you cherrypicked and the entire field of cultural anthropology? Or can you admit that cultural anthropologists acknowledge the existence of tribalism?
