Self-driving cars & The Moral Machine


PK99

Legendary Member
Location
SW19
I wasn't sure where to put this, so it's in Café...

The topic of self-driving cars is ever more current, and the dilemmas posed in designing the decision algorithms by which they operate are complex and difficult to resolve. It is not simply a question of "Will self-driving cars be safer?", but "What moral judgements should be embedded in the algorithms?"

Trolleyology (no, I did not make that up) addresses the issue in a conceptual and philosophical sense; this website (from MIT) poses some interesting car-related questions/dilemmas that both illustrate the sort of programming that will be needed and gather data for research purposes.

http://moralmachine.mit.edu/
 
There was an author (Isaac Asimov) who, in a 1942 short story, suggested a basic set of rules for robots:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov later added a further rule, the "Zeroth Law":

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.



These could conceivably be applied to automated cars, but conflicts would be constant.

Take the example above by @User9609

Would the vehicle be allowed to swerve off the road when that decision would cause harm to its occupants?
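As a purely illustrative sketch (the action names and harm counts are invented, not anyone's actual control logic), here is why a strict reading of the laws deadlocks in exactly this situation: once every available action harms a human, the First Law filters them all out and the hierarchy gives no answer.

```python
# Hypothetical sketch: Asimov's laws as a strict priority filter.
# When every candidate action violates the First Law, the filter
# returns nothing -- the hierarchy alone cannot choose.

def first_law_ok(action):
    """First Law: the action must not harm a human."""
    return action["humans_harmed"] == 0

def second_law_ok(action):
    """Second Law: the action must follow the occupants' orders."""
    return action["obeys_orders"]

def choose_action(candidates):
    # Apply the laws in strict priority order.
    for law in (first_law_ok, second_law_ok):
        candidates = [a for a in candidates if law(a)]
    return candidates

# Brake failure: both options harm someone, so no action survives.
options = [
    {"name": "continue ahead",  "humans_harmed": 5, "obeys_orders": True},
    {"name": "swerve off road", "humans_harmed": 1, "obeys_orders": False},
]
print(choose_action(options))  # [] -- the laws give no answer
```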
 

classic33

Leg End Member
Quoting an earlier post:
"There was an author (Isaac Asimov) who suggested a basic set of rules for robots... Would the vehicle be allowed to swerve off the road when that decision would cause harm to its occupants?"
Some are now saying that those "laws" should be rewritten, or else ignored entirely, as far as self-driving cars are concerned.
 

classic33

Leg End Member
PK99 said:
"The topic of self-driving cars is ever more current... What moral judgements should be embedded in the algorithms? ... http://moralmachine.mit.edu/"
"Could never happen" was the answer when I asked a similar question, even though most of the driverless car manufacturers were looking at it.
 
classic33 said:
"Some are now saying that those 'laws' should be rewritten, or else ignored entirely, as far as self-driving cars are concerned."
Cynical me says that is because they raise too many questions... and the companies would be unable to apply them. That is the reason why I used that dilemma.

The other "argument" against applying these rules is that there is an element of human control still.

Remember the fuss when a "robot" was used to kill the gunman in the States?

Arguably it was not a true robot but a remote-controlled vehicle without autonomous control, and therefore not a robot.
 

NorthernDave

Never used Über Member
Self driving cars can't brake any harder than a human driver, but because they're constantly monitoring the road ahead they should be better prepared to deal with any incident.
While they will undoubtedly reduce the number of collisions and the severity of those that do occur, they won't eliminate collisions completely: unless the programming is so cautious that they become unusable, there will still be incidents when there simply isn't time to avoid the collision.
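To put rough numbers on the monitoring point: with identical brakes, the whole advantage sits in the reaction-time term of the stopping distance d = v·t + v²/(2a). The figures below are assumed, typical values, not measurements.

```python
# Illustrative stopping-distance comparison (assumed, typical figures:
# none of these values come from the thread). Same brakes, different
# reaction time.
V = 13.4      # speed in m/s (~30 mph)
DECEL = 8.0   # peak braking deceleration, m/s^2 (assumed)

def stopping_distance(speed, reaction_time, decel=DECEL):
    # Distance covered while reacting, plus distance covered while braking.
    return speed * reaction_time + speed**2 / (2 * decel)

human = stopping_distance(V, reaction_time=1.5)    # ~1.5 s is a common estimate
machine = stopping_distance(V, reaction_time=0.1)  # assumed sensor latency

print(f"human:   {human:.1f} m")    # ~31.3 m
print(f"machine: {machine:.1f} m")  # ~12.6 m
```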
 

classic33

Leg End Member
Quoting the post above:
"Cynical me says that is because they raise too many questions... Arguably it was not a true robot but a remote-controlled vehicle without autonomous control, and therefore not a robot."
A robot killed a worker at a car plant (Toyota, I think) in Japan. It was taken off the assembly line and broken up for scrap after the incident.
 

classic33

Leg End Member
PK99 said:
"The topic of self-driving cars is ever more current... http://moralmachine.mit.edu/"
It's just under a year since I asked about this (Driverless Dilemma) and was told it could never happen, so there was nothing to worry about.
 

PK99

Legendary Member
Location
SW19
NorthernDave said:
"Self driving cars can't brake any harder than a human driver... there will still be incidents when there simply isn't time to avoid the collision."

Have a look at the linked site - the dilemmas presented are interesting. Essentially the question posed is, "Given a brake failure and guaranteed fatalities, who do you choose to kill?"
 

PK99

Legendary Member
Location
SW19
I'd expect driverless cars to massively reduce the rate of accidents/incidents on account of removing the aggression, impatience and inattentiveness from the cockpit. :okay:

That is pretty much a given.

But restating the sort of question on the MIT site:

Given a complete brake failure, and only two choices - straight ahead or swerve into the opposite carriageway - what should the car be programmed to do: kill the five people crossing the road in front of it, or the cyclist in the opposite carriageway? Does the age, gender, size or criminality of the potential victims influence your choice? If the people are crossing on a red man and the cyclist is correctly stopped at the crossing, is the answer different?
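To make concrete what "programmed to do" would have to mean, here is a hypothetical cost function of the kind such a policy would need. Everything in it (names, weights) is invented for illustration; picking those weights is precisely the moral judgement in question.

```python
# Hypothetical sketch of the choice the MIT site forces: given brake
# failure, rank the two outcomes. Every weight below is an invented
# placeholder -- choosing the weights IS the moral judgement.

def cost(group, weights):
    # Sum a per-person penalty, discounted if they crossed on a red man.
    total = 0.0
    for person in group:
        penalty = weights["life"]
        if person.get("crossing_on_red"):
            penalty *= weights["red_man_discount"]
        total += penalty
    return total

weights = {"life": 1.0, "red_man_discount": 0.8}  # placeholders, not a recommendation

ahead = [{"crossing_on_red": True}] * 5  # five pedestrians crossing on a red man
swerve = [{"crossing_on_red": False}]    # one cyclist, correctly stopped

# The "programmed" decision: pick the lower-cost outcome.
decision = "straight ahead" if cost(ahead, weights) < cost(swerve, weights) else "swerve"
print(decision)  # "swerve" with these weights -- change them and it flips
```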
 

swee'pea99

Squire
Given the range and extent of the variables involved, I'd think it all but impossible to program the things using algorithms. Might they go about it in a completely different way? I seem to remember that they programmed production robots by having them mimic the actions of a production-line worker. Might they program driverless cars by sending them out in real-life driving conditions with very good drivers? Of course you'd never end up with perfection, but then there's no such thing, as the moral dilemmas illustrate. All you could hope for is that they'd be pretty damn good. If they could standardise the behaviour (including the errors) of a really good driver, tasked to drive as best/safest as possible, it would certainly make for far better driving than the current average.
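That mimicry idea is essentially what the machine-learning literature calls behavioural cloning: fit a model to logged (situation, control) pairs from good drivers instead of hand-writing rules. A toy sketch, with invented feature names and a made-up demonstrator:

```python
# Toy behavioural-cloning sketch: learn steering from demonstrations
# instead of hand-coded rules. Feature names are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Pretend log from a good driver: [lane_offset_m, obstacle_dist_m] -> steer
X = rng.uniform([-1.0, 5.0], [1.0, 50.0], size=(500, 2))
y = -0.5 * X[:, 0] + 2.0 / X[:, 1]   # the demonstrator's (unknown) policy

# Fit a linear policy by least squares -- real systems use far richer models.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def steer(lane_offset, obstacle_dist):
    return coef[0] * lane_offset + coef[1] * obstacle_dist + coef[2]

print(steer(0.3, 20.0))  # imitates the demonstrator in familiar situations
```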
 

Vapin' Joe

Formerly known as Smokin Joe
PK99 said:
"That is pretty much a given... Given a complete brake failure, and only two choices... what should the car be programmed to do: kill the five people crossing the road in front of it, or the cyclist in the opposite carriageway?"
Pretty irrelevant, really. Assuming that self-driving cars reduce the chances of being involved in an accident to around the same as being struck by lightning, the odd anomaly won't really matter.
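For a rough feel of the orders of magnitude behind that comparison (every figure below is an assumed placeholder, not a sourced statistic):

```python
# Illustrative only: all figures are assumed placeholders, chosen just
# to show the orders-of-magnitude argument, not real statistics.
population = 60_000_000          # assumed country size
road_deaths_today = 1_800        # assumed annual road deaths
lightning_odds = 1 / 10_000_000  # assumed annual odds of death by lightning

current_odds = road_deaths_today / population  # ~1 in 33,000 per year
target_deaths = lightning_odds * population    # deaths/year at lightning-level risk

print(f"today:  1 in {1 / current_odds:,.0f} per year")
print(f"target: {target_deaths:.0f} deaths per year at lightning-level odds")
```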
 