Driverless Dilemma

What should the car do?



Levo-Lon

Guru
I'll ignore your PM, @classic33. My post was humour.

PMs are not private here, as you know, so I won't reply to PMs.
 

winjim

Straddle the line, discord and rhyme
If that's the case @winjim, who "pulls" the lever, and when?
A human operator could override, although as the technology progresses we will become lazier and less attentive until we're just passengers. I don't think the car should be able to pull the lever.

Anyway, why all this talk of driverless cars? I'm waiting for the development of riderless bikes :hyper:.
 

Inertia

I feel like I could... TAKE ON THE WORLD!!
It is one of the reasons they failed last time

I remember a version from the '70s that used sensors to enforce speed and distance, in order to make driving safer.

It worked perfectly: the sensors allowed the driver to choose their speed, but if you got within the "safe braking distance" the sensors would override.

It worked perfectly until the first road trials, when they found that too many drivers saw the safe distance as a gap to fit into.

The car would be travelling at 60 mph, and another would suddenly fill the gap. The new gap dictated a speed of 20 mph, so the car would immediately slow to this speed, causing emergency braking in the traffic behind and a few shunts as a result.

The system only worked if everyone else obeyed the rules.
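
Very roughly, the kind of rule that system describes might look like this (a toy sketch in Python; the 3-metres-per-mph figure and the names are invented for illustration, not the actual 1970s design):

# Toy sketch of a sensor-enforced "safe braking distance" override.
# The linear 3 m per mph rule is an invented illustration, not the real system.

def safe_speed_mph(gap_m: float) -> float:
    """Highest speed the override will allow for the measured gap ahead."""
    return gap_m / 3.0

def controlled_speed(driver_demand_mph: float, gap_m: float) -> float:
    """The driver chooses the speed, but the override caps it at the safe value."""
    return min(driver_demand_mph, safe_speed_mph(gap_m))

print(controlled_speed(60, 200))  # 60.0 - ample gap, the driver's choice stands
print(controlled_speed(60, 60))   # 20.0 - a car fills the gap, speed is cut to 20 mph at once

The shunts follow because the cut from 60 to 20 is immediate: nothing in a rule like this smooths the deceleration or accounts for the traffic behind.
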
Sounds like the moral of this story is that humans can't be trusted to drive cars safely.
 
Sounds like the moral of this story is that humans can't be trusted to drive cars safely.

The issue with humans is that we are prone to mistakes; we also get tired and complacent.

Not so with robotics and computers. They follow the exact rules we tell them to, every time.

They don't need to be perfect to replace us, they just need to be better than we are.
 

Inertia

I feel like I could... TAKE ON THE WORLD!!
You're falling for the fallacious assumption that robots do not make mistakes - they do, particularly if they are AI based. They're just quicker at learning from them.

If we want truly autonomous vehicles then they'll have to be AI-based, in order to cope with the real world, and that means they will make mistakes.
Technically they don't make mistakes: they do what they are programmed to do, which may not be what we intended; programmers can make mistakes.
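
A trivial, made-up illustration of that point: the machine runs the check below exactly as written, every time; the mistake belongs to whoever wrote it.

# Hypothetical example: the intent was "brake when the obstacle is within
# 10 metres", but the comparison was written the wrong way round.
# The computer follows the instruction faithfully; the error is human.

def should_brake(distance_m: float) -> bool:
    return distance_m > 10.0  # bug: the intent was distance_m < 10.0

print(should_brake(3.0))   # False - exactly as programmed, not as intended
print(should_brake(50.0))  # True  - brakes with nothing nearby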

Education is the most realistic tool for preventing car crashes: educate drivers to take care of each other on the roads. Driving on the roads today is too adversarial.

The only way to make a really big dent in road deaths would be to remove people from the equation. We don't want truly autonomous cars, as they would only be concerned with themselves. Instead of being autonomous, it might be better to have the cars communicate with each other and work together to get everyone from A to B safely.

A lot of people like their cars and like driving, so I'm not sure that will happen for a long time.
 
You're falling for the fallacious assumption that robots do not make mistakes - they do, particularly if they are AI based. They're just quicker at learning from them.

If we want truly autonomous vehicles then they'll have to be AI-based, in order to cope with the real world, and that means they will make mistakes.

They follow instructions, so these aren't "mistakes" in the same sense. They do what is programmed, even if what is programmed is incorrect. But incorrect instructions are not the fault of the robot following them.

Though, does this mean we will need computers programming computers to avoid programming mistakes? If so, who programs the computers that program computers?


hmmmmmmmm
 
Only if you assume that robots are limited to machines that are purely programmed and not machines that learn. AI is very much based on learning and, just like humans, AI robots make mistakes.

Until they learn to think independently, their "learning" is simply an illusion of intelligence. They are learning in a programmed manner, simply following instructions.
 
I think it is going to depend on your definition of "mistake": doing something incorrectly, or doing something incorrectly against what it knows to be correct.
 

Inertia

I feel like I could... TAKE ON THE WORLD!!
Only if you assume that robots are limited to machines that are purely programmed and not machines that learn. AI is very much based on learning and, just like humans, AI robots make mistakes.
I don't know too much about AI and how it works, but you did imply that non-AI robots make mistakes too.
 
Until they learn to think independently, their "learning" is simply an illusion of intelligence. They are learning in a programmed manner, simply following instructions.
Nope. Algorithms that software has developed by learning are now untranslatable by humans. They are not intelligent. They are not sentient. But they do not simply follow a series of programmed instructions.
 
That would only be true if the only inputs were its programming. It's not - that's the whole basis of a learning machine.

But how it processes that data, how it sorts it, how it identifies key parts of what it senses, is programmed. At the moment, no machine can "think"; they are not capable of abstract thought.

Although impressive, it is simply following pre-determined algorithms to process data from its sensors of the outside world.
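
One way to picture the distinction being argued here (a toy sketch; the data and every number in it are invented): the learning procedure is programmed by a human, but the threshold it produces comes out of the examples, so nobody ever typed that value in.

# Hand-coded rule: a human explicitly chose the number 50.
def programmed_rule(reading: float) -> bool:
    return reading > 50.0

# "Learned" rule: the threshold is whatever best separates some labelled examples.
examples = [(10.0, False), (30.0, False), (62.0, True), (80.0, True)]

def learn_threshold(data):
    # Programmed procedure: try the midpoints between neighbouring readings and
    # keep the one that classifies the most examples correctly.
    readings = sorted(r for r, _ in data)
    midpoints = [(a + b) / 2 for a, b in zip(readings, readings[1:])]
    return max(midpoints, key=lambda t: sum((r > t) == label for r, label in data))

threshold = learn_threshold(examples)

def learned_rule(reading: float) -> bool:
    return reading > threshold

print(threshold)           # 46.0 with this toy data - a value no programmer wrote down
print(learned_rule(55.0))  # True

With four examples the learned value is still easy to read off; scale the same idea up to millions of learned parameters and you get the "untranslatable by humans" situation described above.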
 

Inertia

I feel like I could... TAKE ON THE WORLD!!
This is a good article on AI; this part is particularly relevant:

Myth: “Artificial superintelligence will be too smart to make mistakes.”

Reality: AI researcher and founder of Surfing Samurai Robots, Richard Loosemore thinks that most AI doomsday scenarios are incoherent, arguing that these scenarios always involve an assumption that the AI is supposed to say “I know that destroying humanity is the result of a glitch in my design, but I am compelled to do it anyway.” Loosemore points out that if the AI behaves like this when it thinks about destroying us, it would have been committing such logical contradictions throughout its life, thus corrupting its knowledge base and rendering itself too stupid to be harmful. He also asserts that people who say that “AIs can only do what they are programmed to do” are guilty of the same fallacy that plagued the early history of computers, when people used those words to argue that computers could never show any kind of flexibility.

Peter McIntyre and Stuart Armstrong, both of whom work out of Oxford University’s Future of Humanity Institute, disagree, arguing that AIs are largely bound by their programming. They don’t believe that AIs won’t be capable of making mistakes, or conversely that they’ll be too dumb to know what we’re expecting from them.

“By definition, an artificial superintelligence (ASI) is an agent with an intellect that’s much smarter than the best human brains in practically every relevant field,” McIntyre told Gizmodo. “It will know exactly what we meant for it to do.” McIntyre and Armstrong believe an AI will only do what it’s programmed to, but if it becomes smart enough, it should figure out how this differs from the spirit of the law, or what humans intended.

McIntyre compared the future plight of humans to that of a mouse. A mouse has a drive to eat and seek shelter, but this goal often conflicts with humans who want a rodent-free abode. “Just as we are smart enough to have some understanding of the goals of mice, a superintelligent system could know what we want, and still be indifferent to that,” he said.

http://gizmodo.com/everything-you-know-about-artificial-intelligence-is-wr-1764020220
 