Self-driving cars to be allowed on UK roads this year


classic33

Leg End Member
Overheard two parking enforcement officers talking about these yesterday. As it stands, if they get such a vehicle parked illegally, they have to prove it was the driver not the car that parked up.
 
Forgive my scepticism but that sounds like cobblers to me. (What they were saying, not that you overheard it).
Does sound unlikely - if it applies to parking then surely it would apply to all offences
The driver is still responsible for where the car parks - or actually the owner, unless he points out another person who was in charge at the time

Anyone got anything that actually says this?
Or is it just a rumour spreading amongst the parking people??
 

gzoom

Über Member
I am confident that the next death of a cyclist as a result of a computer malfunction will lead to much greater advances in cyclist safety than the next death of a cyclist as a result of a human malfunction.

I'm amazed people here don't see the massive jump in road safety that will occur if we remove the most unpredictable part of driving from the equation - emotional, easily distracted, law-breaking humans.

I love cars and driving, but for me it's a no-brainer to hand over driving responsibilities to an automated system once the software is good enough.

The debate over whether current AI neural networks are the end solution to enable true automation is still open, but the progress of AI development is unrelenting. I have little doubt my 5-year-old daughter will never need to learn to drive.

Automation cannot come quickly enough for my liking; in the future we'll look back in amazement that humans were ever trusted to operate these death traps with zero monitoring or preset boundaries.
 

Bazzer

Setting the controls for the heart of the sun.
I'm amazed people here don't see the massive jump in road safety that will occur if we remove the most unpredictable part of driving from the equation - emotional, easily distracted, law-breaking humans.

I love cars and driving, but for me it's a no-brainer to hand over driving responsibilities to an automated system once the software is good enough.

The debate over whether current AI neural networks are the end solution to enable true automation is still open, but the progress of AI development is unrelenting. I have little doubt my 5-year-old daughter will never need to learn to drive.

Automation cannot come quickly enough for my liking; in the future we'll look back in amazement that humans were ever trusted to operate these death traps with zero monitoring or preset boundaries.
I suspect many of us are cynical.
Quite apart from that incident in the USA a few years back, where a cyclist crossing the road with her bike was killed by a self-driving car, most if not all of us will have had, to some degree, unwanted physical interactions with cars. Trust is earned.
 

gzoom

Über Member
Quite apart from that incident in the USA a few years back, where a cyclist crossing the road with her bike was killed by a self-driving car, most if not all of us will have had, to some degree, unwanted physical interactions with cars. Trust is earned.

The Uber incident was actually a result of HUMAN ERROR. The car 'saw' the pedestrian via 2 different sensors BEFORE the human operator tried to take any kind of avoiding action. The reason the car didn't brake by itself was that Uber had DISABLED auto braking in the belief the human operator would be better at judging edge cases than the software. Sadly, once again human fallibility was shown up.

https://www.theguardian.com/technology/2018/may/24/emergency-brake-was-disabled-on-self-driving-uber-that-killed-woman#:~:text=A federal investigation into a,emergency braking system was disabled.&text=The car was traveling at,impact, according to the report.

I work with computer algorithms all the time; algorithms never make mistakes, but humans do all the time. Software 'crashes' because the code is written by humans using human-readable languages so that we can understand it, but that is by no means the most efficient way to code for a computer. Current AI neural nets program/code themselves; they are 'black boxes' where we (humans) have no idea how the algorithm has been written in order to achieve the outcome needed. The realisation that AI neural networks generate better code without human input was one of the biggest steps forward in recent years. The next step is whether AI neural networks can 'think' of new patterns/pathways rather than just be superhuman at identifying patterns based on historical data - at that point we really will be going into the unknown, and it's coming much quicker than people think.

https://www.nature.com/articles/nature24270

WHEN, and it's a BIG WHEN, AI development is good enough to take over driving, I will have zero worries about trusting the code..........Whether you will depends on how much anime you watched back in the 1990s :laugh:
 
Last edited:

classic33

Leg End Member
The Uber incident was actually a result of HUMAN ERROR. The car 'saw' the pedestrian via 2 different sensors BEFORE the human operator tried to take any kind of avoiding action. The reason the car didn't brake by itself was that Uber had DISABLED auto braking in the belief the human operator would be better at judging edge cases than the software. Sadly, once again human fallibility was shown up.

https://www.theguardian.com/technology/2018/may/24/emergency-brake-was-disabled-on-self-driving-uber-that-killed-woman#:~:text=A federal investigation into a,emergency braking system was disabled.&text=The car was traveling at,impact, according to the report.

I work with computer algorithms all the time; algorithms never make mistakes, but humans do all the time. Software 'crashes' because the code is written by humans using human-readable languages so that we can understand it, but that is by no means the most efficient way to code for a computer. Current AI neural nets program/code themselves; they are 'black boxes' where we (humans) have no idea how the algorithm has been written in order to achieve the outcome needed. The realisation that AI neural networks generate better code without human input was one of the biggest steps forward in recent years. The next step is whether AI neural networks can 'think' of new patterns/pathways rather than just be superhuman at identifying patterns based on historical data - at that point we really will be going into the unknown, and it's coming much quicker than people think.

https://www.nature.com/articles/nature24270

WHEN, and it's a BIG WHEN, AI development is good enough to take over driving, I will have zero worries about trusting the code..........Whether you will depends on how much anime you watched back in the 1990s :laugh:
"Uber self-driving cars were involved in 37 crashes before the killing of Elaine Herzberg."
https://policyadvice.net/insurance/insights/self-driving-car-statistics/
 
"Uber self-driving cars were involved in 37 crashes before the killing of Elaine Herzberg."
https://policyadvice.net/insurance/insights/self-driving-car-statistics/

How many crashes were human-driven cars involved in over the same period?
 

classic33

Leg End Member
How many crashes were human-driven cars involved in over the same period?
A better way of asking, since it was the same computer program in charge, would be: "How many other drivers were involved in 37 crashes during the same time?"

Then ask, would you let them back on the road, in charge of a vehicle?
 
A better way of asking, since it was the same computer program in charge, would be: "How many other drivers were involved in 37 crashes during the same time?"

So if 100 million drivers, who between them have 1 million crashes per year, migrate to 100 million self-driving cars, all with the same computer programme in charge, and those 100 million self-driving cars then have 100,000 crashes per year, that 90% reduction in crashes is actually a bad thing?
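The fleet-level arithmetic above can be checked directly. A minimal sketch, using only the hypothetical figures from the post (100 million vehicles, 1 million human crashes, 100,000 automated crashes per year):

```python
# Hypothetical figures from the post above - not real crash statistics.
human_drivers = 100_000_000
human_crashes_per_year = 1_000_000

self_driving_cars = 100_000_000
automated_crashes_per_year = 100_000

# Per-vehicle crash rates, so the two fleets can be compared fairly.
human_rate = human_crashes_per_year / human_drivers            # 0.01 crashes/vehicle/year
automated_rate = automated_crashes_per_year / self_driving_cars  # 0.001 crashes/vehicle/year

# Overall reduction in crashes if the whole fleet migrates.
reduction = 1 - automated_crashes_per_year / human_crashes_per_year

print(f"Human rate:     {human_rate} crashes/vehicle/year")
print(f"Automated rate: {automated_rate} crashes/vehicle/year")
print(f"Reduction:      {reduction:.0%}")  # 90%
```

The per-vehicle rate is the figure that matters when one program drives the whole fleet: 37 crashes across a shared codebase is only alarming if it exceeds what the same number of human-driven vehicles would have produced over the same miles.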
 

classic33

Leg End Member
So if 100 million drivers, who between them have 1 million crashes per year, migrate to 100 million self-driving cars, all with the same computer programme in charge, and those 100 million self-driving cars then have 100,000 crashes per year, that 90% reduction in crashes is actually a bad thing?
Doesn't actually answer the question asked though. Diverts, but doesn't answer.

To answer your question though, if a vehicle was involved in 100,000 crashes, most would be calling for their removal from the roads.

Individual drivers are just that, individual. And each will react in differing ways.
 

Baldy

Über Member
Location
ALVA
I'm waiting for self-drive trucks; I really could do with a laugh. Let's see how they cope with finding remote farms in the Highlands that aren't on any maps. Or the "we don't want it here, take it to our other warehouse. You can't miss it, just go right round the one way system, then it's a right and a left. Or is it left and then right? Anyway, it's over there" sort of situation.
 

HMS_Dave

Grand Old Lady
There are still technical challenges to overcome: poorly maintained roads and infrastructure, hidden signage, poorly painted road markings both on public roads and in privately owned car parks, etc. Money clearly needs to be pumped in; where does that come from? We should expect a better service from the authorities, granted, but they are all on a budget.

There are of course ethical questions too.

- Do you lock the occupants in the car, or not?
- Who really will be responsible for an accident?
- What will the car do if it recognises a fault? Stop immediately? Pull over at the next lay-by? Drive to the nearest dealers?
- What does the car do in a medical emergency? Let's keep it simple. Let's say an occupant loses consciousness. Will it break the speed limits? Will it stop and call an ambulance? How will it decide what help the occupant requires if so many scenario decisions are pre-programmed?
- What does the car with occupants do when faced with moral dilemmas? Such as an elderly person stepping out onto the road in front, an oncoming car on one side, pedestrians on the pavement on the other, with no chance of stopping. Who does the machine think is worthy of survival? This may be a rare scenario, but something similar is possible. Humans face this, yes. But what about the machine, which will approach it clinically with no emotion?

Some answers might be simple, that's OK, but how about some official recognition from those pushing for level 5 autonomy?
 
Last edited:

Bazzer

Setting the controls for the heart of the sun.
The Uber incident was actually a result of HUMAN ERROR. The car 'saw' the pedestrian via 2 different sensors BEFORE the human operator tried to take any kind of avoiding action. The reason the car didn't brake by itself was that Uber had DISABLED auto braking in the belief the human operator would be better at judging edge cases than the software. Sadly, once again human fallibility was shown up.

https://www.theguardian.com/technology/2018/may/24/emergency-brake-was-disabled-on-self-driving-uber-that-killed-woman#:~:text=A federal investigation into a,emergency braking system was disabled.&text=The car was traveling at,impact, according to the report.

I work with computer algorithms all the time; algorithms never make mistakes, but humans do all the time. Software 'crashes' because the code is written by humans using human-readable languages so that we can understand it, but that is by no means the most efficient way to code for a computer. Current AI neural nets program/code themselves; they are 'black boxes' where we (humans) have no idea how the algorithm has been written in order to achieve the outcome needed. The realisation that AI neural networks generate better code without human input was one of the biggest steps forward in recent years. The next step is whether AI neural networks can 'think' of new patterns/pathways rather than just be superhuman at identifying patterns based on historical data - at that point we really will be going into the unknown, and it's coming much quicker than people think.

https://www.nature.com/articles/nature24270

WHEN, and it's a BIG WHEN, AI development is good enough to take over driving, I will have zero worries about trusting the code..........Whether you will depends on how much anime you watched back in the 1990s :laugh:
Humans making mistakes in programming is one problem. Humans making assumptions about a computer program, and thus relieving themselves of immediate responsibility, is another. There are also many humans whose pastime is breaking computer code.
If in my lifetime we reach the stage of AI in cars, then based upon 400 miles of the M6 and M1 today, I would hope any AI programme would recognise a f***wit (lane hogger/speeder/BMW or Audi driver wanting to have sex with my car's exhaust pipe) in the driver's seat, steer towards the nearest hard shoulder/refuge and refuse to start the car until their backside was in the rear passenger seat.
 
Last edited: