Self-driving cars to be allowed on UK roads this year


Grand Old Lady
There are still technical challenges to overcome: poorly maintained roads and infrastructure, hidden signage, poorly painted road markings on both public roads and privately owned car parks, etc. Money clearly needs to be pumped in, but where does that come from? We should expect a better service from the authorities, granted, but they are all on a budget.

There are of course ethical questions too.

- Do you lock the occupants in the car, or not?
- Who really will be responsible for an accident?
- What will the car do if it recognises a fault? Stop immediately? Pull over at the next lay-by? Drive to the nearest dealers?
- What does the car do in a medical emergency? Let's keep it simple. Let's say an occupant loses consciousness. Will it break the speed limits? Will it stop and call an ambulance? How will it decide what response the emergency requires if so many scenario decisions are pre-programmed?
- What does the car with occupants do when faced with moral dilemmas? Such as an elderly person stepping out onto the road in front, an oncoming car on one side, pedestrians on the pavement on the other, with no chance of stopping. Who does the machine decide is worthy of survival? This may be a rare scenario, but similar ones are possible. Humans face this, yes. But what about the machine, which will approach it clinically, with no emotion?

Some answers might be simple, that's OK, but how about some official recognition from those pushing for level 5 autonomy?


Setting the controls for the heart of the sun.
The Uber incident was actually a result of HUMAN ERROR. The car 'saw' the pedestrian via two different sensors BEFORE the human operator tried to take any kind of avoiding action. The reason the car didn't brake by itself was that Uber had DISABLED auto braking, in the belief that the human operator would be better at judging edge cases than the software. Sadly, once again, human fallibility was shown up: the federal investigation confirmed the emergency braking system had been disabled.

I work with computer algorithms all the time; algorithms don't ever make mistakes, but humans do all the time. Software 'crashes' because the code is written by humans in human-readable languages so that we can understand it, but that's by no means the most efficient way to code for a computer. Current AI Neural Nets program/code themselves; they are 'black boxes' where we (humans) have no idea how the algorithm has been written in order to achieve the required outcome. The realisation that AI Neural Networks generate better code without human input was one of the biggest steps forward in recent years. The next step is whether AI Neural Networks can 'think' of new patterns/pathways rather than just be superhuman at identifying patterns based on historical data. At that point we really will be going into the unknown, and it's coming much quicker than people think.

WHEN, and it's a BIG WHEN, AI development is good enough to take over driving, I will have zero worries about trusting the code. Will you? It depends on how much anime you watched back in the 1990s :laugh:
Humans making mistakes in programming is one problem. Humans making assumptions about a computer program, and thus relieving themselves of immediate responsibility, is another. There are also many humans whose pastime is breaking computer code.
If in my lifetime we reach the stage of AI in cars, based upon 400 miles of the M6 and M1 today, I would hope any AI programme would recognise a f***wit (lane hogger/speeder/BMW or Audi driver wanting to have sex with my car's exhaust pipe) in the driver's seat, steer towards the nearest hard shoulder/refuge, and refuse to start the car until their backside was in the rear passenger seat.


For automation to stand any chance of working, you would have to rip up the entire road network and start again. Everything would have to be totally standardised and consistent throughout the land in every respect: road layouts, lane widths, gradients, signage, stopping and parking control, avoiding overhanging objects like buildings and trees. All loose debris would have to be eliminated, drainage would have to prevent little rivers of rainfall run-off cascading across rural lanes in hilly areas, and cars would need to be able to identify areas likely to contain black ice.

Who mounts the grassy bank and scrapes their bodywork on the bushes when faced by an oncoming car with a farmer in a tractor with a big trailer immediately behind it? What if the only available passing place is on the "wrong" side of the road? Does the automated car just stop and refuse to move? Many a time I've squeezed into a gap on my right, facing the traffic flow, to allow an oncoming large vehicle to pass me with a clear lane. Technically I'm driving on the wrong side of the road, and so is the HGV that wants to pass me, but sometimes doing that is the only way to pass, because the HGV can't get into the gap on his side whereas I can.

In the real world these sorts of situations occur all the time and often require various rules of the road to be bent or broken to facilitate progress. Any half-decent human driver can deal with them by making the best of the limited options available. If you employ rules-based automation that doesn't allow a self-driving car to do anything that is technically illegal, then you've got a recipe for chaos and gridlock.


Quite dreadful
lost somewhere
On a similar note, I'm just waiting for the occasion when someone uses the 'self parking' function on their car and with hands off, their car whips the wing off a nice shiny Beemer or similar. 'Er, but I wasn't parking the car - !' :laugh:
I'm sure there will be a simply brilliant algorithm that allows the clever vehicle to slope off quietly without leaving a note under the victim's wiper.


Quite dreadful
lost somewhere
I'm amazed people here don't see the massive jump in road safety that will occur if we remove the most unpredictable part of driving from the equation: emotional, easily distracted, law-breaking humans.

I love cars and driving, but for me it's a no-brainer to hand over driving responsibilities to an automated system once the software is good enough.

The debate on whether current AI Neural Networks are the end solution to enable true automation is still out, but the progress of AI development is unrelenting. I have little doubt my 5-year-old daughter will never need to learn to drive.

Automation cannot come quickly enough for my liking. In the future we'll look back in amazement that humans were ever trusted to operate these death traps with zero monitoring or preset boundaries.
Didn't the Boeing 737 MAX have similar wizzo clever clogs technology?


Flouncing Nobber
Overheard two parking enforcement officers talking about these yesterday. As it stands, if they get such a vehicle parked illegally, they have to prove it was the driver not the car that parked up.
That's why they're minimum-wage parking taliban, and not six-figure solicitors in a warm office.


Kilometre nibbler
Didn't the Boeing 737 MAX have similar wizzo clever clogs technology?
A good point, but it's also worth bearing in mind that modern airliners with advanced avionics and automation are loads safer than their dumb predecessors, and the majority of accidents are down to human error and maintenance failings.

The Max tragedy was more a matter of cost cutting implementation by Boeing (reliance on only a single angle of attack sensor, minimising the conversion training required to fly the Max) and bodged governance (allowing Boeing to mark their own homework).

It's not a great argument against automation per se. Just against allowing a profit oriented organisation to do it on the cheap and then trust them when they say it's safe. That lesson most definitely is applicable to automotive automation.

There's a saying that the ideal aircrew would consist of a computer, a pilot and a dog. The computer is there to fly the plane. The pilot is there to feed the dog. The dog is there to bite the pilot if they try to touch anything.


Über Member
A better way of asking, since it was the same computer program in charge, would be: "How many other drivers were involved in 37 crashes during the same time?"
Then ask: would you let them back on the road, in charge of a vehicle?
An even better question would be - have you checked your sources and understood the data? This is what the NTSB said:-

The NTSB said between September 2016 and March 2018, there were 37 crashes of Uber vehicles in autonomous mode, including 33 that involved another vehicle striking test vehicles.
So not really 37 crashes caused by the AI: in 33 of them it was another vehicle that struck the Uber test vehicle, leaving only 4 crashes in which the driverless Uber itself was the striking party.
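To make the arithmetic behind that NTSB quote explicit, here is a minimal sketch (the figures are as quoted above; the variable names are my own):

```python
# Figures as quoted from the NTSB summary above (Sep 2016 - Mar 2018).
total_autonomous_crashes = 37   # Uber vehicles in autonomous mode
struck_by_other_vehicle = 33    # another vehicle struck the test vehicle

# Remaining crashes: those where the test vehicle was the striking party.
uber_as_striker = total_autonomous_crashes - struck_by_other_vehicle
print(uber_as_striker)  # 4
```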

Then we need to look at those 4 crashes:-

In one incident, the test vehicle struck a bent bicycle lane post that partially occupied the test vehicle's lane of travel. In another incident, the operator took control to avoid a rapidly approaching vehicle that entered its lane of travel. The vehicle operator steered away and struck a parked car.
So - at least one of those crashes was not caused by the AI but by - a human being.

Of course, Uber is also at fault.

"The system design did not include a consideration for jaywalking pedestrians," NTSB said.
So the AI was hobbled, and was not given proper information in its design to safely operate. These sorts of scenarios should have been dealt with at the test stage, not with real traffic and pedestrians. Would I use a driverless Uber using a flawed AI and developed by a company that seems to have huge gaps in its knowledge and skills? Nope.

Contrast this with a company like Tesla, which has millions of miles of telemetry data from real drives and the ability to constantly train its AI from human driving and Autopilot feedback, and you can see that Uber is on a different planet technologically speaking. Even Alphabet have better data, thanks to all those mobile phones driving around in cars.