'Crash' testing programme needed to achieve autonomous vehicle security

In yesterday's 2017 Autumn Budget, Chancellor Philip Hammond announced that driverless cars will be on UK roads by 2021. Self-driving cars will revolutionise our roads, but rushing autonomous vehicle testing may provoke the 'biggest testing crisis' in history.


As recent terrorist attacks have shown, cars can be used as a ubiquitous weapon of destruction. Put artificial intelligence that has not been fully tested into genuinely non-deterministic road conditions, and these cars could become lethal. So how are we going to reassure the public and certify that vehicles are safe? A new testing approach is needed.


Repeating the past

In recent years, a big 'philosophical' shift has taken place between desktops and mobile phones. With a PC you are in complete control and have full ownership. A phone is different: a lot of the "security" on a phone is actually there to stop you doing things you're not supposed to, as defined by the rules that govern the device.


Conventional cars versus autonomous vehicles is a bit like the PC versus mobile phone conundrum. If vehicles are fully self-driving by 2021 and an incident occurs, who is liable? Will the blame fall on the manufacturer? The owner? The insurance companies?


Because of this, serious thought needs to go into how we ethically program vehicles. If your car has to choose between driving you off a cliff and hitting pedestrians, which will it do? Autonomous vehicles need an ethical framework, and it should be considered by a regulatory body.


It all comes down to a fundamental question: who is the car really working for? It can work for the owner, for society, or for a mix of the two, but that mix needs to be clear. If the car isn't working solely for the driver, the driver may not be liable; yet if the government is defining the constraints, the manufacturer isn't necessarily liable either. Before self-driving cars are on UK roads, this liability question needs to be settled, otherwise incidents will fall into grey areas with no clear answer.


Hardware and software liability issues

As we've seen in other areas of technology, there is a high chance that hardware and software in cars will be separated. A customer will be able to buy the car hardware and then download separate software to run on it. But if a problem arises, is it the fault of the hardware or the software?


The algorithm may have decided to do the 'right' thing, but the car's hardware may not carry it out fast enough. This lack of clarity will be a challenge both for those testing self-driving cars and for those deciding who is liable.


The jobs market

The Autumn Budget wasn't the only big autonomous vehicle announcement of the past week: Tesla's new autonomous electric lorry also has strong implications for the UK. Long-distance driving is one of the most common jobs in the developed world, and technology will eventually automate many such jobs, lorry driving included. When self-driving vehicles are introduced onto our roads, the government needs to consider their impact on the jobs market, alongside the wider implications for safety and regulation.


For innovative thinking and the emphasis on electric and self-driving cars, the Chancellor deserves an A for effort. However, the government has not given enough thought to the issues surrounding autonomous vehicles and the testing process, and this may cause problems further down the line.


Contributed by Dr. John Bates, CEO of Testplant


*Note: The views expressed in this blog are those of the author and do not necessarily reflect the views of SC Media or Haymarket Media.