Have you seen these new driverless cars that Google is developing? Apart from being a boon to headline writers and a source of fascination to automobile geeks everywhere, they're also garnering a great deal of interest from the information security industry. Ever since the concept was first unveiled, researchers have speculated that such systems could be hacked, with potentially disastrous consequences. Well, I'd like to suggest that focusing on this element misses the point – there are far more important legal and privacy issues that we should be pressing first.
First up, things are moving pretty fast in the car industry. Some vehicles already contain a bewildering array of on-board technology controlling everything from the music system, anti-lock brakes, climate control and airbags to the cruise control and keyless entry system. What's more, these computers are monitoring and recording data from a growing number of sensors all over the vehicle, including tyre pressure, emissions, engine temperature and throttle position. My own Citroën mini-van even tells me how much fuel I've used compared to other drivers.
This is Big Data meets the Internet of Things, and as yet it's an area woefully lacking in any kind of industry regulation or legislation. Let's hope those policymakers currently finalising the EU General Data Protection Regulation have considered the privacy implications of the huge quantity of data already being generated by hi-tech cars, smart appliances and other IoT-related machines. Does this even qualify as 'private' information if it's data on, say, vehicle emissions? If so, how does the manufacturer go about getting consent from the driver to use it?
Fail to address these questions at this stage and it will become impossible to do so in 10 or 15 years' time when these systems will be ubiquitous and embedded into the very fabric of our lives. Rip and replace simply won't be an option by then.
New rules of the road
The ultimate hi-tech car, of course, is a driverless model controlled completely by an on-board computer – using sensors, radar, software and perhaps external 'Smart City' systems to navigate without the need for human input. But rather than worry about whether they could eventually be hacked and controlled by cyber criminals to ram police cars, carry stolen goods or even steal the vehicles themselves, there's a more fundamental issue we need to consider.
Back in the 1940s, visionary science fiction author Isaac Asimov devised the Three Laws of Robotics, a framework of principles intended to govern any artificial intelligence created thereafter. The laws boil down to: 1) robots may not injure humans; 2) robots must obey all humans unless this would conflict with 1); and 3) robots must protect their own existence as long as that doesn't conflict with 1) and 2).
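As a thought experiment only, the strict priority ordering of the Three Laws can be sketched in a few lines of code. Everything here – the function name, the flags on each action – is hypothetical, purely to show how the laws veto an action in order of precedence:

```python
def permitted(action):
    """Check a proposed action against Asimov's Three Laws, in strict
    priority order. `action` is a dict of hypothetical boolean flags."""
    # Law 1: a robot may not injure a human being.
    if action.get("injures_human"):
        return False
    # Law 2: a robot must obey human orders, unless obeying would
    # conflict with Law 1.
    if action.get("disobeys_order") and not action.get("order_would_injure_human"):
        return False
    # Law 3: a robot must protect its own existence, unless doing so
    # would conflict with Law 1 or Law 2.
    if action.get("endangers_self") and not action.get("required_by_laws_1_or_2"):
        return False
    return True

# Disobeying an order is only permitted when the order itself would harm a human.
print(permitted({"disobeys_order": True}))                                  # False
print(permitted({"disobeys_order": True, "order_would_injure_human": True}))  # True
```

The point of the sketch is the ordering: each law is only consulted if every higher-priority law is satisfied – exactly the kind of precedence hierarchy that driverless cars currently lack in law.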
So far, no similar groundwork has been laid to work out, for example, who is legally responsible if a software flaw in a driverless car causes that vehicle to crash into a group of pedestrians. Or what happens if the driver tries to override the car and swerve, only to find the car wresting control back to resume its original course. Whose life is more important, the driver's or the pedestrians'? These are not easy questions to answer, but governments need to wake up now and start addressing them. The airline industry, for example, mandates that on-board flight controllers are provided by three different companies – similar checks and balances need to be applied to driverless cars.
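The aviation analogy above – independent implementations cross-checking one another – is essentially what engineers call triple modular redundancy: take the output each of the three controllers computes and act only on a majority. A minimal sketch, with hypothetical names, assuming each controller returns a comparable value:

```python
from collections import Counter

def majority_vote(outputs):
    """Given outputs from three independently built controllers,
    return the value at least two agree on; raise if all three differ."""
    value, count = Counter(outputs).most_common(1)[0]
    if count < 2:
        raise RuntimeError("No majority: all controllers disagree")
    return value

# Example: three independent steering computations, one faulty outlier.
print(majority_vote(["steer_left", "steer_left", "steer_right"]))  # steer_left
```

In practice a real system would compare continuous values within a tolerance rather than for exact equality, but the principle is the same: a single flawed implementation gets outvoted rather than driving the car.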
Self-parking cars are already appearing on our streets. In Germany, certain Mercedes models are allowed to reach speeds of 30 km/h without the driver needing to put their hands on the wheel. Meanwhile, Smart City-style sensors by the roadside in Munich monitor traffic density and control roadside speed signs accordingly.
We're hurtling towards this future faster than policymakers realise. Let's hope they wake up before we drive off the edge of a cliff.