Jonathan Petit comes with the lofty title of principal scientist for Security Innovation Inc, a software and application security company. In his work there, he specialises in the security of automated vehicles, a concern that has loomed large in the industry since Chris Valasek and Charlie Miller hacked a Jeep at last year's Defcon.
Petit has also spent time as a research fellow at University College Cork; at Paul Sabatier University in France, where he received his PhD; and at the University of Twente, where he was a postdoctoral researcher in its services, cyber-security and safety group.
In any case, he arrives on the stage at Black Hat in Amsterdam well prepared to deliver a talk to this year's European black hats on that most timely of subjects: "Self-driving and connected cars: fooling sensors and tracking drivers".
There is, of course, a distinction to make here between connected cars, which are already on the road, and automated vehicles, a relatively (though not too) distant development. Uber predicts that its robot cars will be on the road by 2030, and Toyota has predicted that we'll see the first automated cars in 2020.
Still, best to be sure: one need only imagine the physical havoc that a malicious party, so inclined, could wreak on a network of automated vehicles.
Petit's experiments into remotely hacking automated vehicles were mostly carried out in the second half of 2014 at the University of Twente. While much of the concern around networked cars centres on the networks they belong to and on segregating infotainment systems, namely the radio and GPS, from the actual drive systems, Petit focused instead on the sensors the car uses to prompt the driver's decisions or, in the case of fully automated vehicles, to actually make those decisions.
"Every car will be equipped with a bunch of sensors," said Petit, including radar, LIDAR, ultrasonic sensors, cameras, on-board maps, wheel encoders and so on.
Petit chose two of these to test: the camera and the LIDAR system. "We decided to go with cheap hardware," he said: things readily and cheaply available on the internet.
The camera, as it stands, is the only sensor on the car that can read traffic signs or other text that might be relevant to driving. Using LEDs and laser pointers aimed at a camera installed on the windshield, Petit could effectively jam the sensor; even a small flash of light directed into the camera was enough.
LIDAR, much like radar, is a remote sensing technology that works by firing a laser pulse at a target and then taking readings from the reflection. As Petit found, this too was not hard to exploit. Many automated and non-automated vehicles are fitted with LIDAR, commonly with a range of a couple of hundred metres, for environmental detection.
By hijacking the LIDAR's echo, the interval between the laser pulse being fired and its reflection being received, Petit could effectively feed the sensor false information. Essentially, "you can create false objects" for the car to detect and treat as its real surrounding environment.
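The arithmetic behind such an echo attack is simple time-of-flight: the sensor converts round-trip time into distance, so a counterfeit echo fired at a chosen delay maps to a phantom object at a chosen range. A minimal sketch of that relationship (illustrative only; the function names are the author's, not Petit's actual tooling, and a real attack must also capture and replay the pulse):

```python
# Illustrative time-of-flight arithmetic behind LIDAR echo spoofing.
# A pulsed LIDAR infers range from how long a laser pulse takes to
# return; an attacker who fires a counterfeit echo after a chosen
# delay can make a phantom object appear at a chosen distance.

C = 299_792_458.0  # speed of light in m/s

def range_from_echo(round_trip_s: float) -> float:
    """Distance the LIDAR reports for an echo arriving after round_trip_s."""
    return C * round_trip_s / 2.0  # halved: light travels out and back

def spoof_delay_for(fake_distance_m: float) -> float:
    """Delay after the outgoing pulse at which to fire a counterfeit
    echo so the sensor reports an object at fake_distance_m."""
    return 2.0 * fake_distance_m / C

# A phantom obstacle 20 m ahead needs a replayed echo roughly 133 ns
# after the real pulse leaves the sensor.
delay = spoof_delay_for(20.0)
print(f"replay delay: {delay * 1e9:.1f} ns")       # ~133.4 ns
print(f"sensor reports: {range_from_echo(delay):.1f} m")  # 20.0 m
```

The nanosecond timescale is the practical hurdle: the tighter the attacker's timing, the more precisely the false object can be placed.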
Obviously, not all automated vehicles are or will be driverless, but such models will exist. Once the human element is taken out of driving, these hackable systems no longer merely prompt you about, say, an overly close car or pedestrian; the vehicle itself will literally make decisions based upon the readings from those all-too-hackable sensors.
It will rely on these sensors not only for short-term, safety-related decisions but also for long-term planning decisions.
As Petit said, without that human element, "there is no driver to detect the attack", bringing this potential danger to the fore.