Researchers in the US at Carnegie Mellon University and the University of North Carolina at Chapel Hill developed a technique to fool facial recognition algorithms, including those used at airports.
Using seemingly inconspicuous glasses, a wearer can trick the algorithm into producing an inaccurate reading of a person's face. According to their study, the researchers presented their findings to the Transportation Security Administration and recommended that the agency require people to remove glasses and jewellery to prevent the attack from being carried out.
The researchers developed five pairs of glasses, generated with adversarial generative nets (AGNs), that could be used by 90 percent of the population to evade detection. They further claim these attacks can be scaled up.
The glasses deceive the software by matching the texture of their frames as closely as possible to real designs found online; the frames were then subjected to user scrutiny to determine whether they would raise alarm under normal wear.
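The core idea behind such attacks is that a perturbation confined to a small, wearable region (the eyeglass frames) can still shift a classifier's output. The sketch below is a minimal illustration of that principle on a toy linear classifier, not the researchers' actual AGN pipeline; every name and value in it is hypothetical.

```python
import numpy as np

# Illustrative sketch (NOT the researchers' AGN method): an adversarial
# perturbation confined to a "glasses" mask, pushed against a toy linear
# face-match score. All dimensions and values are hypothetical.

rng = np.random.default_rng(0)

D = 64                   # flattened toy "image" size
w = rng.normal(size=D)   # weights of a toy linear face-match classifier
b = 0.0

def score(x):
    """Classifier confidence that x matches the target identity."""
    return float(w @ x + b)

# Mask selecting only the pixels the glasses may cover (upper-face region).
mask = np.zeros(D)
mask[:16] = 1.0

x = rng.normal(size=D)   # a face image the classifier currently matches
eps = 0.5                # perturbation budget per masked pixel

# For a linear model the gradient of the score w.r.t. the input is w.
# Stepping against the sign of that gradient, only where the mask allows,
# lowers the match score: a "dodging" attack limited to the glasses region.
x_adv = x - eps * np.sign(w) * mask

print(score(x), score(x_adv))
```

Only the masked pixels change, yet the match score drops; real attacks like the one in the study additionally constrain the perturbation to look like a plausible eyeglass design.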