Facial-recognition software maker Clearview AI has conceded that it has suffered a data breach, after a Daily Beast report said an intruder had gained unauthorised access to its lists of customers.
“Unfortunately, data breaches are part of life in the 21st century. Our servers were never accessed. We patched the flaw, and continue to work to strengthen our security,” read the stock statement the company issued to media outlets.
“It’s unclear what ‘unauthorised access’ means, and I’m just guessing, but the general contours of what has been reported seem to indicate that an unauthorised person was able to perform limited commands or queries against the server or database without the expected authentication,” said Roger Grimes, data-driven defense evangelist at KnowBe4.
The company’s notification provides little actionable information, either for those affected or for anyone trying to avoid the same mistakes, noted Tim Erlin, VP at Tripwire.
Clearview was already under fire for violating privacy norms. The app allowed anyone to use a person’s picture to find personally identifiable details such as name and address. It worked by comparing that photo against a database of more than three billion pictures scraped from Facebook, Venmo, YouTube and other sites.
US senator Ron Wyden tweeted earlier that Clearview's activities were "extremely troubling".
"Americans have a right to know whether their personal photos are secretly being sucked into a private facial-recognition database," his tweet said. Google, Twitter and Facebook have already sent cease-and-desist letters to the company.
“A breach like this just adds fuel to the fire for Clearview’s critics. We’re likely to hear more about the extent of this breach as investigations uncover more data, and history tells us that it’s likely to expand in scope," said Erlin.
Last year, BioStar 2, a biometric security platform used by thousands of organisations worldwide, including the UK's Metropolitan Police and several banks, exposed data that included more than a million fingerprints.
The problem with facial recognition technology like Clearview is that consumers’ biometric data is acquired without consent, by analysing photos publicly available online, said Ray Walsh, digital privacy expert at ProPrivacy.
“The question is whether companies should be legally permitted to extract biometric information from publicly available photos - without the knowledge or consent of those subjects. The ethical answer is no,” he told SC Media UK.
“Companies that hold and transact biometric data hold the keys to those individuals’ identities, and if that data is accidentally leaked or breached there is a very real threat to the people involved.”
Effectively, not sharing our information with anyone online would mean giving up the ability to complete any online transaction, said Ciaran Byrne, head of platform operations at edgescan.
“The safety measures in place to secure our data are down to the company we make that data available to. Customers should be selective when sharing their information and should ascertain how their data is going to be used. Biometric data should be treated the same way as any other personal data and protected to the same degree,” Byrne told SC Media UK.
There is currently no system in place for an individual to protect themselves once their data is stolen, pointed out Stuart Sharp, VP of solution engineering at OneLogin.
“Facial recognition is no different to fingerprints or DNA -- in some ways, it is more sensitive, since it can be used to identify people from a distance without their knowledge, as is proven when police use it for mass surveillance during sporting events. The collection, storage and use of facial recognition data must be subject to a higher standard of security than currently required by UK or EU law,” he said.
Banning unrestricted facial recognition is the only real way to stop people’s freely available photos from being exploited by technology like Clearview, suggested Walsh. However, a total ban on facial recognition software is not feasible, as it has legitimate uses, including solving and preventing crimes, said Byrne.
“The problem is that the governments’ desire for facial recognition capabilities currently outweighs their desire to protect consumer privacy. This makes it highly unlikely that governments will decide to ban the technology or to create legislation that substantially limits how it can be exploited,” said Walsh.
“The best thing consumers can hope for is for governments to massively restrict how companies are able to acquire biometric data.”