Facebook’s announcement that it will add deepfakes to its categories of banned content, alongside nudity, hate speech and graphic violence, is a rare instance of the company acting before a problem spirals out of control. However, the decision barely dents the growing misinformation campaigns on the platform, privacy and security experts said.
"After having previously refused to take down deep fake videos, this is an interesting announcement by Facebook. However, this in itself will not achieve much," commented Javvad Malik, security awareness advocate at KnowBe4.
"Manipulations can be made through simple technology like Photoshop or through sophisticated tools that use artificial intelligence or "deep learning" techniques to create videos that distort reality – usually called "deepfakes." While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases," Facebook vice president Monika Bickert wrote in an announcement.
This newfound privacy focus stems from Facebook’s need to rebuild the trust it has lost with both users and regulators around the world, ProPrivacy’s Damien Mason told SC Media UK.
"Facebook has shown a lot of resistance when it comes to regulating its platform, sometimes under the guise of freedom and other times admitting to its own ignorance. Since the Cambridge Analytica scandal, the platform has desperately tried to reinvent itself as a privacy advocate," Mason said.
Any "misleading manipulated media" will be removed if it has been "edited or synthesised – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say" and it is made using artificial intelligence or machine learning that "merges, replaces or superimposes content onto a video, making it appear to be authentic", Bickert said in the announcement.
However, parody and satire, as well as video that has been "edited solely to omit or change the order of words", are exempt.
"The fact that parody and satire is excluded could mean that most people could argue that any flagged video is merely intended to be satire. Secondly, the issue of fake news, or manipulating the facts that people are exposed to, goes beyond deep fake videos. Facebook should also consider its stance on whether or not it will vet political ads or other stories for accuracy," said Malik.
Maintaining balance while avoiding any appearance of political allegiance will be a major obstacle for the platform, Mason agreed.
"There are ways in which videos can be manipulated without the use of deep fake technology. Splicing together reactions from different shots, changing the audio, or even the speed of a video can drastically alter the message the original video intended to give," warned Malik.
"Consistent with our existing policies, audio, photos or videos, whether a deepfake or not, will be removed from Facebook if they violate any of our other Community Standards including those governing nudity, graphic violence, voter suppression and hate speech," Bickert said in the announcement.
Deepfakes have reportedly been used for targeted attacks, from revenge porn to silencing political dissent. In his predictions for 2020, Varonis field CTO Brian Vecci said at least one major figure would fall victim to a deepfake campaign.
"Thanks to leaky apps and loose data protection practices, our data and photos are everywhere. It will be game-on for anyone with a grudge or a sick sense of humour. It raises the ultimate question: What is real and what is fake?"
Another major challenge is the number of personnel needed to undertake such a task, along with developing automated detection technology to sweep the site more effectively, said Mason.
"The short-term answer is to train the workforce and teach users how to independently identify and report fake content. This is imperfect, however, and leaves a lot of room for human error when spotting increasingly realistic renders," he said.
The announcement said Facebook has partnered with academic institutions including the University of California, Berkeley and MIT, media houses such as the BBC and Reuters, and several others in civil society and the technology, media and academic communities to identify and remove manipulated content.
"The preferred method, albeit long-term approach is to use artificial intelligence to spot deception within deepfakes, which will decrease the amount of exposure users have to fake videos. The process cannot be fully automated, though, as manual intervention will be required when differentiating context to keep parodies and satire live," Mason added.
Rather than trying to find and ban deepfakes, Facebook could have considered placing a prominent watermark on deepfake videos to indicate that they are computer-generated and not real, Malik suggested.
"But that needs to be done against more than just deep fakes if it is to make any measurable difference to the proliferation of fake news."