The terrible events that unfolded in Westminster yesterday, which led to Parliament being suspended, might have a knock-on effect on the Digital Economy Bill, which was in the final day of its report-stage reading.
Although the bill itself was passed last year, it won't become law until after possible amendments have been debated, which is due to happen later this month.
One of the most controversial things covered by the bill involves age verification. It will, as it stands, force ISPs in the UK to block access to porn sites that cannot verify their users are over 18.
A debate on the Radio 4 ‘Today’ programme yesterday added fuel to the fire by suggesting that existing US legislation, the Children's Online Privacy Protection Act (COPPA), isn't taken seriously by companies.
That children under the age of 13 routinely click through sites covered by COPPA is, frankly, no surprise. Research from Ofcom last year that revealed a third of users as young as 12 were exposed to hate speech was equally unsurprising.
It appears that everyone has an angle on age verification, but few understand how to implement a system that actually works. Given that enterprise organisations may find themselves required to implement age verification on public-facing websites, it's an area that deserves some further discussion.
SC Media UK asked Carolyn Kimber, secretary-general of the Digital Policy Alliance, which has helped formulate some of the language around age verification provisions in the Digital Economy Bill, whether the security industry is failing children by not properly getting to grips with the issue.
“It would be difficult for the security industry to be held responsible,” Kimber replied, “as it is a problem that occurs on the frontline for all suppliers of age restricted products and services.” As for who should be held responsible for ensuring any age verification scheme works, Kimber told us, with an eye firmly on progress of the Digital Economy Bill, “In order to gain protection at an international level ISPs are to be involved in site blocking once they have been advised of infringements by the yet to be named enforcer of this area. It would therefore be logical that the sites trading in age restricted products and services be responsible for what they are selling and to whom.”
Lee Munson, security researcher at Comparitech.com, agreed that it's more a business decision than a security one. “Should an organisation decide more robust age verification is required,” Munson explained, “the decision on how to implement any additional measures should be signed off by a broad spectrum of stakeholders from the data privacy officer and chief information security officer, through to the legal department and CEO who, ultimately, will carry the can should anything break down in the implementation, collection, storage and dissemination of that data.”
Whether lawmakers need to be part of the process of defining how age verification should work in the first place is debatable and, according to Munson, unwelcome given the disconnect between those who create laws and those who must create the solutions to comply with them. “It is my opinion,” he concluded, “that government should offer no more than guidance.”
Robin Tombs, CEO at Yoti, doesn't let the security industry off the hook, however. In conversation with SC, he argued, “Education is a good place to start, which is why the Children's Commissioner recently called for the integration of digital citizenship into the school curriculum. However, education alone is not enough. There needs to be cooperation between technology and security providers, websites and lawmakers if we are to make any progress in addressing the issue of online age verification.”
Tombs admitted it's a tough nut to crack though, suggesting, “If a resource-rich giant like Facebook can't keep out under-age users, how can smaller companies do it?” One option is to place the responsibility with the ISPs. Another is to force the requirement for age verification at the point of entry. Either way, all parties need to work together.
SC asked Nick Brown, group managing director at GBG, how best to go about regulating the protection of children in a way that ensures protections are applied properly without impinging upon freedom of speech. “Just as we do offline, we need to make sure children are prevented from accessing content which should only be viewed by adults online,” Brown stated. “The age checking process must require data that cannot be easily recalled and should include checking against multiple different sources for maximum assurance.”
Brown admitted that the real challenge is how to impose a robust check that will block minors without creating a cumbersome paper-chasing exercise for every legitimate customer. “As government looks to pass landmark legislation in this important but complex area, there needs to be a process in place that can identify someone who looks like they're using false or stolen data very early on,” he concluded.
This would mean that any red flags could be investigated further, allowing the majority of valid customers to pass straight through and prevent minors from accessing 18+ content. “A one size fits all approach will lead to pushback from site operators,” Brown insisted, “but any less robust process will lead to intensified regulation.”
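The process Brown describes could be sketched in code along the following lines. This is a hypothetical illustration, not GBG's actual system: the sources, their names and the one-flag threshold are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Source:
    """A hypothetical independent data source used to corroborate a claim."""
    name: str
    matches: Callable[[dict], bool]  # True if this source corroborates the claim

def verify_age(claims: dict, sources: List[Source],
               flag_threshold: int = 1) -> Dict[str, object]:
    """Check claimed details against several independent sources.

    Any source that fails to corroborate the claim raises a red flag;
    once the threshold is reached, the customer is routed to manual
    review, while the valid majority pass straight through.
    """
    flags = [s.name for s in sources if not s.matches(claims)]
    if len(flags) >= flag_threshold:
        return {"status": "review", "flags": flags}
    return {"status": "pass", "flags": []}

# Example sources (invented for illustration): an electoral-roll lookup
# and a credit-file check, each reduced to a simple boolean here.
electoral_roll = Source("electoral_roll", lambda c: c.get("on_roll", False))
credit_file = Source("credit_file", lambda c: c.get("credit_history", False))

result = verify_age({"on_roll": True, "credit_history": True},
                    [electoral_roll, credit_file])
```

A customer corroborated by every source passes straight through; a single mismatch sends the record to review rather than rejecting it outright, which is what allows red flags to be investigated further.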
As for the technology options available beyond the self-declaration schemes that have so obviously failed to date, there are some interesting solutions emerging.
Yoti has created an app that allows individuals to prove their identity or their age, both online and in person. Once consumers have created their Yoti they can use it time and time again.
The creation process is simple: download the Yoti app, take a selfie, enter a five-digit PIN and scan an ID document – either a passport or driving licence. The selfie is then matched with the ID document to ensure it is the same person. “We provide an identity and attribute platform, as well as age checking,” Tombs explains. “This means that the same technology can assist in age checking for access to children's sites, where solely an ‘18+' or ‘under 18' attribute can be transferred with anonymity where required. Yoti can also support parental consent as required by the GDPR.”
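The privacy benefit of the attribute approach Tombs describes is that the relying site never sees the date of birth itself, only a derived attribute. A minimal sketch of that derivation (not Yoti's actual implementation) might look like this:

```python
from datetime import date

def age_attribute(date_of_birth: date, today: date) -> str:
    """Derive a minimal age attribute from a verified date of birth.

    The relying site receives only '18+' or 'under 18', never the
    underlying date of birth, so the check can be made with anonymity.
    """
    # Subtract one if this year's birthday hasn't happened yet.
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day))
    return "18+" if age >= 18 else "under 18"
```

In a real deployment the date of birth would come from the matched ID document, and the attribute would be transferred to the site over a signed channel; both of those steps are outside this sketch.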
Alistair Kelman, founder of the SafeCast Headcode system, points out that a system where a nine-year-old child can view a beheading video online is obviously broken.
A well-known former barrister who understands technology better than most (he famously defended Steve Gold and Robert Schifreen in the 30-year-old hacking case that led directly to the creation of the Computer Misuse Act), Kelman took an interest in the legal implications of the Digital Economy Bill proposals as they relate to age checks: “a restriction on access for child protection or other purposes is actually age verification,” he wrote.
The bill encourages all providers of internet access services to implement the BSI Age Verification Standard, yet few know what that is.
“Filtering from age zero upwards,” Kelman warns, “can only work without censorship if content is labelled by its creator.” And that's where SafeCast comes in, with a process of labelling video content mapped onto the Ofcom regulatory framework.
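The principle behind creator labelling can be illustrated with a short sketch. This is a hypothetical example of label-based filtering in general, not SafeCast's actual Headcode scheme: the default-block rule for unlabelled content is an assumption made for the example.

```python
from typing import Optional

def allowed(viewer_age: int, content_min_age: Optional[int]) -> bool:
    """Decide access from a creator-supplied minimum-age label.

    Because the creator labels the content, the filter never has to
    guess at (and thereby censor) material for adult viewers; here,
    unlabelled content is simply treated as adult-only by default.
    """
    if content_min_age is None:     # no creator label present
        return viewer_age >= 18     # assumed default: block for minors
    return viewer_age >= content_min_age
```

Under this rule a nine-year-old is blocked from both unlabelled material and anything labelled above their age, while adults see everything – filtering "from age zero upwards" without censoring labelled content.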