As we wait for David Cameron's consultation on whether ISPs should have to implement opt-out blocks on pornography, it's worth taking the time to think more broadly about whether the network level is the right place to catch the wrong sort of content.
Whilst the most recent news was driven by the report from MP Claire Perry's independent parliamentary inquiry into online child protection, it is not just adult content that ISPs are being asked to block.
Another lobby group is the media organisations seeking, and winning, legal actions to prevent the infringement of their intellectual property rights, with BT, O2 and Sky recently acting in their favour.
Of course, the internet is already being filtered for some specific illegal subject matter, most notably using the Internet Watch Foundation's blacklists for online child sexual abuse content. It may seem quite reasonable to support proposals to extend this to other clearly illegal material, such as that promoting hate crime, terrorism and extremism.
The argument for making ISPs responsible can be viewed in two ways. You could say an ISP is the natural point at which to block content; on the other hand, it could be likened to making road builders liable for the accidents that drivers cause.
Will there be a new government agency to sweep the net for illegal content and will it have the necessary legal support to decide whether any particular piece of content crosses the line? Will other citizens be able to propose that some content is unacceptable, and how many would have to complain for action to be taken?
Underneath all of these questions of administration is the deeper question of how effective this type of blocking would be. Ofcom reviewed four methods of blocking sites to protect copyright material and found that none would be effective against someone determined to gain access.
What is more worrying is the prospect that, as the techniques used to bypass the blocks become more widely understood, the work of the police in tracking truly illegal material may become far more difficult.
There are more sophisticated methods of content and context analysis that allow decisions based on a risk model. This type of filtering still needs to be told what types of material are inappropriate and has to be kept up to date in the same way.
Again the question has to be asked whether the ISP is the right place to do this kind of filtering. In homes and offices with many connected devices, who can decide what level of filtering should be applied? Am I allowed to see different content to my children and, if so, what's to stop them using my device?
It's likely that simply requiring ISPs to be responsible for policing the content that people access will be unwieldy to manage, prone to overblocking, and will carry a significant cost burden that flows down to users. If we are serious about these issues, then blocking may seem to provide a reassuring solution, but it could leave us blasé while more dangerous breaches go unseen.
Simon Wilcox is head of innovation at Smoothwall Group