The Internet Watchmen

As Tim Worstall notes, new government plans to block online terrorist and extremist content are extremely worrying. Along with the introduction of default 'opt-out' porn filters and the criminalisation of rape porn, they are another example of Cameron's politicised censorship of the web. While reducing the proliferation of child abuse images is a good thing, this new measure results in the censorship of ideas. Furthermore, whilst it is relatively straightforward to identify child abuse imagery, it is far less straightforward (and arguably impossible) to decide which ideas are 'too dangerous' to be viewed in the UK.

Aside from these issues there is also the question of how such a content block would work in practice. In many ways, how to block can be as problematic as the censorship itself.

The government has said that it wants to model the new blocking unit on the Internet Watch Foundation (IWF): a UK 'hotline' for child abuse imagery, funded partly by the EU and partly by the internet industry. The IWF assesses material submitted by the public and flags UK-hosted content for removal by service providers. Content hosted outside the UK is added to a URL 'blacklist', to which ISPs then block UK access.

There are a number of issues with this model. First, there is no guarantee that what the IWF flags is actually illegal. With no legal clout, the IWF acts on content it deems 'potentially illegal', and there is little to stop legitimate content being wrongly marked. In one controversial case, a picture of an album cover on Wikipedia was blocked until a backlash forced the IWF to reverse its decision. Appealing against the IWF's decisions can be a difficult and opaque process, not least because it is hard to contest the illegality of an image you cannot even see.

Despite this lack of legal authority, the Open Rights Group claims, the IWF's blacklist has never been assessed by a court or legal body. This makes its actions rather murky. Given its sensitivity, ISPs cannot see the contents of the blacklist to make their own judgement; they must block either all of it or none. On top of this, there are also problems with the technology ISPs use to actually block the URLs, which can be unreliable and block too broadly.
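To see why URL blacklists can block too broadly, consider a simplified sketch (hypothetical Python, not any ISP's actual system). Cheap filtering techniques such as DNS or IP blocking can only act on a hostname, so listing a single page can knock out an entire site; matching the exact URL avoids this but requires inspecting every request.

```python
from urllib.parse import urlparse

# Hypothetical blacklist containing a single offending page.
BLACKLIST = {"http://example-wiki.org/wiki/Controversial_Image"}

# Hostnames extracted from the blacklist. Coarse-grained filters
# (e.g. DNS- or IP-based blocking) can only act at this level.
BLOCKED_HOSTS = {urlparse(url).hostname for url in BLACKLIST}

def is_blocked_coarse(url: str) -> bool:
    """Host-level filter: blocks every page on a listed site."""
    return urlparse(url).hostname in BLOCKED_HOSTS

def is_blocked_precise(url: str) -> bool:
    """URL-level filter: blocks only the exact listed pages."""
    return url in BLACKLIST

# One blacklisted page takes the whole site down under the coarse filter:
print(is_blocked_coarse("http://example-wiki.org/wiki/Unrelated_Article"))   # True
print(is_blocked_precise("http://example-wiki.org/wiki/Unrelated_Article"))  # False
```

Even the more precise approach has side effects: to inspect full URLs, ISPs typically route all traffic for flagged hosts through transparent proxies. In the Wikipedia case, that proxying left many UK users sharing a handful of IP addresses, temporarily preventing them from editing the site.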

In addition, from April 2014 the IWF will shift from being a reactive body, acting only on content sent to it, to a proactive one, actively seeking out images of abuse behind paywalls and on peer-to-peer networks. This approach is another step in the active policing of the web, and is likely to be followed by the new anti-extremist unit.

Issues of political and religious censorship are much more complicated than those surrounding child abuse imagery. The unaccountability of the IWF and its lack of judicial oversight therefore make it a poor model to copy for what is an incredibly controversial (and dangerous) policy. Since the new unit will be publicly funded, its decisions may come under greater legal scrutiny. On the other hand, a government-funded body could become politicised and overzealous in its mission. In any case, clear due process and a rigorous appeals system will be absolutely essential.

Crime & security minister James Brokenshire says an update on the proposals will arrive shortly, though the fact that civil liberty groups claim not to have been consulted on the matter is rather worrying. The sensible thing to do would be to scrap this idea altogether. Since this is unlikely to happen, both the politics and the technicalities of the initiative are bound to prove difficult indeed.