Today we were presented with the Online Harms White Paper, a document which proposes to add a new bill to the existing cornucopia of laws – ss. 4 and 5 of the Public Order Act 1986, ss. 1(1)(a) and 1(1)(b) of the Malicious Communications Act 1988, s. 12 of the Terrorism Act 2000, s. 127 of the Communications Act 2003, and ss. 1-2 of the Terrorism Act 2006, among others – that regulate political speech in the United Kingdom, speech which is legal in much of the rest of the English-speaking world, particularly in the United States.
As the digital space increasingly supplants physical space as the preferred venue for the exchange of ideas, smaller jurisdictions – in particular, the Commonwealth jurisdictions – have begun to consider and in some cases enact draconian speech controls that would have seemed very alien to our grandfathers who stormed the beaches at Normandy, and very familiar to the soldiers attempting to repel our campaign.
Existing British laws that criminalize speech have failed, so far, to turn the Internet into a civil and gentle place. The Internet, of course, isn’t the problem; speech online is nasty because people are nasty. So the Government now wants to enlist tech platforms as de facto morality police to achieve with algorithms what the Government could not achieve with legislation: controlling thought on the Internet.
The Government proposes to do so by borrowing a concept from tort law, the “duty of care,” and saying that social media platforms owe such a duty to their customers.
Of course, the duty of care in tort usually means that there are “such close and direct relations” between the victim of harm and the person inflicting it – teacher-pupil, road users, client-solicitor, doctor-patient – "that the act complained of directly affects a person whom the person alleged to be bound to take care would know would be directly affected by his careless act."
Where such a proximate relationship exists, the harm that would flow from a failure to discharge the duty is reasonably foreseeable, and it is fair, just and reasonable to impose liability on the defendant, a duty of care may be said to exist. Caparo Industries plc v. Dickman [1990] UKHL 2. Where such a duty is breached, damage is caused, and the breach is a proximate cause of the damage, the law recognizes a private cause of action in negligence in favour of the damaged party against the tortfeasor.
That’s not what the government proposes here. In the Online Harms White Paper, the phrase “Duty of Care” is used as a pleasant-sounding, but legally hollow, punch line rather than as a description of a 90-year-old bedrock principle of English law.
The proposed bill would require any company “that provide(s) services or tools that allow, enable, or facilitate users to share or discover user-generated content, or interact with each other online,” i.e. any company that operates an interactive application, to “keep their users safe and tackle illegal and harmful activity on their services.” Proposed penalties would be up to 4% of the offending company’s worldwide turnover.
But what does this mean? “Illegal,” of course, has a certain definition and can be understood. “Harmful” is a wider term which encompasses prima facie lawful activity, under which the Government includes “cyberbullying and trolling,” i.e., discourse.
What about the term “safe”? Recalling the playground proverb that “sticks and stones may break my bones, but words will never hurt me,” the Government appears to believe that exposure to harmful ideas is itself harmful. Maybe it is, to a political agenda. But it certainly isn’t in terms of any category of civil wrong recognized by English law up to today.
Furthermore, if we were looking at the traditional "duty of care," it would be worth noting that the law of negligence which creates the duty of care tells us that knowing and voluntary assumption of risk (volenti non fit injuria) is a complete defence to a negligence action. Users who log on to the Internet are not helpless lemmings. They know what they’re getting themselves into. They consent to being there. They can log off at any time. Nobody is forcing them to read this content.
This shows us that the Online Harms White Paper really isn’t about a “duty of care” at all, because no duty of care actually exists and, even if it did, social media companies would have a complete defence. This proposal is about preventing Internet users from engaging in knowing and voluntary speech, and it's about recruiting vast armies of private sector policemen to patrol their thoughts, even if those thoughts are perfectly legal.
There are other measures – improving free speech protections to permit data sharing with the United States, or banning under-13s from use of social media – which would be more effective in preventing serious crime, more proportionate to the stated aims of the White Paper, and directed at actual, physical, actionable “harm” that could possibly arise from social media, without affecting one whit the ability of adult Britons to engage in legitimate political discourse.
The Online Harms White Paper is, at its core, illiberal, and incompatible with English ideas of harm and of freedom. As presently conceived, it would give the Government – not only this Government, but also any Government that might succeed it in the near or distant future – almost unlimited power to use large tech companies as proxies to control expression online, anywhere in the world. Existing controls on speech and enforcement mechanisms are more than adequate; indeed, if anything, Britain’s speech laws are too restrictive and many should be repealed or reformed.
All freedom-loving people should oppose this proposal.