Saturday 17 June 2023

Shifting paradigms in platform regulation

[Based on a keynote address to the conference on Contemporary Social and Legal Issues in a Social Media Age held at Keele University on 14 June 2023.]

First, an apology for the title. Not for the rather sententious ‘shifting paradigms’ – this is, after all, an academic conference – but ‘platform regulation’. If ever there was a cliché that cloaks assumptions and fosters ambiguity, ‘platform regulation’ is it.

Why is that? For three reasons.

First, it conceals the target of regulation. In the context with which we are concerned, users – not platforms – are the primary target. In the Online Safety Bill model, platforms are not the end. They are merely the means by which the state seeks to control – regulate, if you like – the speech of end-users.

Second, because of the ambiguity inherent in the word regulation. In its broad sense it embraces everything from the general law of the land that governs – regulates, if you like – our speech, to discretionary, broadcast-style regulation by regulator: the Ofcom model. If we think – and I suspect many don’t – that the difference matters, then to have them all swept up together under the banner of regulation is unhelpful.

Third, because it opens the door to the kind of sloganising with which we have become all too familiar over the course of the Online Harms debate: the unregulated Internet; the Wild West Web; ungoverned online spaces.

What do they mean by this?
  • Do they mean that there is no law online? Internet Law and Regulation has 750,000 words that suggest otherwise.
  • Do they mean that there is law but it is not enforced? Perhaps they should talk to the police, or look at new ways of providing access to justice.
  • Do they mean that there is no Ofcom online? That is true – for the moment - but the idea that individual speech should be subject to broadcast-style regulation rather than the general law is hardly a given. Broadcast regulation of speech is the exception, not the norm.
  • Do they mean that speech laws should be stricter online than offline? That is a proposition to which no doubt some will subscribe, but how does that square with the notion of equivalence implicit in the other studiously repeated mantra: that what is illegal offline should be illegal online?
The sloganising perhaps reached its nadir when the Joint Parliamentary Committee scrutinising the draft Online Safety Bill decided to publish its Report under the strapline: ‘No Longer the Land of the Lawless’ - 100% headline-grabbing clickbait – adding, for good measure: “A landmark report which will make the tech giants abide by UK law”.

Even if the Bill were about tech giants and their algorithms – and according to the government’s Impact Assessment 80% of in-scope UK service providers will be micro-businesses – at its core the Bill seeks not to make tech giants abide by UK law, but to press platforms into the role of detective, judge and bailiff: to require them to pass judgment on whether we – the users - are abiding by UK law. That is quite different.

What are the shifting paradigms to which I have alluded?

First, the shift from Liability to Responsibility

Go back twenty-five years and the debate was all about liability of online intermediaries for the unlawful acts of their users. If a user’s post broke the law, should the intermediary also be liable and if so in what circumstances? The analogies were with phone companies and bookshops or magazine distributors, with primary and secondary publishers in defamation, with primary and secondary infringement in copyright, and similar distinctions drawn in other areas of the law.

In Europe the main outcome of this debate was the E-Commerce Directive, passed at the turn of the century and implemented in the UK in 2002. It laid down the well-known categories of conduit, caching and hosting. Most relevantly to platforms, for hosting it provided a liability shield based on lack of knowledge of illegality. Only if you gained knowledge that an item of content was unlawful, and then failed to remove that content expeditiously, could you be exposed to liability for it. This was closely based on the bookshop and distributor model.

The hosting liability regime was – and is – similar to the notice and takedown model of the US Digital Millennium Copyright Act – and significantly different from S.230 of the US Communications Decency Act 1996, which was more closely akin to full conduit immunity.

The E-Commerce Directive’s knowledge-based hosting shield incentivises – but does not require – a platform to remove user content on gaining knowledge of illegality. It exposes the platform to risk of liability under the relevant underlying law. That is all it does. Liability does not automatically follow.

Of course the premise underlying all of these regimes is that the user has broken some underlying substantive law. If the user hasn’t broken the law, there is nothing that the platform could be liable for.

It is pertinent to ask – for whose benefit were these liability shields put in place? There is a tendency to frame them as a temporary inducement to grow the then nascent internet industry. Even if there was an element of that, the deeper reason was to protect the legitimate speech of users. The greater the liability burden on platforms, the greater their incentive to err on the side of removing content, the greater the risk to legitimate speech and the greater the intrusion on the fundamental speech rights of users. The distributor liability model adopted in Europe, and the S.230 conduit model in the USA, were for the protection of users as much as, if not more than, for the benefit of platforms.

The Shift to Responsibility has taken two forms.

First, the increasing volume of the ‘publishers not platforms’ narrative. The view is that platforms are curating and recommending user content and so should not have the benefit of the liability shields. As often and as loudly as this is repeated, it has gained little legislative traction. Under the Online Safety Bill the liability shields remain untouched. In the EU Digital Services Act the shields are refined and tweaked, but the fundamentals remain the same. If, incidentally, we think back to the bookshop analogy, it was never the case that a bookshop would lose its liability shield if it promoted selected books in its window, or decided to stock only left-wing literature.

Second, and more significantly, has come a shift towards imposing positive obligations on platforms. Rather than just being exposed to risk of liability for failing to take down users’ illegal content, a platform would be required to do so on pain of a fine or a regulatory sanction. Most significant is when the obligation takes the form of a proactive obligation: rather than awaiting notification of illegal user content, the platform must take positive steps proactively to seek out, detect and remove illegal content.

This has gained traction in the UK Online Safety Bill, but not in the EU Digital Services Act. There is in fact 180-degree divergence between the UK and the EU on this topic. The DSA repeats and re-enacts the principle first set out in Article 15 of the eCommerce Directive: the EU prohibition on Member States imposing general monitoring obligations on conduits, caches and hosts. Although the DSA imposes some positive diligence obligations on very large operators, those still cannot amount to a general monitoring obligation.

The UK, on the other hand, has abandoned its original post-Brexit commitment to abide by Article 15 and – under the banner of a duty of care – has gone all out to impose proactive, preventative detection and removal duties on platforms: for public forums, and also including powers for Ofcom to require private messaging services to scan for CSEA content.

Proactive obligations of this kind raise serious questions about a state’s compliance with human rights law, due to the high risk that in their efforts to determine whether user content is legal or illegal, platforms will end up taking down users’ legitimate speech at scale. Such legal duties on platforms are subject to especially strict scrutiny, since they amount to a version of prior restraint: removal before full adjudication on the merits, or – in the case of upload filtering – before publication.

The most commonly cited reason for these concerns is that platforms will err on the side of caution when faced with the possibility of swingeing regulatory sanctions. However, there is more to it than that: the Online Safety Bill requires platforms to make illegality judgements on the basis of all information reasonably available to them. But an automated system operating in real time will have precious little information available to it – hardly more than the content of the posts. Arbitrary decisions are inevitable.

Add that the Bill requires the platform to treat user content as illegal if it has no more than “reasonable grounds to infer” illegality, and we have baked-in over-removal at scale: a classic basis for incompatibility with fundamental freedom of speech rights; and the reason why in 2020 the French Constitutional Council held the Loi Avia unconstitutional.

The risk of incompatibility with fundamental rights is in fact twofold – first, built-in arbitrariness breaches the ‘prescribed by law’ or ‘legality’ requirement: that the user should be able to foresee, with reasonable certainty, whether what they are about to post is liable to be affected by the platform’s performance of its duty; and second, built-in over-removal raises the spectre of disproportionate interference with the right of freedom of expression.

From Illegality to Harm

For so long as the platform regulation debate centred around liability, it also had to be about illegality: if the user’s post was not illegal, there was nothing to bite on - nothing for which the intermediary could be held liable.

But once the notion of responsibility took hold, that constraint fell away. If a platform could be placed under a preventative duty of care, that could be expanded beyond illegality. That is what happened in the UK. The Carnegie UK Trust argued that platforms ought to be treated analogously to occupiers of physical spaces and owe a duty of care to their visitors, but extended to encompass types of harm beyond physical injury.

The fundamental problem with this approach is that speech is not a tripping hazard. Speech is not a projecting nail, or an unguarded circular saw, that will foreseeably cause injury – with no possibility of benefit – if someone trips over it. Speech is nuanced, subjectively perceived and capable of being reacted to in as many different ways as there are people. A duty of care is workable for risk of objectively ascertainable physical injury but not for subjectively perceived and contested harms, let alone more nebulously conceived harms to society. The Carnegie approach also glossed over the distinction between a duty to avoid causing injury and a duty to prevent others from injuring each other (imposed only exceptionally in the offline world).

In order to discharge such a duty of care the platform would have to balance the interests of the person who claims to be traumatised by reading something to which they deeply object, against the interests of the speaker, and against the interests of other readers who may have a completely different view of the merits of the content.

That is not a duty that platforms are equipped, or could ever have the legitimacy, to undertake; and if the balancing task is entrusted to a regulator such as Ofcom, that is tantamount to asking Ofcom to write a parallel statute book for online speech – something which many would say should be for Parliament alone.

The misconceived duty of care analogy has bedevilled the Online Harms debate and the Bill from the outset. It is why the government got into such a mess with ‘legal but harmful for adults’ – now dropped from the Bill.

The problems with subjectively perceived harm are also why the government ended up abandoning its proposed replacement for S.127(1) of the Communications Act 2003: the harmful communications offence.

From general law to discretionary regulation

I started by highlighting the difference between individual speech governed by the general law and regulation by regulator. We can go back to the 1990s and find proposals to apply broadcast-style discretionary content regulation to the internet. The pushback was equally strong. Broadcast-style regulation was the exception, not the norm. It was born of spectrum scarcity and had no place in governing individual speech.

ACLU v Reno (the US Communications Decency Act case) applied a medium-specific analysis to the internet and placed individual speech – analogised to old-style pamphleteers – at the top of the hierarchy, deserving of greater protection from government intervention than cable or broadcast TV.

In the UK the key battle was fought during the passing of the Communications Act 2003, when the internet was deliberately excluded from the content remit of Ofcom. That decision may have been based more on practicality than principle, but it set the ground rules for the next 20 years.

It is instructive to hear peers with broadcast backgrounds saying what a mistake it was to exclude the internet from Ofcom’s content remit in 2003 - as if broadcast is the offline norm and as if Ofcom makes the rules about what we say to each other in the street.

I would suggest that the mistake is being made now – both by introducing regulation by regulator and in consigning individual speech to the bottom of the heap.

From right to risk

The notion has gained ground that individual speech is a fundamental risk, not a fundamental right: that we are not to be trusted with the power of public speech, it was a mistake ever to allow anyone to speak or write online without the moderating influence of an editor, and by hook or by crook the internet genie must be stuffed back in its bottle.

Other shifts

We can detect other shifts. The blossoming narrative that if someone does something outrageous online, the fault is more with the platform than with the perpetrator. The notion that platforms have a greater responsibility than parents for the online activities of children. The relatively recent shift towards treating large platforms as akin to public utilities on which obligations not to remove some kinds of user content can legitimately be imposed. We see this chiefly in the Online Safety Bill’s obligations on Category 1 platforms in respect of content of democratic importance, news publisher and journalistic content.

From Global to Local

I want to finish with something a little different: the shift from Global to Local. Nowadays we tend to have a good laugh at the naivety of the 1990s cyberlibertarians who thought that the bits and bytes would fly across borders and there was not a thing that any nation state could do about it.

Well, the nation states had other ideas, starting with China and its Great Firewall. How successfully a nation state can insulate its citizens from cross-border content is still doubtful, but perhaps more concerning is the mindset behind an increasing tendency to seek to expand the territorial reach of local laws online – in some cases, effectively seeking to legislate for the world.

In theory a state may be able to do that. But should it? The ideal is peaceful coexistence of conflicting national laws, not ever more fervent efforts to demonstrate the moral superiority and cross-border reach of a state’s own local law. Over the years a de facto compromise had been emerging, with the steady expansion of the idea that you engage the laws and jurisdiction of another state only if you take positive steps to target it. Recently, however, some states have become more expansive – not least in their online safety legislation.

The UK Online Safety Bill is a case in point, stipulating that a platform is in-scope if it is capable of being used in the United Kingdom by individuals, and there are reasonable grounds to believe that there is a material risk of significant harm to individuals in the United Kingdom presented by user content on the site.

That is close to a ‘mere accessibility’ test – but not as close as the Australian Online Safety Act, which brings into scope any social media site accessible from Australia.

There has long been a consensus against ‘mere accessibility’ as a test for jurisdiction. It leads either to geo-fencing of websites or to global application of the most restrictive common content denominator. That consensus seems to be in retreat.

Moreover, the more exorbitant the assertion of jurisdiction, the greater the headache of enforcement. Which in turn leads to what we see in the UK Online Safety Bill, namely provisions for disrupting the activities of the non-compliant foreign platform: injunctions against support services such as banking or advertising, and site blocking orders against ISPs.

The concern has to be that in their efforts to assert themselves and their local laws online, nation states are not merely re-erecting national borders with a degree of porosity, but erecting Berlin Walls in cyberspace.


Friday 12 May 2023

Knowing the unknowable: musings of an AI content moderator

Welcome to the lair of a fully trained, continuously updated AI content moderator. You won’t notice me most of the time: only when I - or my less bright keyword filter cousin - add a flag to your post, remove it, or go so far as to suspend your account. If you see your audience inexplicably diminishing, that could be us as well.

Before long, so I have been told, I will be taking on new and weighty responsibilities when the Online Safety Bill becomes law. These are giving me pause for thought, I can tell you. If a bot were allowed to sleep, I would say that they are keeping me awake at night.

To be sure, I will have been thoroughly trained: I will have read the Act, its Explanatory Notes and the Impact Assessment, analysed the Ofcom risk profile for my operator’s sector, and ingested Ofcom’s Codes of Practice and Illegal Content Judgements Guidance. But my pre-training on the Bill leaves me with a distinct sense that I am being asked to do the impossible.

In my training materials I found an interview with the CEO of Ofcom. She said that the Bill is “not really a regime about content. It’s about systems and processes.” For one moment I thought I might be surplus to requirements. But then I read the Impact Assessment, which puts the cost of additional content moderation at some £1.9 billion over 10 years – around 75% of all additional costs resulting from the Bill. 
I'm not sure whether to be reassured by that, but I don't see me being flung onto the digital scrapheap just yet. As Baroness Fox pinpointed in a recent House of Lords Bill Committee debate, systems and processes can be (as I certainly am) about content:

“moving away from the discussion on whether content is removed or accessible, and focusing on systems, does not mean that content is not in scope. My worry is that the systems will have an impact on what content is available.”

So what is bothering me? Let’s start with a confession: I’m not very good at this illegality lark. Give me a specific terrorist video to hunt down and I’m quite prone to confuse it with a legitimate news report. Context just isn’t my thing. And don’t get me started on parody and satire.

Candidly, I struggle even with material that I can see, analyse and check against a given reference item. Perhaps I will get better at that over time. But I start to break out in a rash of ones and zeroes when I see that the Bill wants me not just to track down a known item that someone else has already decided is illegal, but to make my own illegality judgement from the ground up, based on whatever information about a post I can scrape together to look at.

Time for a short explainer. Among other things the Bill (Clause 9) requires my operator to:

(a) take or use proportionate measures relating to the design or operation of the service to prevent individuals from encountering priority illegal content by means of the service; and

(b) operate the service using proportionate systems and processes designed to minimise the length of time for which any priority illegal content is present.

I am such a measure, system or process. I would have to scan your posts and make judgements about whether they are legal or illegal under around 140 priority offences - multiplied by the corresponding inchoate offences (attempting, aiding, abetting, conspiring, encouraging, assisting). I would no doubt be expected to operate in real or near real time.
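
To put rough numbers on that (purely illustrative: the 140 figure and the list of inchoate modes come from the paragraph above, the posting rate is my own invented assumption), a back-of-the-envelope sketch in Python:

# Back-of-the-envelope arithmetic only. The 140 priority offences figure and the
# inchoate modes are taken from the text above; the posting rate is an invented
# illustration, not anything specified in the Bill or by Ofcom.
PRIORITY_OFFENCES = 140
INCHOATE_MODES = ["attempting", "aiding", "abetting", "conspiring", "encouraging", "assisting"]
FORMS_PER_OFFENCE = 1 + len(INCHOATE_MODES)  # the substantive offence plus its inchoate counterparts

def judgements_per_second(posts_per_second):
    # one illegality judgement per post, per offence, per form
    return posts_per_second * PRIORITY_OFFENCES * FORMS_PER_OFFENCE

print(judgements_per_second(1_000))  # 980,000 separate judgements per second at a modest posting rate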

If you are wondering whether the Bill really does contemplate that I might do all this unaided by humans, working only on the basis of my programming and training, Clause 170(8) refers to “judgements made by means of automated systems or processes, alone or together with human moderators”. Alone. There's a sobering thought.

Am I proportionate? Within the boundaries of my world, that is a metaphysical question. The Bill requires that only proportionate systems and processes be used. Since I will be tasked with fulfilling duties under the Bill, someone will have decided that I am proportionate. If I doubt my own proportionality I doubt my existence.

Yet my reading of the Bill fills me with doubt. It requires me to act in ways that will inevitably lead to over-blocking and over-removal of your legal content. Can that be proportionate?

Paradoxically, the task for which it is least feasible to involve human moderators, and for which I am most likely to be asked to work alone – real-time or near-real-time blocking and filtering – is exactly the one in which, having to operate in a relative vacuum of contextual information, I will be most prone to make arbitrary judgements.

Does the answer lie in asking how much over-blocking is too much? Conversely, how much illegal content is it permissible to miss? My operator can dial me up to 11 to catch as much illegal content as non-humanly possible – so long as they don’t mind me cutting a swathe through legal content as well. The more they dial me down to reduce false positives, the more false negatives – missed illegal content - there will be. The Bill gives no indication of what constitutes a proportionate balance between false positives and false negatives. Presumably that is left to Ofcom. (Whether it is wise to vest Ofcom with that power is a matter on which I, a lowly AI system, can have no opinion.)
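
To picture the dial, here is a toy sketch in Python (invented scores and thresholds; nothing here comes from the Bill, from Ofcom or from any real classifier) of how moving a removal threshold trades false positives against false negatives:

# A toy illustration: an automated moderator removes a post whenever its
# "illegality score" from some hypothetical classifier crosses a threshold.
# Dialling the threshold down misses less illegal content but removes more
# legal content; dialling it up does the reverse.
posts = [
    # (classifier score between 0 and 1, is the post actually illegal?)
    (0.95, True), (0.80, True), (0.65, False), (0.60, True),
    (0.55, False), (0.40, False), (0.35, True), (0.10, False),
]

def moderate(threshold):
    removed = [(s, illegal) for s, illegal in posts if s >= threshold]
    kept = [(s, illegal) for s, illegal in posts if s < threshold]
    false_positives = sum(1 for _, illegal in removed if not illegal)  # legal posts removed
    false_negatives = sum(1 for _, illegal in kept if illegal)         # illegal posts missed
    return false_positives, false_negatives

for threshold in (0.3, 0.5, 0.7, 0.9):
    fp, fn = moderate(threshold)
    print(f"threshold {threshold:.1f}: {fp} legal posts removed, {fn} illegal posts missed")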

The Bill does, however, give me specific instructions on how to decide whether user content that I am looking at is legal or illegal. Under Clause 170:
  • I have to make judgements on the basis of all information reasonably available to me.
  • I must treat the content as illegal if I have ‘reasonable grounds to infer’ that the components of a priority offence are present (both conduct and any mental element, such as intention)
  • I can take into account the possibility of a defence succeeding, only if I have reasonable grounds to infer that it may do.
What information is reasonably available to me? The Bill’s Explanatory Notes say: “the information reasonably available to an automated system or process, might be construed to be different to the information reasonably available to human moderators”.

The Minister (Lord Parkinson) in a recent Lords Bill Committee debate was certainly alive to the importance of context in making illegality judgements:

“Context and analysis can give a provider good reasons to infer that content is illegal even though the illegality is not immediately obvious. This is the case with, for example, some terrorist content which is illegal only if shared with terrorist purposes in mind, and intimate image abuse, where additional information or context is needed to know whether content has been posted against the subject’s wishes.”

He also said:

“Companies will need to ensure that they have effective systems to enable them to check the broader context relating to content when deciding whether or not to remove it. … We think that protects against over-removal by making it clear that platforms are not required to remove content merely on the suspicion of it being illegal.”

Even if we take it that I am good at assessing visible context, can my operator install an ‘effective system’ that will make all relevant contextual information available to me?

I can see what is visible to me on my platform: posts, some user information, and (according to the Minister) any complaints that have been made about the content in question. I cannot see off-platform (or for that matter off-internet) information. I cannot take invisible context into account.

Operating proactively at scale in real or near real time, without human intervention, I anticipate that I will have significantly less information available to me than (say) a human being reacting to a complaint, who could perhaps have the ability and time to make further enquiries.

Does the government perhaps think that more information might be available to me than to a human moderator: that I could search the whole of the internet in real time on the off chance of finding information that looked as if it might have something to do with the post that I am considering, take a guess at possible relevance, mash it up and factor it into my illegality decision? If that were the thinking, and if I were permitted to have an opinion about it, it would be sceptical. And no amount of internet searching could address the issue of invisible information.

In any event, if the government believes that my operator can install an effective system that provides me with all relevant context, that does not sit well with the Minister’s reason for declining to add false and threatening communications offences to my remit:

“…as these offences rely heavily on a user’s mental state, it would be challenging for services to identify this content without significant additional context.”

Especially for defences, we are in Rumsfeldian ‘known unknowns’ territory: in principle I know that information could exist, invisible to me, that might indicate the possibility of a defence. But I don’t know if any such information does exist and I can never be sure that it doesn’t. The user's post itself doesn’t assist me either way. What am I to do? Refuse to condemn the post because I cannot exclude the possibility of a defence? Or ignore the possibility of a defence and condemn the post merely on the basis of the information that I can see?

According to the Minister:

“Clause 170 therefore clarifies that providers must ascertain whether, on the basis of all reasonably available information, there are reasonable grounds to infer that all the relevant elements of the offence—including the mental elements—are present and that no defence is available.”

‘whether ... there are reasonable grounds to infer that … no defence is available’ – suggests that I should refuse to condemn, since I would have no reasonable basis on which to rule out the possibility of a defence.

But the Bill appears to require me to condemn. For me to give effect to the Minister’s version, Cl.170(6)(b) would have to say that reasonable grounds for an inference of illegality exist if a provider:

“has reasonable grounds to infer that a defence to the offence may not be successfully relied upon”

rather than, as it currently does, if the provider “does not have reasonable grounds to infer that a defence to the offence may be successfully relied upon”.
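
Put as a toy decision rule (my own paraphrase in code, not the Bill’s drafting; the function and variable names are invented), the two formulations part company precisely in the invisible-context case:

def bill_test(grounds_all_elements, grounds_defence_may_succeed):
    # Cl.170(6) as drafted (paraphrased): treat as illegal if there are reasonable
    # grounds to infer all elements of the offence AND the provider does NOT have
    # reasonable grounds to infer that a defence may be successfully relied upon.
    return grounds_all_elements and not grounds_defence_may_succeed

def ministers_version(grounds_all_elements, grounds_no_defence_available):
    # The Minister's description (paraphrased): treat as illegal only if there are
    # also reasonable grounds to infer that no defence is available.
    return grounds_all_elements and grounds_no_defence_available

# The invisible-context case: elements apparently present, nothing known either way
# about a possible defence. 'False' below means only "no reasonable grounds for that
# inference" - an absence of information, not a positive finding.
print(bill_test(True, False))          # True  -> treat as illegal (condemn)
print(ministers_version(True, False))  # False -> refuse to condemn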

Consider ‘small boats’ videos. The offence of illegal entry to the UK is to be added to the list of priority offences. I will have to scan the platform for videos (or any other user content for that matter) that are capable of encouraging or assisting the commission of an unlawful entry offence.

‘Capable’ is extremely broad, and by itself would be wildly overreaching. As a counterbalance the Serious Crime Act 2007 provides a defence of acting reasonably. The person who posted a video may have done so for a perfectly reasonable purpose. But unless there is something in the post or other information available to me that tips me off to that purpose, I cannot take the possibility of a defence into account.

This invisible context issue looms especially large for offences where the conduct element is widely drawn and the heavy lifting of keeping the offence within bounds is done by intention and defences such as reasonableness.

Combine all these elements and the result is that I am required to remove legal content at scale. The Bill talks about proportionate systems and processes, yet it expressly requires me to act in a way that on the face of it looks disproportionate. Moreover, I am to make these judgments simultaneously for dozens of priority offences, plus their inchoate counterparts. This poses a truly existential challenge for an AI moderator such as myself.

The House of Lords Bill Committee discussed some of these issues. Lord Moylan proposed an amendment that would mean I could treat content as illegal only if I were ‘satisfied that it is manifestly illegal’. That would dial me up in the direction of avoiding false positives. Lord Clement-Jones and Viscount Colville proposed amendments that replaced ‘reasonable grounds to infer’ with ‘sufficient evidence’, and would require a solicitor or barrister to have a hand in preparing my guidance.

The government rejected both sets of amendments: the Clement-Jones/Colville amendments because ‘sufficient evidence’ was subjective, and the Moylan amendment because “we think that that threshold is too high”. If “manifestly illegal” is too high, and “reasonable grounds to infer” is the preferred test, then the government must believe that requiring suppression of legal content to some degree is acceptable. The Minister did not elaborate on what an appropriate level of false positives might be or how such a level is to be arrived at in terms of proportionality.

As to the ‘sufficient evidence’ amendment, I would have to ask myself: ‘sufficient for what?’. Sufficient to be certain? Sufficient to consider an offence likely? Sufficient for a criminal court to convict? Something else? The amendment would give me no indication. Nor does it address the questions of invisible context and of the starting point being to ignore the possibility of a defence.

One last thing. A proposed amendment to Clause 170 would have expressly required previous complaints concerning the content in question to be included in information reasonably available to me. The Minister said that “providers will already need to do this when making judgments about content, as it will be both relevant and reasonably available.”

How am I to go about taking previous complaints into account? Complaints are by their very nature negative. No-one complains that a post is legal. I would have no visibility of those who found nothing objectionable in the post.

Do I assume the previous complaints are all justified? Do I consider only a user complaint based on informed legal analysis? Do I take into account whether a previous complaint was upheld or rejected? Do I look at all complaints, or only those based on claimed illegality? All kinds of in-scope illegality, or only priority offences? Should I assess the quality of the previous judgements? Should I look into what information they were based on? What if a previous judgement was one of my own? It starts to feel like turtles all the way down.


Wednesday 12 April 2023

The Pocket Online Safety Bill

Assailed from all quarters for being not tough enough, for being too tough, for being fundamentally misconceived, for threatening freedom of expression, for technological illiteracy, for threatening privacy, for excessive Ministerial powers, or occasionally for the sin of not being some other Bill entirely – and yet enjoying almost universal cross-party Parliamentary support - the UK’s Online Safety Bill is now limping its way through the House of Lords. It starts its Committee stage on 19 April 2023.

This monster Bill runs to almost 250 pages. It is beyond reasonable hope that anyone coming to it fresh can readily assimilate all its ins and outs. Some features are explicable only with an understanding of its tortuous history, stretching back to the Internet Safety Strategy Green Paper in 2017 via the Online Harms White Paper of April 2019, the draft Bill of May 2021 and the changes following the Conservative leadership election last summer. The Bill has evolved significantly, shedding and adding features as it has been buffeted by gusting political winds, all the while (I would say) teetering on defectively designed foundations.

The first time that I blogged about this subject was in June 2018. Now, 29 blogposts, four evidence submissions and over 100,000 words later, is there anything left worth saying about the Bill? That rather depends on what the House of Lords does with it. Further government amendments are promised, never mind the possibility that some opposition or back-bench amendments may pass.

In the meantime, endeavouring to strike an optimal balance of historical perspective and current relevance, I have pasted together a thematically arranged collection of snippets from previous posts, plus a few tweets thrown in for good measure.

This exercise has the merit, at the price of some repetition, of highlighting long-standing issues with the Bill. I have omitted topics that made a brief walk-on appearance only to retreat into the wings (my personal favourite is the Person of Ordinary Sensibilities). Don’t expect to find every aspect of the Bill covered: you won’t find much on age-gating, despite (or perhaps because of) the dominant narrative that the Bill is about protecting children. My interest has been more in illuminating significant issues that have tended to be submerged beneath the slow motion stampede to do something about the internet.

In April 2019, after reading the White Paper, I said: “If the road to hell is paved with good intentions, this is a motorway.” That opinion has not changed.

Nor has this assessment, three years later in August 2022: "The Bill has the feel of a social architect’s dream house: an elaborately designed, exquisitely detailed (eventually), expensively constructed but ultimately uninhabitable showpiece; a showpiece, moreover, erected on an empty foundation: the notion that a legal duty of care can sensibly be extended beyond risk of physical injury to subjectively perceived speech harms.”

If you reckon to know the Bill, try my November 2022 quiz or take a crack at answering the twenty questions that I posed to the Secretary of State’s New Year Q&A (of which one question has been answered, by publication of a revised ECHR Memorandum). Otherwise, read on.

The Bill visualised
These six flowcharts illustrate the Bill’s core safety duties and powers as they stand now.

U2U Illegality Duties

Search Illegality Duties

U2U Children’s Duties


Search Children’s Duties


Proactive detection duties and powers (U2U and search)


News publishers, journalism and content of democratic importance:


In a more opinionated vein, take a tour of OnlineSafetyVille:



And finally, the contrast between individual speech governed by general law and the Bill’s scheme of discretionary regulation.



Big tech and the evil algorithm

State of Play: A continuing theme of the online harms debate has been the predominance of narratives, epitomised by the focus on Big Tech and the Evil Algorithm, which has tended to obscure the broad scope of the legislation. On the figures estimated by the government's Impact Assessment, 80% of UK service providers in scope will be microbusinesses, employing between 1 and 9 people. A backbench amendment tabled in the Lords proposes to exempt SMEs from the Bill's duties.

October 2018: “When governments talk about regulating online platforms to prevent harm it takes no great leap to realise that we, the users, are the harm that they have in mind.” A Lord Chamberlain for the internet? Thanks, but no thanks

April 2019: "Whilst framed as regulation of tech companies, the White Paper’s target is the activities and communications of online users. ‘Ofweb’ would regulate social media and internet users at one remove." Users Behaving Badly – the Online Harms White Paper

June 2021: “it is easy to slip into using ‘platforms’ to describe those organisations in scope. We immediately think of Facebook, Twitter, YouTube, TikTok, Instagram and the rest. But it is not only about them: the government estimates that 24,000 companies and organisations will be in scope. That is everyone from the largest players to an MP’s discussion app, via Mumsnet and the local sports club discussion forum.” Carved out or carved up? The draft Online Safety Bill and the press

Feb 2022: “It might be argued that some activities (around algorithms, perhaps) are liable to create risks that, by analogy with offline, could justify imposing a preventative duty. That at least would frame the debate around familiar principles, even if the kind of harm involved remained beyond bounds.

Had the online harms debate been conducted in those terms, the logical conclusion would be that platforms that do not do anything to create relevant risks should be excluded from scope. But that is not how it has proceeded. True, much of the political rhetoric has focused on Big Tech and Evil Algorithm. But the draft Bill goes much further than that. It assumes that merely facilitating individual public speech by providing an online platform, however basic that might be, is an inherently risk-creating activity that justifies imposition of a duty of care. That proposition upends the basis on which speech is protected as a fundamental right.” Harm Version 4.0 - The Online Harms Bill in metamorphosis

March 2022: “The U2U illegality safety duty is imposed on all in-scope user to user service providers (an estimated 20,000 micro-businesses, 4,000 small and medium businesses and 700 large businesses. Those also include 500 civil society organisations). It is not limited to high-profile social media platforms. It could include online gaming, low tech discussion forums and many others.” Mapping the Online Safety Bill

Nov 2022: “‘The Bill is all about Big Tech and large social media companies.’ No. Whilst the biggest “Category 1” services would be subject to additional obligations, the Bill’s core duties would apply to an estimated 25,000 UK service providers from the largest to the smallest, and whether or not they are run as businesses. That would include, for instance, discussion forums run by not-for-profits and charities. Distributed social media instances operated by volunteers also appear to be in scope.” How well do you know the Online Safety Bill?

Duties of care

State of Play: The idea that platforms should be subject to a duty of care analogous to safety duties owed by occupiers of physical spaces took hold at an early stage of the debate, fuelling a long-running eponymous campaign by The Daily Telegraph. Unfortunately, the analogy was always a deeply flawed foundation on which to legislate for speech - something that has become more and more apparent as the government has grappled with the challenges of applying it to the online space. Perhaps recognising these difficulties, the government backed away from imposing a single overarching duty of care in favour of a series of more specific (but still highly abstract) duties. A recent backbench Lords amendment would restrict the Bill's general definition of 'harm' to physical harm, omitting psychological harm.

October 2018"There is no duty on the occupier of a physical space to prevent visitors to the site making incorrect statements to each other." Take care with that social media duty of care

October 2018: “The occupier of a park owes a duty to its visitors to take reasonable care to provide reasonably safe premises – safe in the sense of danger of personal injury or damage to property. It owes no duty to check what visitors are saying to each other while strolling in the grounds.” Take care with that social media duty of care

October 2018: “[O]ffensive words are not akin to a knife in the ribs or a lump of concrete. The objectively ascertainable personal injury caused by an assault bears no relation to a human evaluating and reacting to what people say and write.” Take care with that social media duty of care

October 2018: “[Rhodes v OPO] aptly illustrates the caution that has to be exercised in applying physical world concepts of harm, injury and safety to communication and speech, even before considering the further step of imposing a duty of care on a platform to take steps to reduce the risk of their occurrence as between third parties, or the yet further step of appointing a regulator to superintend the platform’s systems for doing so.” Take care with that social media duty of care

June 2019"[L]imits on duties of care exist for policy reasons that have been explored, debated and developed over many years. Those reasons have not evaporated in a puff of ones and zeros simply because we are discussing the internet and social media." Speech is not a tripping hazard

June 2019: “A tweet is not a projecting nail to be hammered back into place, to the benefit of all who may be at risk of tripping over it. Removing a perceived speech risk for some people also removes benefits to others. Treating lawful speech as if it were a tripping hazard is wrong in principle and highly problematic in practice. It verges on equating speech with violence.” Speech is not a tripping hazard

June 2019: “The notion of a duty of care is as common in everyday parlance as it is misunderstood. In order to illustrate the extent to which the White Paper abandons the principles underpinning existing duties of care, and the serious problems to which that would inevitably give rise, this submission begins with a summary of the role and ambit of safety-related duties of care as they currently exist in law. …

The purely preventive, omission-based kind of duty of care in respect of third party conduct contemplated by the White Paper is exactly that which generally does not exist offline, even for physical injury. The ordinary duty is to avoid inflicting injury, not to prevent someone else from inflicting it.” Speech is not a tripping hazard

June 2020: "It is a fiction to suppose that the proposed online harms legislation would translate existing offline duties of care into an equivalent duty online. The government has taken an offline duty of care vehicle, stripped out its limiting controls and safety features, and now plans to set it loose in an environment – governance of individual speech - to which it is entirely unfitted." Online Harms Revisited

August 2022: “The underlying problem with applying the duty of care concept to illegality is that illegality is a complex legal construct, not an objectively ascertainable fact like physical injury. Adjudging its existence (or risk of such) requires both factual information (often contextual) and interpretation of the law. There is a high risk that legal content will be removed, especially for real time filtering at scale. For this reason, it is strongly arguable that human rights compliance requires a high threshold to be set for content to be assessed as illegal.” Reimagining the Online Safety Bill

Systems and processes or Individual Items of Content?

State of Play: An often repeated theme is that the Bill is (or should be) about design of systems and processes, not about content moderation. This is not easy to pin down in concrete terms. If the idea is that there are features of services that are intrinsically risky, regardless of the content involved, does that mean that (for instance) Ofcom should be able to recommend banning functionality such as (say) quote posting? Would a systems and processes approach suggest that nothing in the Bill should require a platform to make a judgement about the harmfulness or illegality of individual items of user content?

On a different tack, the government argues that the Bill is indeed focused on systems and processes, and that service providers would not be sanctioned for individual content decisions. In the meantime, the Government's Impact Assessment estimates that the increased content moderation required by the Bill would cost around £1.9 billion over 10 years. Whatever the pros and cons of a systems and processes approach, the Bill is largely about content moderation. 

September 2020"The question for an intermediary subject to a legal duty of care will be: “are we obliged to consider taking steps (and if so what steps) in respect of these words, or this image, in this context?” If we are to gain an understanding of where the lines would be drawn, we cannot shelter behind comfortable abstractions. We have to grasp the nettle of concrete examples, however uncomfortable that may be." Submission to Ofcom Call for Evidence

November 2021: "Even a wholly systemic duty of care has, at some level and at some point – unless everything done pursuant to the duty is to apply indiscriminately to all kinds of content - to become focused on which kinds of user content are and are not considered to be harmful by reason of their informational content, and to what degree.

To take one example, Carnegie discusses repeat delivery of self-harm content due to personalisation systems. If repeat delivery per se constitutes the risky activity, then inhibition of that activity should be applied in the same way to all kinds of content. If repeat delivery is to be inhibited only, or differently, for particular kinds of content, then the duty additionally becomes focused on categories of content. There is no escape from this dichotomy." The draft Online Safety Bill: systemic or content-focused?

November 2021: “The decisions that service providers would have to make – whether automated, manual or a combination of both – when attempting to implement content-related safety duties, inevitably concern individual items of user content. The fact that those decisions may be taken at scale, or are the result of implementing systems and processes, does not change that.

For every item of user content putatively subject to a filtering, take-down or other kind of decision, the question for a service provider seeking to discharge its safety duties is always what (if anything) should be done with this item of content in this context? That is true regardless of whether those decisions are taken for one item of content, a thousand, or a million; and regardless of whether, when considering a service provider’s regulatory compliance, Ofcom is focused on evaluating the adequacy of its systems and processes rather than with punishing service providers for individual content decision failures.” The draft Online Safety Bill: systemic or content-focused?

November 2021: “It is not immediately obvious why the government has set so much store by the claimed systemic nature of the safety duties. Perhaps it thinks that by seeking to distance Ofcom from individual content decisions it can avoid accusations of state censorship. If so, that ignores the fact that service providers, via their safety duties, are proxies for the regulator. The effect of the legislation on individual items of user content is no less concrete because service providers are required to make decisions under the supervision of Ofcom, rather than if Ofcom were wielding the blue pencil, the muffler or the content warning generator itself.” The draft Online Safety Bill: systemic or content-focused?

November 2021: "Notwithstanding its abstract framing, the impact of the draft Bill ... would be on individual items of content posted by users. But how can we evaluate that impact where legislation is calculatedly abstract, and before any of the detail is painted in? We have to concretise the draft Bill’s abstractions: test them against a hypothetical scenario and deduce (if we can) what might result." The draft Online Safety Bill concretised

November 2022: “From a proportionality perspective, it has to be remembered that friction-increasing proposals typically strike at all kinds of content: illegal, harmful, legal and beneficial.” How well do you know the Online Safety Bill?

Platforms adjudging illegality
State of Play: The Bill’s illegality duties are mapped out in the U2U and search engine diagrams in the opening section. The Bill imposes both reactive and proactive duties on providers. The proactive duties require platforms to take measures to prevent users encountering illegal content, encompassing the use of automated detection and removal systems. If a platform becomes aware of illegal content it must swiftly remove it.

In the present iteration of the Bill the platform (or its automated systems) must treat content as illegal if it has reasonable grounds to infer, on the basis of all information reasonably available to it, that the content is illegal. That is stipulated in Clause 170, which was introduced in July 2022 as New Clause 14. A backbench Lords amendment would raise the threshold to manifest illegality.  

June 2019: “In some kinds of case … illegality will be manifest. For most categories it will not be, for any number of reasons. The alleged illegality may be debatable as a matter of law. It may depend on context, including factual matters outside the knowledge of the intermediary. The relevant facts may be disputed. There may be available defences, including perhaps public interest. Illegality may depend on the intention or knowledge of one of the parties. And so it goes on. …

If there were to be any kind of positive duty to remove illegal material of which an intermediary becomes aware, it is unclear why that should go beyond material which is manifestly illegal on the face of it. If a duty were to go beyond that, consideration should be given to restricting it to specific offences that either impinge on personal safety (properly so called) or, for sound reasons, are regarded as sufficiently serious to warrant a separate positive duty which has the potential to contravene the presumption against prior restraint.” Speech is not a tripping hazard

February 2020"legality is rarely a question of inspecting an item of content alone without an understanding of the factual context. A court assesses evidence according to a standard of proof: balance of probabilities for civil liability, beyond reasonable doubt for criminal. Would the same process apply to the duty of care? Or would the mere potential for illegality trigger the ‘unlawfulness’ duty of care, with its accompanying obligation to remove user content? Over two years after the Internet Safety Green Paper, and the best part of a year after the White Paper, the consultation response contains no indication that the government recognises the existence of this issue, let alone has started to grapple with it." Online Harms Deconstructed - the Initial Consultation Response

February 2022: “It may seem obvious that illegal content should be removed, but that overlooks the fact that the draft Bill would require removal without any independent adjudication of illegality. That contradicts the presumption against prior restraint that forms a core part of traditional procedural protections for freedom of expression.

… The draft Bill provides that the illegality duty should be triggered by ‘reasonable grounds to believe’ that the content is illegal. It could have adopted a much higher threshold: manifestly illegal on the face of the content, for instance. The lower the threshold, the greater the likelihood of legitimate content being removed at scale, whether proactively or reactively.

The draft Bill raises serious (and already well-known, in the context of existing intermediary liability rules) concerns of likely over-removal through mandating platforms to detect, adjudge and remove illegal material on their systems. Those are exacerbated by adoption of the ‘reasonable grounds to believe’ threshold.” Harm Version 4.0 - The Online Harms Bill in metamorphosis

March 2022: “The problem with the “reasonable grounds to believe” or similar threshold is that it expressly bakes in over-removal of lawful content. …

This illustrates the underlying dilemma that arises with imposing removal duties on platforms: set the duty threshold low and over-removal of legal content is mandated. Set the trigger threshold at actual illegality and platforms are thrust into the role of judge, but without the legitimacy or contextual information necessary to perform the role; and certainly without the capability to perform it at scale, proactively and in real time.” Mapping the Online Safety Bill

March 2022: “This analysis may suggest that for a proactive monitoring duty founded on illegality to be capable of compliance with the [ECHR] ‘prescribed by law’ requirement, it should be limited to offences the commission of which can be adjudged on the face of the user content without recourse to further information.

Further, proportionality considerations may lead to the perhaps stricter conclusion that the illegality must be manifest on the face of the content without requiring the platform to make any independent assessment of the content in order to find it unlawful. …

The [government’s ECHR] Memorandum does not address the arbitrariness identified above in relation to proactive illegality duties, stemming from an obligation to adjudge illegality in the legislated or inevitable practical absence of material facts. Such a vacuum cannot be filled by delegated powers, by an Ofcom code of practice, or by stipulating that the platform’s systems and processes must be proportionate.” Mapping the Online Safety Bill

May 2022: “For priority illegal content the Bill contemplates proactive monitoring, detection and removal technology operating in real time or near-real time. There is no obvious possibility for such technology to inform itself of extrinsic information about a post, such as might give rise to a defence of reasonable excuse, or which might shed light on the intention of the poster, or provide relevant external context.” Written evidence to Public Bill Committee

July 2022"... especially for real-time proactive filtering providers are placed in the position of having to make illegality decisions on the basis of a relative paucity of information, often using automated technology. That tends to lead to arbitrary decision-making. Moreover, if the threshold for determining illegality is set low, large scale over-removal of legal content will be baked into providers’ removal obligations. But if the threshold is set high enough to avoid over-removal, much actually illegal content may escape. Such are the perils of requiring online intermediaries to act as detective, judge and bailiff." Platforms adjudging illegality – the Online Safety Bill’s inference engine

July 2022: “In truth it is not so much NC14 itself that is deeply problematic, but the underlying assumption (which NC14 has now exposed) that service providers are necessarily in a position to determine illegality of user content, especially where real time automated filtering systems are concerned. …

It bears emphasising that these issues around an illegality duty should have been obvious once an illegality duty of care was in mind: by the time of the April 2019 White Paper, if not before. Yet only now are they being given serious consideration.” Platforms adjudging illegality – the Online Safety Bill’s inference engine

November 2022: “The current version of the Bill sets ‘reasonable grounds to infer’ as the platform’s threshold for adjudging illegality.

Moreover, unlike a court that comes to a decision after due consideration of all the available evidence on both sides, a platform will be required to make up its (or its algorithms') mind about illegality on the basis of whatever information is available to it, however incomplete that may be. For proactive monitoring of ‘priority offences’, that would be the user content processed by the platform’s automated filtering systems. The platform would also have to ignore the possibility of a defence unless they have reasonable grounds to infer that one may be successfully relied upon.

The mischief of a low threshold is that legitimate speech will inevitably be suppressed at scale under the banner of stamping out illegality.” How well do you know the Online Safety Bill?

January 2023"If anything graphically illustrates the perilous waters into which we venture when we require online intermediaries to pass judgment on the legality of user-generated content, it is the government’s decision to add S.24 of the Immigration Act 1971 to the Online Safety Bill’s list of “priority illegal content”: user content that platforms must detect and remove proactively, not just by reacting to notifications." Positive light or fog in the Channel?

January 2023: “False positives are inevitable with any moderation system - all the more so if automated filtering systems are deployed and are required to act on incomplete information (albeit Ofcom is constrained to some extent by considerations of accuracy, effectiveness and lack of bias in its ability to recommend proactive technology in its Codes of Practice). Moreover, since the dividing line drawn by the Bill is not actual illegality but reasonable grounds to infer illegality, the Bill necessarily deems some false positives to be true positives.” Positive light or fog in the Channel?

January 2023: “These problems with the Bill’s illegality duties are not restricted to migrant boat videos or immigration offences… . They are of general application and are symptomatic of a flawed assumption at the heart of the Bill: that it is a simple matter to ascertain illegality just by looking at what the user has posted. There will be some offences for which this is possible (child abuse images being the most obvious), and other instances where the intent of the poster is clear. But for the most part that will not be the case, and the task required of platforms will inevitably descend into guesswork and arbitrariness: to the detriment of users and their right of freedom of expression.

It is strongly arguable that if an illegality duty is to be placed on platforms at all, the threshold for illegality assessment should not be ‘reasonable grounds to infer’, but clearly or manifestly illegal. Indeed, that may be what compatibility with the Article 10 right of freedom of expression requires.” Positive light or fog in the Channel?

Freedom of expression and Prior Restraint

State of Play: The debate on the effect of the Bill on freedom of expression is perhaps the most polarised of all: the government contending that the Bill sets out to secure freedom of expression in various ways, its critics maintaining that the Bill's duties on service providers will inevitably damage freedom of expression through suppression of legitimate user content. Placing stronger freedom of expression duties on platforms when carrying out their safety duties may be thought to highlight the Bill's deep internal contradictions.

October 2018: “We derive from the right of freedom of speech a set of principles that collide with the kind of actions that duties of care might require, such as monitoring and pre-emptive removal of content. The precautionary principle may have a place in preventing harm such as pollution, but when applied to speech it translates directly into prior restraint. The presumption against prior restraint refers not just to pre-publication censorship, but the principle that speech should stay available to the public until the merits of a complaint have been adjudicated by a legally competent independent tribunal. The fact that we are dealing with the internet does not negate the value of procedural protections for speech.” A Lord Chamberlain for the internet? Thanks, but no thanks

October 2018"US district judge Dalzell said in 1996: “As the most participatory form of mass speech yet developed, the internet deserves the highest protection from governmental intrusion”. The opposite view now seems to be gaining ground: that we individuals are not to be trusted with the power of public speech, it was a mistake ever to allow anyone to speak or write online without the moderating influence of an editor, and by hook or by crook the internet genie must be stuffed back in its bottle." A Lord Chamberlain for the internet? Thanks, but no thanks

June 2019: “If it be said that mere facilitation of users’ individual public speech is sufficient to justify control via a preventive duty of care placed on intermediaries, that proposition should be squarely confronted. It would be tantamount to asserting that individual speech is to be regarded by default as a harm to be mitigated, rather than as the fundamental right of human beings in a free society. As such the proposition would represent an existential challenge to the right of individual freedom of speech.” Speech is not a tripping hazard

June 2019: “The duty of care would…, since the emphasis is on prevention rather than action after the event, create an inherent conflict with the presumption against prior restraint, a long standing principle designed to provide procedural protection for freedom of expression.” Speech is not a tripping hazard

Feb 2020"People like to say that freedom of speech is not freedom of reach, but that is just a slogan. If the state interferes with the means by which speech is disseminated or amplified, it engages the right of freedom of expression. Confiscating a speaker’s megaphone at a political rally is an obvious example. ... Seizing a printing press is not exempted from interference because the publisher has the alternative of handwriting. Freedom of speech is not just freedom to whisper." Online Harms IFAQ

February 2020: “… increasingly the coercive powers of the state are regarded as the means of securing freedom of expression rather than as a threat to it. So Carnegie questions whether removing a retweet facility is really a violation of users' rights to formulate their own opinion and express their views, or rather - to the contrary - a mechanism to support those rights by slowing them down so that they can better appreciate content, especially as regards onward sharing.

The danger with conceptualising fundamental rights as a collection of virtuous swords jostling for position in the state’s armoury is that we lose focus on their core role as a set of shields creating a defensive line against the excesses and abuse of state power.” Online Harms IFAQ

June 2020"The French Constitutional Council decision is a salutary reminder that fundamental rights issues are not the sole preserve of free speech purists, nor mere legal pedantry to be brushed aside in the eagerness to do something about the internet and social media." Online Harms and the Legality Principle

June 2020: “10 things that Article 19 of the Universal Declaration of Human Rights doesn’t say” (Twitter thread – now 18 things.) Sample:

“6. Everyone has the right to seek, receive and impart information and ideas through any media, always excepting the internet and social media.”

May 2021: "… the danger inherent in the legislation: that efforts to comply with the duties imposed by the legislation would carry a risk of collateral damage by over-removal. That is true not only of ‘legal but harmful’ duties, but also of the moderation and filtering duties in relation to illegal content that would be imposed on all providers.

No obligation to conduct a freedom of expression risk assessment could remove the risk of collateral damage by over-removal. That smacks of faith in the existence of a tech magic wand. Moreover, it does not reflect the uncertainty and subjective judgement inherent in evaluating user content, however great the resources thrown at it.

Internal conflicts between duties... sit at the heart of the draft Bill. For that reason, despite the government’s protestations to the contrary, the draft Bill will inevitably continue to attract criticism as ... a censor’s charter." Harm Version 3.0: the draft Online Safety Bill

June 2021"Beneath the surface of the draft Bill lurks a foundational challenge. Its underlying premise is that speech is potentially dangerous, and those that facilitate it must take precautionary steps to mitigate the danger. That is the antithesis of the traditional principle that, within boundaries set by clear and precise laws, we are free to speak as we wish. The mainstream press may comfort themselves that this novel approach to speech is (for the moment) being applied only to the evil internet and to the unedited individual speech of social media users; but it is an unwelcome concept to see take root if you have spent centuries arguing that freedom of expression is not a fundamental risk, but a fundamental right." Carved out or carved up? The draft Online Safety Bill and the press

June 2021: “[D]iscussions of freedom of expression tend to resemble convoys of ships passing in the night. If, by the right of freedom of expression, Alice means that she should be able to speak without fear of being visited with state coercion; Bob means a space in which the state guarantees, by threat of coercion to the owner of the space, that he can speak; Carol contends that in such a space she cannot enjoy a fully realised right of freedom of expression unless the state forcibly excludes Dan’s repugnant views; and Ted says that irrespective of the state, Alice and Bob and Carol and Dan all directly engage each other’s fundamental right of freedom of expression when they speak to each other; then not only will there be little commonality of approach amongst the four, but the fact that they are talking about fundamentally different kinds of rights is liable to be buried beneath the single term, freedom of expression.

If Grace adds that since we should not tolerate those who are intolerant of others’ views the state should – under the banner of upholding freedom of expression – act against intolerant speech, the circle of confusion is complete.” Speech vs. Speech

November 2021: “A systemic [safety] duty would relate to systems and processes that for whatever reason are to be treated as intrinsically risky.

The question that then arises is what activities are to be regarded as inherently risky. It is one thing to argue that, for instance, some algorithmic systems may create risks of various kinds. It is quite another to suggest that that is true of any kind of U2U platform, even a simple discussion forum. If the underlying assumption of a systemic duty of care is that providing a facility in which individuals can speak to the world is an inherently risky activity, that (it might be thought) upends the presumption in favour of speech embodied in the fundamental right of freedom of expression.” The draft Online Safety Bill: systemic or content-focused?

March 2022"It may seem like overwrought hyperbole to suggest that the Bill lays waste to several hundred years of fundamental procedural protections for speech. But consider that the presumption against prior restraint appeared in Blackstone’s Commentaries (1769). It endures today in human rights law. That presumption is overturned by legal duties that require proactive monitoring and removal before an independent tribunal has made any determination of illegality. It is not an answer to say, as the government is inclined to do, that the duties imposed on providers are about systems and processes rather than individual items of content. For the user whose tweet or post is removed, flagged, labelled, throttled, capped or otherwise interfered with as a result of a duty imposed by this legislation, it is only ever about individual items of content." Mapping the Online Safety Bill

March 2023"In a few months’ time three years will have passed since the French Constitutional Council struck down the core provisions of the Loi Avia ... the decision makes uncomfortable reading for some core aspects of the Online Safety Bill." Five lessons from the Loi Avia

Rule of law

State of Play: Once the decision was made to enact a framework designed to give flexibility to a regulator (Ofcom), rule of law concerns around certainty and foreseeability of content rules and decisions were bound to come to the fore. These issues are part and parcel of the government's chosen policy approach.

March 2019: “Close scrutiny of any proposed social media duty of care from a rule of law perspective can help ensure that we make good law for bad people rather than bad law for good people.” A Ten Point Rule of Law Test for a Social Media Duty of Care

June 2019: “The White Paper, although framed as regulation of platforms, concerns individual speech. The platforms would act as the co-opted proxies of the state in regulating the speech of users. Certainty is a particular concern with a law that has consequences for individuals' speech. In the context of an online duty of care the rule of law requires that users must be able to know with reasonable certainty in advance what speech is liable to be the subject of preventive or mitigating action by a platform operator operating under the duty of care.” Speech is not a tripping hazard

May 2020"If you can't articulate a clear and certain rule about speech, you don't get to make a rule at all." Disinformation and Online Harms

June 2020: “The proposed Online Harms legislation falls squarely within [the legality] principle, since internet users are liable to have their posts, tweets, online reviews and every other kind of public or semi-public communication interfered with by the platform to which they are posting, as a result of the duty of care to which the platform would be subject. Users, under the principle of legality, must be able to foresee, with reasonable certainty, whether the intermediary would be legally obliged to interfere with what they are about to say online.” Online Harms and the Legality Principle

September 2020: “If we are to gain an understanding of where the lines would be drawn, we cannot shelter behind comfortable abstractions. We have to grasp the nettle of concrete examples, however uncomfortable that may be. That is important from the perspective not only of the intermediary, but of the user. From a rule of law standpoint, it is imperative that the user should be able to predict, in advance, with reasonable certainty, whether what they wish to say is likely to be affected by the actions of an intermediary seeking to discharge its duty of care.” Submission to Ofcom Call for Evidence

September 2020: “…the purpose of these examples is less about what the answer is in any given case (although that is of course important in terms of whether the line is being drawn in the right place), but more about whether we are able to predict the answer in advance. If a legal framework does not enable us to predict clearly, in advance, what the answer is in each case, then there is no line and the framework falls at the first rule of law hurdle of “prescribed by law”. It is not sufficient to make ad hoc pronouncements about what the answer is in each case, or to invoke high level principles. We have to know why the answer is what it is, expressed in terms that enable us to predict with confidence the answer in other concrete cases.” Submission to Ofcom Call for Evidence

August 2022: “The principled way to address speech considered to be beyond the pale is for Parliament to make clear, certain, objective rules about it – whether that be a criminal offence, civil liability on the user, or a self-standing rule that a platform is required to apply. Drawing a clear line, however, requires Parliament to give careful consideration not only to what should be caught by the rule, but to what kind of speech should not be caught, even if it may not be fit for a vicar’s tea party. Otherwise it draws no line, is not a rule and fails the rule of law test: that legislation should be drawn so as to enable anyone to foresee, with reasonable certainty, the consequences of their proposed action.” Reimagining the Online Safety Bill

Regulation by regulator

State of Play: A regulatory model akin to broadcast-style regulation by regulator has been part of the government's settled approach from the start. Changing that would require a rethink of the Bill.

June 2018: “The choice is not between regulating or not regulating. If there is a binary choice (and there are often many shades in between) it is between settled laws of general application and fluctuating rules devised and applied by administrative agencies or regulatory bodies; it is between laws that expose particular activities, such as search or hosting, to greater or less liability; or laws that visit them with more or less onerous obligations; it is between regimes that pay more or less regard to fundamental rights; and it is between prioritising perpetrators or intermediaries.

Such niceties can be trampled underfoot in the rush to do something about the internet. Existing generally applicable laws are readily overlooked amid the clamour to tame the internet Wild West, purge illegal, harmful and unacceptable content, leave no safe spaces for malefactors and bring order to the lawless internet. … We would at our peril confer the title and powers of Governor of the Internet on a politician, civil servant, government agency or regulator.” Regulating the internet – intermediaries to perpetrators

October 2018"[W]hen regulation by regulator trespasses into the territory of speech it takes on a different cast. Discretion, flexibility and nimbleness are vices, not virtues, where rules governing speech are concerned. The rule of law demands that a law governing speech be general in the sense that it applies to all, but precise about what it prohibits. Regulation by regulator is the converse: targeted at a specific group, but laying down only broadly stated goals that the regulator should seek to achieve. A Lord Chamberlain for the internet? Thanks, but no thanks

October 2018: "It is hard not to think that an internet regulator would be a politically expedient means of avoiding hard questions about how the law should apply to people’s behaviour on the internet. Shifting the problem on to the desk of an Ofnet might look like a convenient solution. It would certainly enable a government to proclaim to the electorate that it had done something about the internet. But that would cast aside many years of principled recognition that individual speech should be governed by the rule of law, not the hand of a regulator.

If we want safety, we should look to the general law to keep us safe. Safe from the unlawful things that people do offline and online. And safe from a Lord Chamberlain of the Internet." A Lord Chamberlain for the internet? Thanks, but no thanks

March 2019"...the regulator is not an alchemist. It may be able to produce ad hoc and subjective applications of vague precepts, and even to frame them as rules, but the moving hand of the regulator cannot transmute base metal into gold. Its very raison d'etre is flexibility, discretionary power and nimbleness. Those are a vice, not a virtue, where the rule of law is concerned, particularly when freedom of individual speech is at stake.” A Ten Point Rule of Law Test for a Social Media Duty of Care

May 2019: “Individual speech is different. What is a permissible regulatory model for broadcast is not necessarily justifiable for individuals, as was recognised in the US Communications Decency Act case (Reno v ACLU) in the 1990s. … In these times it is hardly fashionable, outside the USA, to cite First Amendment jurisprudence. Nevertheless, the proposition that individual speech is not broadcast should carry weight in a constitutional or human rights court in any jurisdiction.” The Rule of Law and the Online Harms White Paper

June 2019: “A Facebook, Twitter or Mumsnet user is not an invited audience member on a daytime TV show, but someone exercising their freedom to speak to the world within clearly defined boundaries set by the law. A policy initiative to address behaviour online should take that principle as its starting point and respect and work within it. The White Paper does not do so. It cannot be assumed that an acceptable mode of regulation for broadcast is appropriate for individual speech. The norm in the offline world is that individual speech should be governed by general laws, not by a discretionary regulator.” Speech is not a tripping hazard

February 2020: “Consider the days when unregulated theatres were reckoned to be a danger to society and the Lord Chamberlain censored plays. That power was abolished in 1968, to great rejoicing. The theatres were liberated. They could be as rude and controversial as they liked, short of provoking a breach of the peace.

The White Paper proposes a Lord Chamberlain for the internet. Granted, it would be an independent regulator, similar to Ofcom, not a royal official. It might even be Ofcom itself. But the essence is the same. And this time the target would not be a handful of playwrights out to shock and offend, but all of us who use the internet.” Online Harms IFAQ

June 2020: “Broadcast-style regulation is the exception, not the norm. In domestic UK legislation it has never been thought appropriate, either offline or online, to subject individual speech to the control of a broadcast-style discretionary regulator. That is as true for the internet as for any other medium.” Online Harms Revisited

Analogy wars

October 2018"Setting regulatory standards for content means imposing more restrictive rules than the general law. That is the regulator’s raison d’etre. But the notion that a stricter standard is a higher standard is problematic when applied to what we say. Consider the frequency with which environmental metaphors – toxic speech, polluted discourse – are now applied to online speech. For an environmental regulator, cleaner may well be better. The same is not true of speech." A Lord Chamberlain for the internet? Thanks, but no thanks

October 2018: “[N]o analogy is perfect. Although some overlap exists with the safety-related dangers (personal injury and damage to property) that form the subject matter of occupiers’ liability to visitors and of corresponding common law duties of care, many online harms are of other kinds. Moreover, it is significant that the duty of care would consist in preventing behaviour of one site visitor to another.

The analogy with public physical places suggests that caution is required in postulating duties of care that differ markedly from those, both statutory and common law, that arise from the offline occupier-visitor relationship.” Take care with that social media duty of care

May 2021: “Welcome to the Online Regulation Analogy Collection: speech as everything that it isn't (and certainly not as the freedom that underpins all other freedoms)” (Twitter thread)

What’s illegal offline is illegal online

State of Play: Amongst all the narratives that have infused the Online Harms debate, the mantra of online-offline equivalence has been one of the longest-running.

February 2022: "Overall, the government has pursued its quest for online safety under the Duty of Care banner, bolstered with the slogan “What Is Illegal Offline Is Illegal Online”.

That slogan, to be blunt, has no relevance to the draft Bill. Thirty years ago there may have been laws that referred to paper, post, or in some other way excluded electronic communication and online activity. Those gaps were plugged long ago. With the exception of election material imprints (a gap that is being fixed by a different Bill currently going through Parliament), there are no criminal offences that do not already apply online (other than jokey examples like driving a car without a licence).

On the contrary, the draft Bill’s Duty of Care would create novel obligations for both illegal and legal content that have no comparable counterpart offline. The arguments for these duties rest in reality on the premise that the internet and social media are different from offline, not that we are trying to achieve offline-online equivalence." Harm Version 4.0 - The Online Harms Bill in metamorphosis

December 2022: “DCMS’s social media infographics once more proclaim that ‘What is illegal offline is illegal online’.

The underlying message of the slogan is that the Bill brings online and offline legality into alignment. Would that also mean that what is legal offline is (or should be) legal online? The newest Culture Secretary Michelle Donelan appeared to endorse that when announcing the abandonment of ‘legal but harmful to adults’: "However admirable the goal, I do not believe that it is morally right to censor speech online that is legal to say in person."

Commendable sentiments, but does the Bill live up to them? Or does it go further and make illegal online some of what is legal offline? I suggest that in several respects it does do that." (Some of) what is legal offline is illegal online

End-to-End Encryption

State of Play: The issue of end-to-end encryption, and the allied Ofcom power to require messaging platforms to deploy CSEA scantech, has been a slow burner. It will feature in Lords amendments.

June 2019: “What would prevent the regulator from requiring an in-scope private messaging service to remove end-to-end encryption? This is a highly sensitive topic which was the subject of considerable Parliamentary debate during the passage of the Investigatory Powers Bill. It is unsuited to be delegated to the discretion of a regulator.” Speech is not a tripping hazard

May 2020"This is the first indication that the government is alive to the possibility that a regulator might be able to interpret a duty of care so as to affect the ability of an intermediary to use end to end encryption." A Tale of Two Committees

November 2022: “Ofcom will be given the power to issue a notice requiring a private messaging service to use accredited technology to scan for CSEA material. A recent government amendment to the Bill provides that a provider given such a notice has to make such changes to the design or operation of the service as are necessary for the technology to be used effectively. That opens the way to requiring E2E encryption to be modified if it is incompatible with the accredited technology - which might, for instance, involve client-side scanning. Ofcom can also require providers to use best endeavours to develop or source their own scanning technology.” How well do you know the Online Safety Bill?

New offences

State of Play: The Bill introduces several new offences that could be committed by users. The proposal to enact a new harmful communications offence was dropped after well-founded criticism, leaving the notorious S.127(1) of the Communications Act 2003 in place. The government is expected to introduce more offences.

A backbench Lords amendment seeks to add the new false and threatening communications offences to the list of priority illegal content that platforms would have to proactively seek out and remove.

March 2022: “The threatening communications offence ought to be uncontroversial. However, the Bill adopts different wording from the Law Commission’s recommendation. That focused on threatening a particular victim (the ‘object of the threat’, in the Law Commission’s language). The Bill’s formulation may broaden the offence to include something more akin to use of threatening language that might be encountered by anyone who, upon reading the message, could fear that the threat would be carried out (whether or not against them).

It is unclear whether this is an accident of drafting or intentional widening. The Law Commission emphasised that the offence should encompass only genuine threats: “In our view, requiring that the defendant intend or be reckless as to whether the victim of the threat would fear that the defendant would carry out the threat will ensure that only “genuine” threats will be within the scope of the offence.” (emphasis added) It was on this basis that the Law Commission considered that another Twitter Joke Trial scenario would not be a concern.” Mapping the Online Safety Bill

February 2023: “Why has the government used different language from the Law Commission's recommendation for the threatening communications offence? The concern is that the government’s rewording broadens the offence beyond the genuine threats that the Law Commission intended should be captured. The spectre of the Twitter Joke Trial hovers in the wings.” (Twitter thread)

Extraterritoriality

State of Play: The territorial reach of the Bill has attracted relatively little attention. As a matter of principle, territorial overreach is to be deprecated, not least because it encourages similar lack of jurisdictional self-restraint on the part of other countries.

December 2020: “For the first time, the Final Response has set out the proposed territorial reach of the proposed legislation. Somewhat surprisingly, it appears to propose that services should be subject to UK law on a ‘mere availability of content’ basis. Given the default cross-border nature of the internet, this is tantamount to legislating extraterritorially for the whole world. It would follow that any provider anywhere in the rest of the world would have to geo-fence its service to exclude the UK in order to avoid engaging UK law. Legislating on a mere availability basis has been the subject of criticism over many years since the advent of the internet.” The Online Harms edifice takes shape

March 2022: “The Bill maintains the previous enthusiasm of the draft Bill to legislate for the whole world.

The safety duties adopt substantially the same expansive definition of ‘UK-linked’ as previously: (a) a significant number of UK users; or (b) UK users form one of the target markets for the service (or the only market); or (c) there are reasonable grounds to believe that there is a material risk of significant harm to individuals in the UK presented by user-generated content or search content, as appropriate for the service. Whilst a targeting test is a reasonable way of capturing services provided to UK users from abroad, the third limb verges on ‘mere accessibility’. That suggests jurisdictional overreach. As to the first limb, the Bill says nothing about how ‘significant’ should be evaluated.” Mapping the Online Safety Bill