The U.K. Takes a Stab at Regulating Social Media Platforms

Editor’s Note: Guest columnist Kate Jones is filling in this week for Emily Taylor.

Efforts to regulate social media platforms are gathering pace in the United Kingdom. In May, the British government published its draft Online Safety Bill, which will be studied this autumn by a joint committee of MPs and peers chaired by MP Damian Collins. Collins led Parliament’s exposé of the 2018 Cambridge Analytica scandal and is a leading U.K. voice on disinformation and digital regulation. In parallel, the House of Commons’ Sub-Committee on Online Harms and Disinformation will lead its own enquiry into the bill. These enquiries come hard on the heels of a House of Lords committee report on freedom of expression in the digital age, published last month, and a report from the Law Commission recommending modernization of communications offenses in the U.K.

Given how new social media regulation is as a policy focus, the Online Safety Bill presents an opportunity for the U.K. to demonstrate the strength of its independent regulatory approach, even as the European Union develops its draft Digital Services Act in parallel. A wide range of industry and civil society voices are involved in the U.K.’s digital regulation discussion. Yet despite the vibrant conversation, the challenges of regulating social media remain far from resolved.

The 133-page bill would create a skeletal framework for regulation that would be fleshed out over time with both secondary legislation and regulatory codes of practice. This framework approach allows for flexibility and incremental development, but it has attracted criticism for the extent of discretion it allows Ofcom, the U.K.’s independent communications regulator. Similarly, the bill arguably empowers the secretary of state for digital, culture, media and sport to wield considerable political influence over the scope of free speech with little parliamentary oversight.

The bill would impose “duties of care” on an estimated 24,000 social media and internet search platforms operating in the U.K., ranging in size from Facebook to local discussion groups, requiring them to assess and manage the risks of illegal content appearing on their services and, if children are likely to access them, the risks of content harmful to children. The “duty of care” language means platforms would have to adopt adequate processes, rather than achieve fail-safe results. The largest platforms must also assess the risks of content that is “lawful but harmful” to adults appearing on their services, and they are required to specify in clear, accessible terms of service how they deal with this content. Platforms must also allow users to easily report content they consider illegal or harmful, and must have a complaints procedure for users. There is a carve-out for journalistic content.

Several years into the debate over online harms, the tensions between freedom of expression and protection against harm remain familiar but unsolved. Perhaps nowhere are these tensions more visible than in the question of whether platforms should have legal obligations regarding content that is lawful but harmful to adults, a thorny topic that the draft EU legislation, in contrast to the U.K.’s, declines to grapple with. On the one hand, the U.K. bill’s inclusion of “lawful but harmful” content has been widely criticized as legitimating censorship and restriction of speech, in contravention of human rights law.
On the other, there is serious concern about online speech and content that does not meet thresholds of illegality but can and does cause harm. This includes online expressions of racism, misogyny and abuse, as seen vividly in England in the wake of the European football championships, as well as disinformation that can have a major impact on security and democracy, as the current wave of falsehoods about COVID-19 vaccines amply illustrates.

Here, a new approach may be emerging, as the voices calling for more attention to the spread of harmful material, rather than to its mere existence, grow louder. The House of Lords committee report on freedom of expression proposed that a large part of the bill’s provisions on “lawful but harmful” content be replaced by a new design-based obligation, requiring platforms to take steps to ensure that “their design choices, such as reward mechanisms, choice architecture, and content curation algorithms, mitigate the risk of encouraging and amplifying uncivil content.”

The committee also recommends that the largest platforms empower users to make their own choices about what kinds of content they see and from whom. Richard Wingfield of Global Partners Digital told me, “If content-ranking processes were more transparent and openly available, social media companies could be required to be open to integrating alternative algorithms, developed by third parties, that users could choose for moderating and ranking what they see online.”

These proposals are long overdue. It is not the existence of abuse and disinformation that is new in the digital era, but their viral transmission. For too long, a heady combination of commercial incentives and a lack of transparency, accountability and user empowerment has exponentially expanded the reach of shocking, emotive and divisive content.

These design-based proposals are likely to meet resistance from the platforms; even the Facebook Oversight Board has been unable to gain access to information from Facebook about its algorithms. But they begin to tackle society’s true concern about lawful but harmful content: not that it is said, but that it is spread.

Arguably, the bill should not only address platform design but also, like the EU’s European Democracy Action Plan, tackle the deliberate use of manipulative techniques, such as disinformation, by those who abuse social media platforms to distort public and political opinion or to sow societal division.

If the British government can take one comfort from the slew of criticism of the draft bill, it is that the criticism has come in equal measure from all sides of the debate over online harms. The bill’s structure is complex, and for many, its provisions are overly vague, not least its definition of harm. Some are concerned that its skeletal framework makes implementation impossible to anticipate and entirely dependent on eventual Ofcom codes of practice. Others see this incremental approach as a positive, permitting sensible regulatory evolution over time.

For platforms, the bill’s provisions may be too onerous; others may consider that platforms are accorded too much power to police online speech. For champions of freedom of expression, the bill’s imposition on platforms of a duty to “have regard to,” or take into account, the importance of protecting users’ right to free speech is inadequate, providing no bulwark against the bill’s onslaught on free speech or against the risk of a chilling effect from over-implementation. Privacy advocates argue that, despite a requirement to take privacy into consideration, the bill would legitimate far more intensive scrutiny of personal communications, including encrypted messaging, than present practice allows.

The bill’s omissions are also attracting objections. It does not cover online advertising fraud, despite the recommendations of a parliamentary committee. It does not give Ofcom or social media platforms powers to tackle urgent threats to public safety and security. And it does not directly address the complex issue of anonymity. The media, already threatened by social media’s business model, doubt whether the bill’s protections for journalistic content are sufficiently robust.

Social media regulation is vital, as the government, not commercial interests, is the democratic guardian of the public interest. The Online Safety Bill is a forerunner in proposing a risk-based regulatory model for tackling online harms, in contrast to regulatory approaches that ride roughshod over human rights and media freedom by banning perceived harms such as “fake news.”

Despite the criticisms it has attracted, the bill, with its creation of social media duties, transparency and accountability to a strong, independent regulator, is an overwhelmingly positive development. Now is the time to reconsider the aspects that could damage human rights, particularly the clauses on lawful but harmful content, and to replace them with new provisions that would tackle the core of the problem of online harms.

Kate Jones is an associate fellow with the International Law Program at Chatham House, a senior associate with Oxford Information Labs, and an associate of the Oxford Human Rights Hub. She previously spent many years as a lawyer and diplomat with the U.K. Foreign and Commonwealth Office, serving in London, Geneva and Strasbourg. She has also directed Oxford University’s Diplomatic Studies Program. Follow her on Twitter @KateJones77.
