Here, a new approach may be emerging, as the voices calling for more attention to the spread of harmful material, rather than to its mere existence, grow louder. The House of Lords Committee report on freedom of expression proposed that a large part of the bill’s provisions on “lawful but harmful” content be replaced by a new design-based obligation, requiring platforms to take steps to ensure that “their design choices, such as reward mechanisms, choice architecture, and content curation algorithms, mitigate the risk of encouraging and amplifying uncivil content.” The committee also recommends that the largest platforms empower users to make their own choices about what kinds of content they see and from whom. Richard Wingfield of Global Partners Digital told me, “If content-ranking processes were more transparent and openly available, social media companies could be required to be open to integrating alternative algorithms, developed by third parties, that users could choose for moderating and ranking what they see online.”

These proposals are long overdue. It is not the existence of abuse and disinformation that is new in the digital era, but their viral transmission. For too long, a heady combination of commercial incentives and a lack of transparency, accountability and user empowerment has exponentially expanded the reach of shocking, emotive and divisive content. These design-based proposals are likely to meet resistance from the platforms; even the Facebook Oversight Board has been unable to obtain information from Facebook about its algorithms. But they begin to tackle society’s true concern about lawful but harmful content: not that it is said, but that it is spread.
Arguably, the bill should address not only platform design but also, like the EU’s European Democracy Action Plan, the deliberate use of manipulative techniques such as disinformation by those who abuse social media platforms to distort public and political opinion or to sow societal division.

If the British government can take one comfort from the slew of criticism of the draft bill, it is that the criticism has come in equal measure from all sides of the debate over online harms. The bill’s structure is complex, and for many its provisions are overly vague, not least its definition of harm. Some are concerned that its skeletal framework makes implementation impossible to anticipate and entirely dependent on eventual Ofcom codes of practice; others see this incremental approach as a positive, permitting sensible regulatory evolution over time. For platforms, its provisions may be too onerous; others may consider that platforms are given too much power to police online speech. For champions of freedom of expression, the bill’s imposition on platforms of a duty to “have regard to,” or take into account, the importance of protecting users’ right to free speech is inadequate, providing no bulwark against the bill’s inroads on free speech or the risk of a chilling effect from over-implementation. Privacy advocates argue that, despite a requirement to take privacy into consideration, the bill would legitimate far more intensive scrutiny of personal communications, including encrypted messaging, than current practice. The bill’s omissions are also drawing objections: it does not cover online advertising fraud, despite the recommendations of a Parliamentary committee; it does not give Ofcom or social media platforms powers to tackle urgent threats to public safety and security; and it does not directly tackle the complex issue of anonymity.
The media, already threatened by social media’s business model, doubt whether the bill’s protections for journalistic content are sufficiently robust.

Social media regulation is vital: the government, not commercial interests, is the democratic guardian of the public interest. The Online Safety Bill is a forerunner in proposing a risk-based regulatory model for tackling online harms, in contrast to approaches that ride roughshod over human rights and media freedom by banning perceived harms such as “fake news.” Despite the criticisms, the bill, with its creation of social media duties, transparency and accountability to a strong, independent regulator, is an overwhelmingly positive development. Now is the time to reconsider the aspects that could damage human rights, particularly the clauses on lawful but harmful content, and to replace them with new provisions that tackle the core of the problem of online harms.
Several years into the debate over online harms, the tensions between freedom of expression and protection against harm are familiar but unresolved.
Kate Jones is an associate fellow with the International Law Program at Chatham House, a senior associate with Oxford Information Labs, and an associate of the Oxford Human Rights Hub. She previously spent many years as a lawyer and diplomat with the U.K. Foreign and Commonwealth Office, serving in London, Geneva and Strasbourg. She has also directed Oxford University’s Diplomatic Studies Program. Follow her on Twitter @KateJones77.