Regulating speech

The internet has long been hailed as a means of democratizing the expression of opinions. Anyone can express themselves on Twitter, on Facebook, on blogs, or in countless chat rooms.

No longer constrained by the availability of a Speaker’s Corner in a public square, marginalized voices on the internet aren’t subject to the discretion of a publisher seeking to conserve valuable column inches of newsprint or limited minutes of airtime. Online, you are limited only by your ability to attract eyeballs interested in your perspectives.

And that creates a problem.

To attract an audience, people game various systems to improve their positioning on search engines, or pay to ‘promote’ their posts on a variety of social media platforms. Because the platforms treat these promotion capabilities as a feature (and not a bug), bad actors have been able to use these tools to spread misinformation. Many users have been unable to distinguish trusted sources of information from untrustworthy ones, and some commentators have charged that this influenced the results of the 2016 elections in the United States.

As a result, democracies around the world are turning their minds to the issue of regulating the kinds of communications being expressed on the internet. Earlier this week, the UK government issued its Online Harms White Paper, which states:

In the wrong hands the internet can be used to spread terrorist and other illegal or harmful content, undermine civil discourse, and abuse or bully other people. Online harms are widespread and can have serious consequences.

This White Paper therefore puts forward ambitious plans for a new system of accountability and oversight for tech companies, moving far beyond self-regulation. A new regulatory framework for online safety will make clear companies’ responsibilities to keep UK users, particularly children, safer online with the most robust action to counter illegal content and activity.

The UK plan envisions an independent regulator that would set safety standards and reporting requirements, armed with enforcement powers.

The UK White Paper has a section on The Duty of Care:

7.4 As indication of their compliance with their overarching duty of care to keep users safe, we envisage that, where relevant, companies in scope will:

  • Ensure their relevant terms and conditions meet standards set by the regulator and reflect the codes of practice as appropriate.
  • Enforce their own relevant terms and conditions effectively and consistently.
  • Prevent known terrorist or CSEA content being made available to users.
  • Take prompt, transparent and effective action following user reporting.
  • Support law enforcement investigations to bring criminals who break the law online to justice.
  • Direct users who have suffered harm to support.
  • Regularly review their efforts in tackling harm and adapt their internal processes to drive continuous improvement.

7.5 To help achieve these outcomes, we expect the regulator to develop codes of practice that set out:

  • Steps to ensure products and services are safe by design.
  • Guidance about how to ensure terms of use are adequate and are understood by users when they sign up to use the service.
  • Measures to ensure that reporting processes and processes for moderating content and activity are transparent and effective.
  • Steps to ensure harmful content or activity is dealt with rapidly.
  • Processes that allow users to appeal the removal of content or other responses, in order to protect users’ rights online.
  • Steps to ensure that users who have experienced harm are directed to, and receive, adequate support.
  • Steps to monitor, evaluate and improve the effectiveness of their processes.

It is clear that not all illegal content on the internet can be policed using conventional methods. For example, how can we deal with content that is hosted in another country but violates our laws?

In its White Paper, the UK is consulting on the need for extraordinary tools to deal with the global nature of communications and the “particularly serious nature of some of the harms”:

  • Disruption of business activities. In the event of extremely serious breaches, such as a company failing to take action to stop terrorist use of their services, it may be appropriate to force third party companies to withdraw any service they provide that directly or indirectly facilitates access to the services of the first company, such as search results, app stores, or links on social media posts. These measures would need to be compatible with the European Convention on Human Rights.
  • ISP blocking. Internet Service Provider (ISP) blocking of non-compliant websites or apps – essentially blocking companies’ platforms from being accessible in the UK – could be an enforcement option of last resort. This option would only be considered where a company has committed serious, repeated and egregious violations of the outcome requirements for illegal harms, failing to maintain basic standards after repeated warnings and notices of improvement. Deploying such an option would be a decision for the independent regulator alone. While we recognise that this would have technical limitations, it could have sufficient impact to act as a powerful deterrent. The British Board of Film Classification (BBFC) will have this power to address non-compliance when the requirements for age verification on online pornography sites come into force. We are exploring a range of options in this space, from a requirement on ISPs to block websites or apps following notification by the regulator, through to the regulator issuing a list of companies that have committed serious, repeated and egregious violations, which ISPs could choose to block on a voluntary basis.
  • Senior management liability. We are exploring possible options to create new liability for individual senior managers. This would mean certain individuals would be held personally accountable in the event of a major breach of the statutory duty of care. This could involve personal liability for civil fines, or could even extend to criminal liability. In financial services, the introduction of the Senior Managers & Certification Regime has driven a culture change in risk management in the sector. Another recent example of government action is establishing corporate offences of failure to prevent the criminal facilitation of tax evasion. Recent changes to the Privacy and Electronic Communications Regulations (PECR) provide powers to assign liability to a specific person or position within an organisation. However, this is as yet largely untested. There are a range of options for how this could be applied to companies in scope of the online harms framework, and a number of challenges, such as identifying which roles should be prescribed and whether this can be proportionate for small companies.

The UK White Paper runs roughly 100 pages, setting out a “vision for online safety, including a new regulatory framework to tackle a broad range of harms”. It appears to be applying serious thought to a serious issue.

There are significant policy issues to be explored. It is difficult to imagine how Canada could conduct a consultation and have a regime in place prior to the next federal election.

Canada’s Minister of Democratic Institutions has been socializing plans for “Safeguarding our Elections” and earlier this week warned that “the world’s major social media companies are not doing enough to help Canada combat potential foreign meddling in this October’s elections and the government might have to regulate them”.

Shortly afterwards, Facebook banned a number of Canadian accounts, including one belonging to a former candidate for mayor of Toronto.

If the content of these pages crosses the line that defines illegal content, then it is understandable why the pages should be banned.

But what if the content is merely offensive, without being illegal? How do we ensure that actions to block content are consistent with Canada’s Charter of Rights and Freedoms, which guarantees everyone’s “freedom of thought, belief, opinion and expression, including freedom of the press and other media of communication”?
