Regulating misinformation

What should be the role of government in regulating misinformation?

That is an important question being considered in Canada and around the world as governments seek solutions to online harms and the spread of misinformation. My own views on the subject have been evolving, as I wrote earlier this year.

As the Center for News, Technology and Innovation (CNTI) writes, “the credibility of information the public gets online has become a global concern. Of particular importance… is the impact of disinformation – false information created or spread with the intention to deceive or harm – on electoral processes, political violence and information systems around the world.”

It’s important to distinguish between “hate” and that which is “merely offensive”. We may not like encountering offensive content, but does that mean there should be legal restrictions preventing it? Readers have seen me frequently refer to Michael Douglas’ address in Aaron Sorkin’s “The American President”: “You want free speech? Let’s see you acknowledge a man whose words make your blood boil, who’s standing center stage and advocating at the top of his lungs that which you would spend a lifetime opposing at the top of yours.”

My post in January referred to a Newsweek article in which Aviva Klompas and John Donohoe wrote:

The old saying goes, sticks and stones may break my bones, but words will never hurt me. Turns out that when those words are propelled by online outrage algorithms, they can be every bit as dangerous as the proverbial sticks and stones.

When it comes to social media, the reality is: if it enrages, it engages… Eliciting outrage drives user engagement, which in turn drives profits.

But my views are also informed by years living in the United States, a country that has enshrined speech freedoms in its constitution.

As CNTI notes, “Addressing disinformation is critical, but some regulative approaches can put press freedom and human rights at great risk.”

Ben Sperry provides another perspective in a paper soon to be published in the Gonzaga Law Review. “The thesis of this paper is that the First Amendment forecloses government agents’ ability to regulate misinformation online, but it protects the ability of private actors — i.e., the social-media companies themselves — to regulate misinformation on their platforms as they see fit.”

The Sperry paper concludes that in the US, regulating misinformation cannot be government mandated. Government could “invest in telling their own version of the facts”, but it has “no authority to mandate or pressure social-media companies into regulating misinformation.”

So, if government can’t mandate how misinformation is handled, by what right can social media companies edit or block content? The author discusses why the “state-action doctrine” protects private intermediaries. According to Sperry, social media platforms are best positioned to make decisions about the benefits and harms of speech through their moderation policies.

He argues that social media platforms need to balance the interests of users on each side in order to maximize value. This includes setting moderation rules to keep users engaged. That will tend to increase the opportunities for generating advertising revenues.

Canada does not yet have the same history of constitutional protection of speech rights as the United States. However, most social media platforms used here are US tech companies. Any Canadian legislation regulating online misinformation is bound to attract concerns from the United States.

About a year and a half ago, Konrad von Finckenstein and Peter Menzies released a relevant paper for the Macdonald-Laurier Institute. In “Social media responsibility and free speech: A new approach for dealing with ‘Internet Harms’” [pdf, 619KB], the authors say that Canada’s approach to date has missed the mark. “Finckenstein and Menzies note that the only bodies with the ability and legitimacy to combat online harms are social media companies themselves. What is needed is legislation that establishes a regime of responsibility for social media companies.” Their paper proposes legislation that would protect free expression online while confronting disinformation, unrestrained hate speech, and other challenges.

The UK Online Safety Bill is continuing to work its way through the British Parliament.

Canada already has laws prohibiting the wilful promotion of hatred, as applied in a recent case in Quebec. In that case, a man was convicted of promoting hatred against Jews in articles written for the neo-Nazi website, the Daily Stormer. He was sentenced to 15 months in jail with three years of probation.

Does Canada need to introduce specific online harms legislation?

What is the right approach?

These papers provide perspectives worthy of consideration by policy makers.
