Reviewing online harms legislation

The mandate letters for the Minister of Canadian Heritage and for the Minister of Justice and Attorney General each contain a section calling for the Ministers “to develop and introduce legislation as soon as possible to combat serious forms of harmful online content to protect Canadians and hold social media platforms and other online services accountable for the content they host”.

It is worth examining the report of the Joint Committee on the Draft Online Safety Bill, issued by the UK House of Lords and House of Commons in mid-December [pdf, 2.0 MB].

The UK report opens powerfully:

Self-regulation of online services has failed. Whilst the online world has revolutionised our lives and created many benefits, underlying systems designed to service business models based on data harvesting and microtargeted advertising shape the way we experience it. Algorithms, invisible to the public, decide what we see, hear and experience. For some service providers this means valuing the engagement of users at all costs, regardless of what holds their attention. This can result in amplifying the false over the true, the extreme over the considered, and the harmful over the benign.

The human cost can be counted in mass murder in Myanmar, in intensive care beds full of unvaccinated Covid-19 patients, in insurrection at the US Capitol, and in teenagers sent down rabbit holes of content promoting self-harm, eating disorders and suicide. This has happened because for too long the major online service providers have been allowed to regard themselves as neutral platforms which are not responsible for the content that is created and shared by their users. Yet it is these algorithms which have enabled behaviours which would be challenged by the law in the physical world to thrive on the internet. If we do nothing these problems will only get worse. Our children will pay the heaviest price. That is why the driving force behind the Online Safety Bill is the belief that these companies must be held liable for the systems they have created to make money for themselves.

At 193 pages, it is a lengthy but worthwhile read. It includes 127 recommendations for the draft legislation, set out over 25 pages.

Having been involved in a number of projects and committees with the late human rights lawyer Alan Borovoy, I am sympathetic to his view that “We should not censor those making racist statements but censure them.” Professor Daniel Lyons of Boston College writes, “Traditionally, the solution to bad speech is more speech, not censorship.” He expresses concern that attempting to control the flow of information online, like other attempts to craft digital legislation, can lead to unintended consequences.

Big Tech’s critics are correct that platforms can be misused to disseminate socially undesirable content online. But the remedy is to address the source of the problem — harmful users and the social conditions that make their messages attractive to others.

Still, it is difficult to reconcile accepting content in a digital stream that would be unacceptable in print, or, as the UK report writes, enabling “behaviours which would be challenged by the law in the physical world to thrive on the internet.”

Following the government’s consultation process, those crafting new laws will need to maintain a careful balance. Perhaps those drafting Canada’s legislation should consider some of the UK recommendations in their initial draft.

“If we do nothing these problems will only get worse. Our children will pay the heaviest price.”

Online platform accountability

Online platform accountability is a significant piece of Canada’s proposed online harms legislation, Bill C-63.

Much has been written about the Online Harms Act already. Observers note the legislation contains three distinct parts: Internet platform regulation; the return of Section 13 of the Canadian Human Rights Act; and, perhaps most controversial, the inclusion of Criminal Code provisions.

As I have written before, my own thinking has been heavily influenced by the late Alan Borovoy, a great Canadian civil rights lawyer, who used to say we should censure, not censor, those who spew hate speech.

I ran across a relevant paper from the International Center for Law and Economics (ICLE) that contributes to the discussion. Two and a half years ago, Geoffrey A. Manne, Kristian Stout, and Ben Sperry wrote “Who Moderates the Moderators?: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet” in the context of Section 230 of the US Communications Decency Act of 1996.

To the extent that the current legal regime permits social harms online that exceed concomitant benefits, it should be reformed to deter those harms if such reform can be accomplished at sufficiently low cost. The salient objection to Section 230 reform is not one of principle, but of practicality: are there effective reforms that would address the identified harms without destroying (or excessively damaging) the vibrant Internet ecosystem by imposing punishing, open-ended legal liability? We believe there are.

A common set of objections to Section 230 reform has grown out of legitimate concerns that the economic and speech gains that have accompanied the rise of the Internet over the last three decades would be undermined or reversed if Section 230’s liability shield were weakened. Our paper thus establishes a proper framework for evaluating online intermediary liability and evaluates the implications of the common objections to Section 230 reform within that context. Indeed, it is important to take those criticisms seriously, as they highlight many of the pitfalls that could attend imprudent reforms.

The ICLE authors assert that actual harms — violations of civil law and civil rights, violations of criminal law, and tortious conduct — are occurring on online platforms. These impose real costs on individuals and society at large, and therefore justify establishing a means to apply a measure of order and accountability to the online world.

According to ICLE, intermediary liability, applied to the platforms, can be a cost-effective approach to online platform accountability. The paper notes that “the fundamental principles that determine the dividing line between actionable and illegal or tortious content offline can and should be respected online, as well”.

I found the executive summary and the full paper [pdf, 1MB] worth the time to review.

Still, as I wrote on Twitter (X) a few weeks ago, I remain concerned that Canada’s Online Harms Act may not address the root cause of harms online. The Act may indeed reduce the number of despicable posts we see on digital platforms, but it will do nothing to reduce the number of despicable people who hold those despicable points of view. It will just be harder to find them.

Prebunk misinformation

Is it possible to prebunk misinformation?

Is there a vaccine for fake news?

A recent story on 60 Minutes caught my eye and steered me toward the Social Decision-Making Lab at Cambridge University. The director of the lab, Sander van der Linden, told 60 Minutes that misinformation – that which is outright false or incorrect – represents just a small part of people’s overall media diet. “The much bigger part is what we would refer to as misleading information, half-truths, biased narratives, information that is presented out of context.”

In collaboration with partners at Yale and George Mason University, the Cambridge lab recently published “Inoculating the Public against Misinformation about Climate Change”. The approach described in the study amounts to a kind of “psychological vaccine” against misinformation.

A growing body of research suggests that one promising way to counteract the politicization of science is to convey the high level of normative agreement (“consensus”) among experts about the reality of human-caused climate change. … evidence is provided that it is possible to pre-emptively protect (“inoculate”) public attitudes about climate change against real-world misinformation.

There is so much bad information circulating on the internet, much of it placed as commercially driven clickbait. Other sources include politically motivated state actors seeking to disrupt social cohesion. Misinformation and disinformation motivate many of the most contentious sections of Canada’s Online Harms Act, Bill C-63. The psychological research led by Cambridge suggests countering bad information with good information. The researchers found that our perception of what other groups believe serves as a cue for overall informational judgment. So, conveying the fact that scientists and experts agree on an issue can increase perceived consensus and acceptance across the ideological spectrum, either directly or indirectly.

The research suggests that communicating a scientific consensus on such issues as vaccines or human-caused climate change should be accompanied by information warning that politically or economically motivated actors may seek to undermine the findings. In effect, audiences should be provided with what they call a “cognitive repertoire” — a basic explanation about the disinformation campaigns — to try to pre-emptively refute such attempts. The research suggests that communicating a social fact, such as a high level of agreement among experts, can be “an effective and depolarizing public engagement strategy.”

According to van der Linden, “everyone is obsessed with influencing each other, but in fact there’s almost no program of research that looks at helping people resist unwanted attempts to persuade them. So that’s where my interest is: helping people resist persuasion when they don’t want it.”

Last fall, I wrote about the cost of misinformation. What if we found ways to prebunk misinformation, inoculating people to detect half-truths and lies online?

It seems unlikely that we will be able to block the flow of bad information. Does inoculation represent a better approach, enabling Canada to counter misinformation by censuring, not censoring?

Censure, not censor

Alan Borovoy, Canada’s great civil rights lawyer, used to say we should censure, not censor, those who spew hate speech.

He and I worked together on a committee many years ago. I would frequently give him a ride home afterwards, which gave us opportunities to chat. His views continue to influence my perspectives on Bill C-63, Canada’s Online Harms Act. An editorial in the Toronto Star (written to mark his passing in 2015) should be mandatory reading for parliamentarians reviewing the Bill.

Alan was the longtime general counsel of the Canadian Civil Liberties Association. The CCLA has called for “substantial amendments” to the Act.

Our preliminary read raises several serious concerns. While the CCLA endorses the declared purposes of upholding public safety, protecting children, and supporting marginalized communities, our initial assessment reveals that the bill includes overbroad violations of expressive freedom, privacy, protest rights, and liberty. These must be rectified before the bill is passed into law.

I referenced The Star’s tribute a couple of years ago, writing about early proposals for the Online Harms Act. It is worth another look. As The Star notes, Borovoy’s view, that even the most offensive speech deserved protection, would lead him into “clashes with others on the left.”

I have frequently cited Aaron Sorkin’s version of that perspective from the film The American President: “You want free speech? Let’s see you acknowledge a man whose words make your blood boil, who’s standing center stage and advocating at the top of his lungs that which you would spend a lifetime opposing at the top of yours.”

There are a number of recent articles highly critical of portions of the proposed Online Harms Act, especially Part 2, the amendments to the Criminal Code and the Canadian Human Rights Act. Michael Geist writes about why those provisions should be removed from the Act. Christine Van Geyn writes in the National Post that the proposed process creates financial incentives for filing complaints. Individuals face no costs in bringing a complaint — not even the costs of a lawyer — and could receive a $20,000 civil award if successful. “The process becomes the punishment even if the case does not proceed past an investigation.”

Last week, Andrew Coyne wrote “Canada’s Online Harms Act is revealing itself to be staggeringly reckless”, saying, “the more closely it was examined, the worse it appeared.”

There is, first, the proposal to increase the maximum penalty for promoting genocide from its current five years to life imprisonment. Say that again: life in prison, not for any act you or others might have committed, not even for incitement of it, but for such abstractions as “advocacy” or “promotion.”

The most remarkable part of this is the timing. At the very moment when everyone and his dog is accusing someone else of genocide, or of promoting it – as Israel’s defenders say of Hamas’s supporters, as the Palestinians’ say of Israel’s, as Ukraine’s say of Russia’s – the government proposes that the penalty for being on the losing side of such controversies should be life in prison? I have my views on these questions, and you have yours, but I would not throw you in jail for your opinions, and I hope you would not do the same to me – not for five years, and certainly not for life.

Earlier this week, writing in the Toronto Star, Rosie DiManno says “Bill C-63 is a mess of a bill, a fatally flawed piece of overreaching legislation that has drawn scorn from, and made weird allies of, Margaret Atwood and Elon Musk. So maladroit that it can’t possibly be fixed — apart from the obvious correction of severing the child protection part from everything else”.

Finally, a commentary by David Thomas, former chief of the Canadian Human Rights Tribunal, says Bill C-63 is “terrible law that will unduly impose restrictions on Canadians’ sacred Charter right to freedom of expression”.

I have also said that there are limits to our speech freedoms. As the (oft-misattributed) expression says, “one’s right to swing their fist ends precisely where the other one’s nose begins.” As CIJA said in its statement on March 6, “We cannot allow mob-driven demonstrations to obstruct our right to participate fully in society.”

There are lines that may not be crossed. Intimidation and threats of physical harm go beyond the bounds of protected speech. But we should be able to find a better balance than what has been proposed in Bill C-63.

As Alan Borovoy espoused, censure, not censor.

Regulatory overreach

The consequences of regulatory overreach are discussed in a recent Truth on the Market blog post. “Lessons in Regulatory Humility Following the DMA Implementation” resonated with me, even before Canada’s Online Harms Act was tabled in Parliament. Peter Menzies and Michael Geist each write about the extreme overreach in Bill C-63, the Online Harms Act.

In response to regulatory overreach, I have been writing about the need for greater humility for almost as long as this blog has been around. Last year, I wrote “Politicians looking to score points with intervention in the digital marketplace should carefully reflect on whether new laws are actually needed.” Seven years ago, I observed “Canada was among the first regulators to set out a light-touch approach to internet regulation” (in 1999).

The Truth on the Market post warns about unintended consequences arising from the European Union’s Digital Markets Act (DMA).

To comply with the DMA, digital platforms will have to adapt their business models, governance, and even their “digital architecture,” which will affect how they provide services and monetize their assets. These changes will be felt not only by the platforms themselves, but also by the services that run on them (whether called “business users” or “complementors”) and by consumers, all of whom will be forced to grapple with new risks or a potential reduction in quality.

Canadians have experienced platforms reducing the quality of user experience in response to government legislation. Facebook removed news from Canadian feeds because of the high costs of complying with the Online News Act.

Apple has warned that aspects of the DMA create new risks for users. “The new options for processing payments and downloading apps on iOS open new avenues for malware, fraud and scams, illicit and harmful content, and other privacy and security threats.”

The reaction to some of the gatekeepers’ announcements regarding their DMA-compliance plans shows how we could quickly be thrown into a downward spiral in which regulations beget more regulations. Once the first layer of regulations fail to yield the desired results, politicians, consumers, and business users demand more regulation. This leads, in the end, to more heavy-handed rules like the aforementioned price controls or structural separations.

Regulations beget more regulation. Former CRTC Vice-chair Peter Menzies warns about the Province of Quebec seeking to create its own streaming rules. Will another layer of regulations increase the availability of French language content? Mr. Menzies warns, “there is a widely held view that should the regulatory burden be viewed as overly cumbersome, many smaller streaming companies might make their services unavailable in Canada. And it’s not entirely out of the question that some large companies could follow suit.”

Legislation and regulations are almost always designed with aspirational objectives. Unfortunately, there is often insufficient analysis of the consequences of regulatory overreach. Truth on the Market warns about jurisdictions rushing to be first with digital market legislation. “Countries that take their time, however, to study markets, perform proper regulatory impact analysis, and enact a serious notice-and-comment process, will be those most able to learn from the experience of other regulators and markets. These regulatory impact analyses should, of course, also consider the possibility that the regulation in question may not be necessary at all”.

As I have written before, “we need to explore policies for the digital economy with the thinking of a chess master”. There is a real need to think at least three or four moves ahead.
