Dealing with illegal content

A couple of weeks ago, I wrote about a proposal to create a “Moderation Standards Council” to address how social media platforms deal with and moderate what is termed “harmful content.” I expressed concern about the proposal to create “an institution for content moderation.”

One bold path forward would be to have the CRTC require companies to create this council, a co-regulation approach similar to the Canadian Broadcast Standards Council. The CRTC would mandate the work of the standards council and set specific binding commitments to improve the transparency and accountability of content moderation.

Besides the fact that the CRTC lacks jurisdiction over social media platforms, we need to consider the very high bar that has rightly been set in defining what forms of speech are illegal, as contrasted with speech that someone merely deems to be offensive.

Social media sites are free to set their own acceptable use policies that limit the kinds of content that can be posted. But do those policies fall within the bounds of the legal framework of each jurisdiction in which the platforms operate?

A recent news item from France says that “Digital Affairs Minister Mounir Mahjoubi is now trying to purge social media of the racist bile and other hate speech spewed by often faceless users.” It was reported that he “has vowed heavy fines for online platforms that fail to remove hate speech in the 24 hours after it has been reported by users.”

And now, a report in the Globe and Mail says that the House of Commons Ethics Committee has “recommended imposing a requirement on social-media platforms to remove ‘manifestly illegal content in a timely fashion,’ which includes hate speech, harassment and disinformation.”

Canadian ISPs already block certain classes of content deemed to be illegal, and they do so without the explicit approval contemplated by Section 36 of the Telecommunications Act, which provides: “Except where the Commission approves otherwise, a Canadian carrier shall not control the content or influence the meaning or purpose of telecommunications carried by it for the public.”

Years ago, I wrote a number of pieces dealing with illegal content on the internet. I recall writing about the challenges of conducting public opinion research on the subject.

On a superficial level, if you asked someone on the street if they want their internet service provider to interfere with the content being delivered, I suspect most would immediately answer “No.”

Would the results be the same if the questioner started off by saying: “some ISPs will block spam and viruses from reaching your computer at no extra charge. Is that a valuable service?”

It is pretty clear that there is some content that we want ISPs to block.

Clearly harmful content, like viruses or fraudulent spam, can be considered a form of illegal content, to better differentiate it from what I would term the ‘merely offensive’: a term I like to use for content with which I firmly disagree, but regretfully accept as part of people’s right to be wrong-minded. The challenge is in determining at what point the merely offensive becomes illegal.

In 2006, I wrote about a determination by the Canadian Human Rights Tribunal that identified “hallmarks of material that is more likely than not to expose members of the targeted group to hatred or contempt.”

Perhaps that list could serve as a starting point for defining one particular class of illegal content.

What other forms of content can be identified as having crossed the line? Should Canada be more active in monitoring and requiring the removal of illegal content?