I have been taking some time to consider (and reconsider) my views on legislation to deal with online harms.
Last week, I had the pleasure of joining MP Anthony Housefather (Liberal – Mount Royal) for an online event entitled “Exposing Antisemitism: Online Research in the Fight Against Jew Hatred”. My presentation looked at “Encountering and Countering Hate”.
I took the attendees through my experience over the past two years of dealing with the online presence of Laith Marouf, a subject that has been canvassed here frequently over that period.
As I described to the webinar attendees, it is important to distinguish between “hate” and what is “merely offensive”. In my view, we may not like encountering offensive content, but that doesn’t mean there should be legal restrictions on it. My readers have seen me frequently refer to Michael Douglas’ address in Aaron Sorkin’s “The American President”.
That said, Mr. Housefather argued that we should examine the algorithms that seem to amplify those messages that elicit visceral emotions and thereby get shared and forwarded by those readers who agree, as well as those who oppose.
Aviva Klompas and John Donohoe wrote “The Wages of Online Antisemitism” in Newsweek last week.
The old saying goes, sticks and stones may break my bones, but words will never hurt me. Turns out that when those words are propelled by online outrage algorithms, they can be every bit as dangerous as the proverbial sticks and stones.
The authors write, “When it comes to social media, the reality is: if it enrages, it engages… Eliciting outrage drives user engagement, which in turn drives profits.”
Next month, the US Supreme Court will hear two cases that challenge certain liability shields for online platforms found in Section 230 of the Communications Decency Act. As described in last Friday’s NY Times:
On Feb. 21, the court plans to hear the case of Gonzalez v. Google, which was brought by the family of an American killed in Paris during an attack by followers of the Islamic State. In its lawsuit, the family said Section 230 should not shield YouTube from the claim that the video site supported terrorism when its algorithms recommended Islamic State videos to users. The suit argues that recommendations can count as their own form of content produced by the platform, removing them from the protection of Section 230.
A day later, the court plans to consider a second case, Twitter v. Taamneh. It deals with a related question about when platforms are legally responsible for supporting terrorism under federal law.
The UK has been examining its Online Safety Bill for nearly two years. Its intent is to “make the internet a safer place for everyone in the UK, especially children, while making sure that everyone can enjoy their right to freedom of expression online”.
Key points the Bill covers
The Bill introduces new rules for firms which host user-generated content, i.e. those which allow users to post their own content online or interact with each other, and for search engines, which will have tailored duties focussed on minimising the presentation of harmful search results to users. Those platforms which fail to protect people will need to answer to the regulator, and could face fines of up to ten per cent of their revenues or, in the most serious cases, being blocked.
All platforms in scope will need to tackle and remove illegal material online, particularly material relating to terrorism and child sexual exploitation and abuse.
Platforms likely to be accessed by children will also have a duty to protect young people using their services from legal but harmful material such as self-harm or eating disorder content. Additionally, providers who publish or place pornographic content on their services will be required to prevent children from accessing that content.
The largest, highest-risk platforms will have to address named categories of legal but harmful material accessed by adults, likely to include issues such as abuse, harassment, or exposure to content encouraging self-harm or eating disorders. They will need to make clear in their terms and conditions what is and is not acceptable on their site, and enforce this. These services will also have a duty to bring in user empowerment tools, giving adult users more control over whom they interact with and the legal content they see, as well as the option to verify their identity.
Freedom of expression will be protected because these laws are not about imposing excessive regulation or state removal of content, but ensuring that companies have the systems and processes in place to ensure users’ safety. Proportionate measures will avoid unnecessary burdens on small and low-risk businesses.
Finally, the largest platforms will need to put in place proportionate systems and processes to prevent fraudulent adverts being published or hosted on their service. This will tackle the harmful scam advertisements which can have a devastating effect on their victims.
I wrote a couple of pieces last year that are worth a second look:
- The need for more diverse perspectives, February 1, 2022
- Testing democratic freedoms, February 22, 2022
I also think back to “Free from online discrimination”, an article I wrote three years ago, when ministerial mandate letters called for the creation of a Digital Charter so that Canadians would have “the ability to be free from online discrimination including bias and harassment.”
Will Canada follow the UK lead in developing our own legislation?
Would a UK-style approach adequately protect our Charter right to freedom of expression?