
Social media harms

The way social media harms our kids has been in the news lately. I am not talking about the Online Harms Act, which has been the subject of a number of my recent posts. I will also not be talking (at least not in this post) about the inappropriateness of the Governor General hosting a forum about Online Harms when a bill is being reviewed by Parliament.

Four of the largest school boards in Canada launched a lawsuit against the owners of Facebook, Instagram, Snapchat and TikTok. The suit accuses them of “negligently designing products that disrupt learning and rewire student behaviour while leaving educators to manage the fallout.” The school boards are seeking $4.5 billion and asking for a redesign of the platforms “to keep students safe.” They are represented by the personal injury firm Neinstein LLP, which has taken the case on contingency. More than 200 school boards in the US have launched similar suits.

The current discussion of social media harms is hardly opening up a new topic.

Eight years ago, I wrote “Is Social Media Better At Breaking Than Making?” That post referred to a Tom Friedman piece in the New York Times (“Social Media: Destroyer or Creator?”). It also included a TED Talk by Wael Ghonim, a former Google employee in Egypt whose Facebook page was credited with helping launch the Arab Spring. In his talk, Ghonim says “Five years ago, I said, ‘If you want to liberate society, all you need is the Internet.’ Today, I believe if we want to liberate society, we first need to liberate the Internet.”

The talk is worth watching. In my view, it stands the test of time.

But, let’s return to that school board lawsuit. The claim is that these social media platforms “rewire student behaviour while leaving educators to manage the fallout.”

A new book by Jonathan Haidt is attracting some attention on this theme of “rewiring”. “The Anxious Generation: How the Great Rewiring of Childhood is Causing an Epidemic of Mental Illness” was released last month. He claims that social media platforms are responsible for “displacing physical play and in-person socializing.” How? By “designing a firehose of addictive content that entered through kids’ eyes and ears”. In doing so, “these companies have rewired childhood and changed human development on an almost unimaginable scale”.

A critical review of the book in Nature triggered a lengthy rebuttal on Twitter.

The author of the Nature review is Candice Odgers, associate dean for research and professor of psychological science and informatics at the University of California, Irvine. She has a distinctly Canadian connection: Odgers co-leads international networks on child development for the Canadian Institute for Advanced Research in Toronto. She says the science does not support the thesis that digital technologies are rewiring children’s brains and causing “an epidemic of mental illness.” According to Odgers, Haidt’s work confuses correlation with causation. Specifically, she says studies have not found that social-media use predicts or causes depression; rather, the research shows that those who already have mental-health problems use such platforms differently than others do.

Haidt’s response, a 984-word, 6,311-character post on Twitter (X), has attracted more than 1.5 million views. (I remember when tweets were restricted to a maximum of 140 characters.) His post includes links to collections of resources referenced in his book.

That academic debate is certain to continue.

How should social media platforms be regulated? What roles should schools, teachers, and parents assume in managing kids’ use of devices and apps?

The safety of regulating online harms

Will government regulations addressing online harms make it less safe to be online?

That is what the heads of a number of messaging apps have told the UK government. A recent article on Telecoms.com reports that executives from WhatsApp, Signal, Viber, Element, OPTF, Threema, and Wire have signed an open letter calling on the UK to rethink its Online Safety Bill (which I have discussed previously).

“The UK Government must urgently rethink the Bill, revising it to encourage companies to offer more privacy and security to its residents, not less.”

The companies warned that as “currently drafted, the Bill could break end-to-end encryption, opening the door to routine, general and indiscriminate surveillance of personal messages of friends, family members, employees, executives, journalists, human rights activists and even politicians themselves, which would fundamentally undermine everyone’s ability to communicate securely.”

Writing about various states’ legislative bills aimed at protecting youth from harms on social media, Ben Sperry of the International Center for Law & Economics opined, “It’s understandable that legislators would seek solutions to address the perceived harm that social-media usage may cause, especially for teen girls. But where these proposals go wrong is in substituting lawmakers’ own preferences for the decisions of parents and teens on how and when to best use social media.” Sperry says the predictable result would be for social media platforms to invest more in excluding teens from their platforms altogether, rather than investing in creating safe spaces for them to connect and learn online. “This may even be the goal of some legislators, but it’s not beneficial to teens, parents, or society in the long run.”

Similar to what we have seen with digital legislation in Canada, the UK’s review of the Online Safety Bill has been deeply polarizing. The author of the Telecoms.com article is pretty clear about his views on the legislation. “Given the degree of technological and ethical illiteracy shown in the drafting of this bill and its passage through the House of Commons, there seems little hope that the Lords will understand what’s at stake. But we’d be delighted to be proved wrong and cross our fingers that the Bill is returned to the government with lots of red ink on it.”

Dealing with online harms

I have been taking some time to consider (and reconsider) my views on legislation to deal with online harms.

Last week, I had the pleasure of joining MP Anthony Housefather (Liberal – Mount Royal) in participating in an online event entitled “Exposing Antisemitism: Online Research in the Fight Against Jew Hatred”. My presentation looked at “Encountering and Countering Hate”.

I took the attendees through my experience over the past two years of dealing with the online presence of Laith Marouf, a subject that has been canvassed here frequently over that period.

As I described to the webinar attendees, it is important to distinguish between “hate” and what is “merely offensive”. In my view, we may not like encountering offensive content, but that doesn’t mean there should be legal restrictions on it. My readers have seen me frequently refer to Michael Douglas’ address in Aaron Sorkin’s “The American President”.

That said, Mr. Housefather argued that we should examine the algorithms that seem to amplify messages eliciting visceral emotions, messages that then get shared and forwarded both by readers who agree and by those who oppose.

Aviva Klompas and John Donohoe wrote “The Wages of Online Antisemitism” in Newsweek last week.

The old saying goes, sticks and stones may break my bones, but words will never hurt me. Turns out that when those words are propelled by online outrage algorithms, they can be every bit as dangerous as the proverbial sticks and stones.

The authors write, “When it comes to social media, the reality is: if it enrages, it engages… Eliciting outrage drives user engagement, which in turn drives profits.”

In the next month, the US Supreme Court will be examining a couple of cases that challenge certain shields for online platforms found in Section 230 of the Communications Decency Act. As described in last Friday’s NY Times:

On Feb. 21, the court plans to hear the case of Gonzalez v. Google, which was brought by the family of an American killed in Paris during an attack by followers of the Islamic State. In its lawsuit, the family said Section 230 should not shield YouTube from the claim that the video site supported terrorism when its algorithms recommended Islamic State videos to users. The suit argues that recommendations can count as their own form of content produced by the platform, removing them from the protection of Section 230.

A day later, the court plans to consider a second case, Twitter v. Taamneh. It deals with a related question about when platforms are legally responsible for supporting terrorism under federal law.

The UK has been examining its Online Safety Bill for nearly two years. Its intent is to “make the internet a safer place for everyone in the UK, especially children, while making sure that everyone can enjoy their right to freedom of expression online”.

Key points the Bill covers

The Bill introduces new rules for firms which host user-generated content, i.e. those which allow users to post their own content online or interact with each other, and for search engines, which will have tailored duties focussed on minimising the presentation of harmful search results to users.

Those platforms which fail to protect people will need to answer to the regulator, and could face fines of up to ten per cent of their revenues or, in the most serious cases, being blocked.

All platforms in scope will need to tackle and remove illegal material online, particularly material relating to terrorism and child sexual exploitation and abuse.

Platforms likely to be accessed by children will also have a duty to protect young people using their services from legal but harmful material such as self-harm or eating disorder content. Additionally, providers who publish or place pornographic content on their services will be required to prevent children from accessing that content.

The largest, highest-risk platforms will have to address named categories of legal but harmful material accessed by adults, likely to include issues such as abuse, harassment, or exposure to content encouraging self-harm or eating disorders. They will need to make clear in their terms and conditions what is and is not acceptable on their site, and enforce this.

These services will also have a duty to bring in user empowerment tools, giving adult users more control over whom they interact with and the legal content they see, as well as the option to verify their identity.

Freedom of expression will be protected because these laws are not about imposing excessive regulation or state removal of content, but ensuring that companies have the systems and processes in place to ensure users’ safety. Proportionate measures will avoid unnecessary burdens on small and low-risk businesses.

Finally, the largest platforms will need to put in place proportionate systems and processes to prevent fraudulent adverts being published or hosted on their service. This will tackle the harmful scam advertisements which can have a devastating effect on their victims.

I wrote a couple of pieces last year that are worth a second look:

I also think back to “Free from online discrimination”, an article I wrote three years ago when ministerial mandate letters called for the creation of a Digital Charter so that Canadians would have “the ability to be free from online discrimination including bias and harassment.”

Will Canada follow the UK lead in developing our own legislation?

Does a UK-style approach adequately protect our Charter freedom of expression?

Cancel culture and online harms legislation

A recent interview on The Hub caught my eye: “Cancel culture comes to the classroom: Professor Deborah Appleman on how teachers are navigating the new culture wars.”

In the article, Sean Speer and Professor Appleman discuss “how culture war politics are intruding into the classroom.”

I had been reading a number of articles on similar themes, such as the blowback against cancel culture at Yale Law School by some US federal judges who will no longer offer clerkships to graduates of the program (see “Yale Law clerk boycott now up to a dozen federal jurists”), and a ban by nine student groups associated with Berkeley’s law school on speakers who are from Israel or who support Zionism (see “Leading US Jewish groups blast Berkeley Law school amid anti-Zionism uproar”).

As Professor Appleman describes, “this pressure of canceling, this culture war, is coming from both liberals and conservatives.”

Classroom teachers are used to conservative critics who think that the books that teachers choose are inappropriate because of profane language or explicit sexual content. We’ve been dealing with that with support from the American Library Association, and we’re about to celebrate Banned Books Week coming up.

We’re sort of used to that. What we’re not used to is the canceling that’s coming from the Left, canceling because of problematic portrayals, because of use of offensive language, and canceling because someone has made a judgment about the appropriateness of the life of an author, for example, Sherman Alexie, and the degree to which that author’s behaviour should keep us from teaching their books. It’s a particular moment in time where we’re being pressed from both sides. And that, of course, in the United States is exacerbated by a lot of movements, a lot of anti-gay movements, by movements of critical race theory, even though the people who talk about it don’t really exactly know what it is, a real backlash.

On one hand, we don’t want to have kids read things in our classroom that perpetuate harm.

On the other hand, the purpose of reading literature is to unsettle you, is to hurt you in some ways, and is, maybe, in my opinion, most importantly, giving you the opportunity to feel the hurt of other people.

I encourage you to read the entire interview, including the discussion of the “need to confront ideas or arguments that [students] may find distasteful or even offensive as part of the process of learning.”

And that brings me back to my concern about legislation being considered to address the issue of online harms.

Recall that the mandate letters for the Minister of Canadian Heritage and for the Minister of Justice and Attorney General each contain a section calling for the Ministers “to develop and introduce legislation as soon as possible to combat serious forms of harmful online content to protect Canadians and hold social media platforms and other online services accountable for the content they host”.

As I wrote two months ago, I have had concerns from the outset about plans to create new legislation addressing online hate: attempts to establish a regime that defines what constitutes online harms and places limits on our freedom of expression.

Then there is the case of Twitter suspending Laith Marouf, and the subsequent withdrawal of a government consulting contract for him to conduct workshops to develop an anti-racism media strategy. Frequent readers are familiar with this case; others can refer to the reading list at the bottom of my September 6 post. As I wrote in August:

That said, let’s examine a very current situation: a Montreal-based consultant who refers to Jews as “loud mouthed bags of human feces”, and threatens “Jews with a bullet to the head” (as highlighted in a Twitter stream last Friday by journalist Jonathan Kay).

Was this hateful or merely offensive? To me, it’s pretty clear that this kind of commentary crossed the line.

But we don’t actually need to consider whether or not Laith Marouf’s comments would survive Canadian Heritage’s prospective Online Harms legislation. Legal or not, it seems pretty inexcusable that this same department of the Canadian government has been providing funding to him.

It is worth noting that it didn’t require new legislation to deal with this case. Marouf was found to have violated Twitter’s terms of service and was suspended (again) without government intervention. Following public exposure, the government cancelled the funding agreement.

Canadian Heritage, the department charged with developing legislation to combat serious forms of harmful online content, found itself having funded a purveyor of the kind of content it was supposed to combat. The Minister’s mandate is to hold social media platforms accountable for the content they host, but two months after the story became widely known, no one has yet been held to account for the department’s failure to conduct proper due diligence before awarding the contract to CMAC.

I am doubtful of this government’s ability to introduce legislation that balances concerns about online harms with our Charter freedom of expression. Still, it is important to note that there are certain limits on our right to freely express our views. And the Charter clearly doesn’t include a right to government funding for those with a habit of spewing vile messages.

Regulating online harms

Last week, the Government of Canada released a report on “What We Heard: The Government’s proposed approach to address harmful content online”, summarizing the feedback received from its consultation last summer.

I think it is worth reproducing the “Key Takeaways and Executive Summary” in its entirety:

Key Takeaways and Executive Summary

On July 29th, 2021, the Government of Canada published a legislative and regulatory proposal to confront harmful content online for consultation on its website. Interested parties were invited to submit written comments to the Government via email.

Feedback both recognized the proposal as a foundation upon which the Government could build and identified a number of areas of concern.

There was support from a majority of respondents for a legislative and regulatory framework, led by the federal government, to confront harmful content online.

Specifically, respondents were largely supportive of the following elements of the proposed regime:

  • A framework that would apply to all major platforms;
  • The exclusion of private and encrypted communications and telecommunications services;
  • Accessible and easy-to-use flagging mechanisms and clear appeal processes for users;
  • The need for platform transparency and accountability requirements;
  • The creation of new regulatory machinery to administer and enforce the regime;
  • Ensuring that the regulatory scheme protects Canadians from real-world violence emanating from the online space; and
  • The need for appropriate enforcement tools to address platform non-compliance.

However, respondents identified a number of overarching concerns including concerns related to the freedom of expression, privacy rights, the impact of the proposal on certain marginalized groups, and compliance with the Canadian Charter of Rights and Freedoms more generally.

These overarching concerns were connected to a number of specific elements of the proposal. Respondents specifically called for the Government to reframe and reconsider its approach to the following elements:

  • Apart from major platforms, what other types of online services would be regulated and what the threshold for inclusion would be;
  • What content moderation obligations, if any, would be placed on platforms to reduce the spread of harmful content online, including the 24-hour removal provision and the obligation for platforms to proactively monitor their services for harmful content;
  • The independence and oversight of new regulatory bodies;
  • What types of content would be captured by the regime and how that content would be defined in relation to existing criminal law;
  • The proposed compliance and enforcement tools, including the blocking power; and
  • Mandatory reporting of content to law enforcement and national security agencies or preservation obligations.

Though respondents recognized that this initiative is a priority, many voiced that key elements of the proposal need to be re-examined. Some parties explained that they would require more specificity in order to provide informed feedback and that a lack of definitional detail would lead to uncertainty and unpredictability for stakeholders.

Respondents signaled the need to proceed with caution. Many emphasized that the approach Canada adopts to addressing online harms would serve as a benchmark for other governments acting in the same space and would contribute significantly to international norm setting.

The issue of dealing with online harms is a priority for this government; it is set out in the objectives within the mandate letters for two Cabinet ministers. But the issues are complex, and it appears the government, a minority government, is proceeding cautiously.

There are models for Canada to examine in other jurisdictions. Last week, the UK announced that it would be strengthening its online harms legislation to target revenge porn, hate crime, fraud, the sale of illegal drugs or weapons, the promotion or facilitation of suicide, people smuggling and sexual exploitation (terrorism and child sexual abuse were already included).

As I asked last week, how do we ensure that actions to deal with online harms are consistent with Canada’s Charter of Rights and Freedoms, which guarantees “freedom of thought, belief, opinion and expression, including freedom of the press and other media of communication”?

I wrote in late January, “those crafting new laws need to maintain a careful balance… ‘If we do nothing these problems will only get worse. Our children will pay the heaviest price.'”
