Search Results for: harms

Creating more sophisticated content consumers

Would more sophisticated content consumers help Canada avoid the need to implement online harms restrictions?

In early 2022, I described Finland’s approach, teaching school kids how to process information online, including checking and verifying “news” and “facts” being shared on social media. As the Daily Telegraph wrote at the time, “Teaching and learning about media literacy and critical thinking is a life-long journey. It starts at kindergartens and continues at elementary schools, high schools and universities”.

While the Canadian government has been under pressure to introduce its long-promised Online Harms bill, I continue to wonder if more effort should be focused on teaching critical thinking skills in Canada.

I am doubtful that the government should be in the business of determining what content should be blocked. The current government is not qualified to block information that it judges to be “misinformation”; as I pointed out in late October, the Prime Minister, Foreign Minister and Minister of Innovation all circulated incorrect information that inflamed antisemitism. How can this government judge others’ content when its own information has been harmful?

I am not a fan of technology-specific legislation. At the same time, it is reasonable to expect that content considered illegal in print media should continue to be considered illegal in digital form.

It is extremely challenging to try to block content that is determined to be harmful. Blocking the content in one location simply creates an incentive for it to emerge somewhere else. It becomes a never-ending game of whack-a-mole.

In a recent article on The Hub, Richard Stursberg calls for “the news industry to decouple from social media”, saying “Much of social media is a sewer, polluted with content that claims to be true but is, in fact, disinformation and fake news.” The article argues that credible news gets judged by the company it keeps on social media, compromising Canadians’ confidence and eroding trust in traditional news.

Under the circumstances, the best course might be for the news industry to simply leave social media. It could then set up its own platform, access to which would only be granted to firms that subscribed to a tough code of journalistic ethics like those in place for the CBC, the Globe and Mail, and CTV.

I am not as confident as the author that “It would be a simple matter to set up such a platform.”

Instead, what if we try to develop a society filled with more sophisticated content consumers? Can we create a series of school curricula, from kindergarten through university, to improve digital and media literacy and develop critical thinking?

Such a project would be a long-term investment.

The Oxford Internet Institute recently released a study of nearly 12,000 children in the United States that found no evidence that screen time impacted their brain function or well-being. The abstract for the full study said two hypotheses were being tested: that functional brain organization is related to digital screen engagement; and that children with higher rates of engagement will have functional brain organization profiles related to maladaptive functioning. “Results did not support either of these predictions for [screen media activity].”

While some school boards have been considering whether to remove screens from classrooms, I wonder if a better approach is to focus on programs that teach digital literacy skills, helping kids learn to differentiate between good information and bad and become more informed consumers of digital content.

Can such programs help inoculate Canadians against a wide variety of online harms, including online hate, fraud, misinformation and disinformation?

Creating more sophisticated content consumers will require a longer time horizon and more patience to implement, but will it deliver a better outcome than trying to legislate government controls on freedom of expression?

AI-generated content

Bronwyn Howell recently wrote an article entitled “AI-Generated Content, Fake News and Credible Signals” for AEIdeas that I found to be particularly insightful.

It has been a couple of months since I wrote “Emerging technology policy”, and I think the AEI paper presents some good perspectives.

She writes about the potential for people to be misled by AI-generated content due to what she terms information asymmetry. “Exploiting information asymmetries is not new. Snake oil salesmen and the advertising industry have long been “economical with the truth” to persuade gullible consumers to buy their products.”

In a digital world, consumers of AI-generated content do not necessarily know “whether the content they consume is a factual representation or a digital creation or manipulation, but the publisher does.” Regulations requiring content generated by AI to be labeled as such are intended to help overcome the information asymmetry.

Sometimes, no harm comes from the consumer not knowing. For example, if I am not told the aliens in a sci-fi movie are computer-generated, I am unlikely to be harmed; indeed, my enjoyment may be reduced if I am reminded of this before the movie starts, or if the information is emblazoned across the screen when the aliens are in action. But sometimes harm does come from the consumer not knowing—for example, when a video shows a politician saying or doing things that they did not. Yet even here, it is not clear or straightforward. If someone is lampooning a politician for entertainment purposes, then labelling is likely unnecessary (and even potentially harmful if it detracts from the entertainment experience). But if it is an election advertisement, and the intention is to convince voters that the portrayed events are factual and not fictional, then the asymmetry is material.

Potential harms may not arise from how the content was created, but rather from the intent behind its use. If the content is intended to deceive the consumer, regardless of how the content was created, then we need to examine ways to protect the public.

It may not be sufficient to require labelling of content generated by AI. It can be too easy to lie about its origins, and indeed, labelling may not be necessary if no harm ensues. Instead, the article suggests that regulatory “controls are required for the subset of transactions in which harm may occur from fake content.” She uses the example of election advertising, where rules already exist in most jurisdictions. “This suggests electoral law, not AI controls, are the best place to start managing the risks for this application”.

Do we need technology-specific legislation and regulation? Or should we ensure that existing protections for conventional technologies also apply to content generated by artificial intelligence?

An article on ABC News earlier this week says, “The war in Gaza is highlighting the latest advances in artificial intelligence as a way to spread fake images and disinformation”.

The risk that AI and social media could be used to spread lies to U.S. voters has alarmed lawmakers from both parties in Washington. At a recent hearing on the dangers of deepfake technology, U.S. Rep. Gerry Connolly, Democrat of Virginia, said the U.S. must invest in funding the development of AI tools designed to counter other AI.

A paper [pdf, 300KB] released earlier this week by Joshua Gans of the Rotman School of Management at the University of Toronto asks “Can Socially-Minded Governance Control the AGI Beast?”. Spoiler alert: he concludes (robustly) that it cannot.

The cost of misinformation

What is the cost of misinformation?

We know that there is a real societal cost associated with viral misinformation, but what price do purveyors pay to spread their messages?

It turns out, it is pretty cheap. According to a recent article in Fortune, “For as little as $7, TikTok users can garner thousands of views on TikTok, opening a low-cost pathway to spread propaganda on hot-button topics.” The article discusses a surge in social media misinformation triggered by the Middle East conflict.

In Canada, we have seen senior politicians spreading misinformation in poorly informed social media messages, with the Prime Minister, Foreign Minister and Minister of Innovation all implicating Israel in killing hundreds by bombing a Gaza hospital, when in fact the explosion was caused by a misfired terrorist rocket that didn’t hit the hospital.

It is shameful that none of these three senior politicians has deleted their posts or issued an online clarification. The closest we had was a late-night post by Canada’s National Defence Minister (more than four days later), absolving Israel from blame. With nearly 7 million followers between them, the quick-to-tweet politicians didn’t have to pay for their false messages to go viral. The Prime Minister’s post has been viewed more than 2.6 million times and was reposted by more than 5,000 other users. The Foreign Minister’s post was seen more than 2.2 million times. By way of contrast, the Defence Minister has only 41,500 followers, less than 1% of the Prime Minister’s 6.5 million. By the time his post was issued, the damage was done.

So, how do we measure the cost of misinformation, especially when the misinformation is spread by people who are supposed to know better?

I have written about government regulation of online harms a number of times in the past. A few weeks ago, in “Regulating misinformation”, I asked “What should be the role of government in regulating misinformation?”

The article in Fortune indicates TikTok “has had its share of criticism for propagating problematic content. It has faced multiple lawsuits for surfacing suicide, self-harm and disturbing content to kids, leading to mental health consequences.”

The BBC writes that “TikTok and Meta have been formally told to provide the EU with information about the possible spread of disinformation on their platforms relating to the Israel-Gaza conflict.” Under the terms of the EU’s Digital Services Act, companies must respond by set deadlines.

Recently, the Government of Canada announced that it plans to move forward with a bill addressing “online hate speech and other internet-related harms.” The Government of Canada’s website on Online Safety says “Now, more than ever, online services must be held responsible for addressing harmful content on their platforms and creating a safe online space that protects all Canadians.”

How will the legislation deal with the possibility that the online harms originate with the government itself?

Regulating misinformation

What should be the role of government in regulating misinformation?

That is an important question being considered in Canada and around the world as governments seek solutions to online harms and the spread of misinformation. My own views on the subject have been evolving, as I wrote early this year.

As the Center for News, Technology and Innovation (CNTI) writes, “the credibility of information the public gets online has become a global concern. Of particular importance… is the impact of disinformation – false information created or spread with the intention to deceive or harm – on electoral processes, political violence and information systems around the world.”

It’s important to distinguish between “hate” and that which is “merely offensive”. We may not like encountering offensive content, but does that mean there should be legal restrictions preventing it? Readers have seen me frequently refer to Michael Douglas’ address in Aaron Sorkin’s “The American President”: “You want free speech? Let’s see you acknowledge a man whose words make your blood boil, who’s standing center stage and advocating at the top of his lungs that which you would spend a lifetime opposing at the top of yours.”

My post in January referred to a Newsweek article in which Aviva Klompas and John Donohoe wrote:

The old saying goes, sticks and stones may break my bones, but words will never hurt me. Turns out that when those words are propelled by online outrage algorithms, they can be every bit as dangerous as the proverbial sticks and stones.

When it comes to social media, the reality is: if it enrages, it engages… Eliciting outrage drives user engagement, which in turn drives profits.

But my views are also informed by years living in the United States, a country that has enshrined speech freedoms in its constitution.

As CNTI notes, “Addressing disinformation is critical, but some regulative approaches can put press freedom and human rights at great risk.”

Ben Sperry provides another perspective in a paper soon to be published in the Gonzaga Law Review. “The thesis of this paper is that the First Amendment forecloses government agents’ ability to regulate misinformation online, but it protects the ability of private actors — ie. the social-media companies themselves — to regulate misinformation on their platforms as they see fit.”

The Sperry paper concludes that in the US, regulating misinformation cannot be government mandated. Government could “invest in telling their own version of the facts”, but it has “no authority to mandate or pressure social-media companies into regulating misinformation.”

So, if government can’t mandate how misinformation is handled, by what right can social media companies edit or block content? The author discusses why the “state-action doctrine” protects private intermediaries. According to Sperry, the social media platforms are best positioned to make decisions about the benefits and harms of speech through their moderation policies.

He argues that social media platforms need to balance the interests of users on each side in order to maximize value. This includes setting moderation rules to keep users engaged. That will tend to increase the opportunities for generating advertising revenues.

Canada does not yet have the same history of constitutional protection of speech rights as the United States. However, most social media platforms used here are US tech companies. Any Canadian legislation regulating online misinformation is bound to attract concerns from the United States.

About a year and a half ago, Konrad von Finckenstein and Peter Menzies released a relevant paper for the Macdonald-Laurier Institute. In “Social media responsibility and free speech: A new approach for dealing with ‘Internet Harms’” [pdf, 619KB], the authors say that Canada’s approach to date has missed the mark. “Finckenstein and Menzies note that the only bodies with the ability and legitimacy to combat online harms are social media companies themselves. What is needed is legislation that establishes a regime of responsibility for social media companies.” Their paper proposes legislation that would protect free expression online while confronting disinformation, unrestrained hate speech, and other challenges.

The UK Online Safety Bill continues to work its way through the British Parliament.

Canada already has laws prohibiting the wilful promotion of hatred, as applied in a recent case in Quebec. In that case, a man was convicted of promoting hatred against Jews in articles written for the neo-Nazi website, the Daily Stormer. He was sentenced to 15 months in jail with three years of probation.

Does Canada need to introduce specific online harms legislation?

What is the right approach?

These papers provide perspectives worth consideration by policy makers.

Online disinhibition effect

The online disinhibition effect is a term used by psychologists to describe the tendency of some people, hiding behind online anonymity, to be nasty without fear of repercussions.

In the early days of my blog, there was a piece called “4 degrees of impersonal communications” in which I wrote:

people say things in emails that they would never say to someone over the phone. And, over the phone (especially in a voice message), we seem willing to speak in ways that one would never consider saying face-to-face.

I will add that people say things in anonymous comments on blogs that add a further dimension. Perhaps it is a sign of the indifference associated with mass anonymity.

On Sunday mornings, one of my rituals is to watch CBS Sunday Morning. A few weeks ago, I saw the show rebroadcast a segment from last year that reminded me of my earlier blog post.

In the segment, correspondent David Pogue spoke with professor Mary Aiken, a forensic cyber psychologist who shared four ways online conversations differ from in-person conversations:

  • First, in person we can see each other, reading visual cues and body language; online, we cannot.
  • Second, online exchanges may not take place in real time, leading to the possibility that things are taken out of context or misinterpreted.
  • Third, most online discussions are public, meaning that insults can be amplified, increasing the impact, the shame and the pain.
  • Fourth, online anonymity means no repercussions for being mean, or hurtful.

“Add all this together and you get what psychologists call the online disinhibition effect.”

The segment refers to a report from Paladin Capital Group, “Towards a Safer Nation: The United States ‘Safety Tech’ Market” [pdf, 2.0MB].

A new sector, the online safety technology or ‘Safety Tech’ sector, which complements the existing cybersecurity industry is gaining prominence. This research report has found evidence of an emerging and thriving US Safety Tech sector that aims to deliver solutions to facilitate safer online experiences and protect people from psychological risks, criminal dangers and online harm. Importantly, Safety Tech innovations also have the capacity to protect people from the corrosive effects of misinformation, online harassment, discrimination, and extremism which increasingly threaten democracy and civil society.

What is the difference between cybersecurity and cyber safety? Simply put, cybersecurity primarily focuses on protecting data, systems and networks; cyber safety, or Safety Tech, focuses on protecting people.

As the CBS correspondent says in the segment, “Never in the history of the internet has anyone’s mind been changed by being yelled at”.

As Canada’s Parliament considers legislation to address online harms, can technology be part of the solution? How do we separate the person from the idea?
