What is the cost of misinformation?
We know that there is a real societal cost associated with viral misinformation, but what price do purveyors pay to spread their messages?
It turns out to be pretty cheap. According to a recent article in Fortune, “For as little as $7, TikTok users can garner thousands of views on TikTok, opening a low-cost pathway to spread propaganda on hot-button topics.” The article discusses a surge in social media misinformation triggered by the Middle East conflict.
In Canada, we have seen senior politicians spread misinformation in poorly informed social media messages. The Prime Minister, Foreign Minister and Minister of Innovation all implicated Israel in killing hundreds by bombing a Gaza hospital, when in fact the explosion was caused by a misfired terrorist rocket that didn’t hit the hospital.
It is shameful that none of these three senior politicians has deleted their posts or issued an online clarification. The closest we had was a late-night post by Canada’s Minister of National Defence, more than 4 days later, absolving Israel of blame. With a combined reach of nearly 7 million followers, the quick-to-tweet politicians didn’t have to pay for their false messages to go viral. The Prime Minister’s post has been viewed more than 2.6 million times and was reposted by more than 5,000 other users. The Foreign Minister’s post was seen more than 2.2 million times. By way of contrast, the Defence Minister has only 41,500 followers, less than 1% of the Prime Minister’s 6.5 million. By the time his post was issued, the damage was done.
So, how do we measure the cost of misinformation, especially when the misinformation is spread by people who are supposed to know better?
I have written about government regulation of online harms a number of times in the past. A few weeks ago, in “Regulating misinformation,” I asked: “What should be the role of government in regulating misinformation?”
The Fortune article notes that TikTok “has had its share of criticism for propagating problematic content. It has faced multiple lawsuits for surfacing suicide, self-harm and disturbing content to kids, leading to mental health consequences.”
The BBC writes that “TikTok and Meta have been formally told to provide the EU with information about the possible spread of disinformation on their platforms relating to the Israel-Gaza conflict.” Under the terms of the EU’s Digital Services Act, companies must respond by set deadlines.
Recently, the Government of Canada announced that it plans to move forward with a bill addressing “online hate speech and other internet-related harms.” The Government of Canada’s website on Online Safety says “Now, more than ever, online services must be held responsible for addressing harmful content on their platforms and creating a safe online space that protects all Canadians.”
How will the legislation deal with the possibility that the online harms originate with the government itself?