Resolving content moderation dilemmas

A recent study, published in the “Proceedings of the National Academy of Sciences”, found that most US citizens preferred quashing harmful misinformation over protecting free speech, though there were measurable differences along political lines.

The study may be informative as Canada continues to develop legislation addressing “online harms”.

The scale and urgency of the problems around content moderation became particularly apparent when Donald Trump and political allies spread false information attacking the legitimacy of the 2020 presidential election, culminating in a violent attack on the US Capitol. Subsequently, most major social media platforms suspended Trump’s accounts. After a sustained period of prioritizing free speech and avoiding the role of “arbiters of truth”, social media platforms appear to be rethinking their approach to governing online speech.

In 2020, Meta overturned its policy of allowing Holocaust denial and removed some white supremacist groups from Facebook; Twitter implemented a similar policy soon after. During the COVID-19 pandemic, most global social media platforms took an unusually interventionist approach to false information and vowed to remove or limit COVID-19 misinformation and conspiracies, an approach which may soon undergo another shift. In October 2021, Google announced a policy forbidding advertising content on its platforms that “mak[es] claims that are demonstrably false and could significantly undermine participation or trust in an electoral or democratic process” or that “contradict[s] authoritative, scientific consensus on climate change”. And most recently, Pinterest introduced a new policy against false or misleading climate change information across both content and ads.

Content moderation, whether removing posts or terminating or suspending accounts, is described by the researchers as a moral dilemma: “Should freedom of expression be upheld even at the expense of allowing dangerous misinformation to spread, or should misinformation be removed or penalized, thereby limiting free speech?”

When choosing between removing a post and allowing it to remain online, decision-makers face a choice between two values, public safety and freedom of expression, that cannot be honored simultaneously. “These cases are moral dilemmas: situations where an agent morally ought to adopt each of two alternatives but cannot adopt both”.

The researchers examined public support for these “moral dilemmas” in a survey experiment with 2,564 respondents in the United States. Respondents were asked to indicate whether they would remove problematic social media posts and whether they would take punitive action against the accounts that posted them, in the case of posts containing:

  1. election denial,
  2. anti-vaccination content,
  3. Holocaust denial, and
  4. climate change denial.

Respondents were provided with key information about the user and their post as well as the consequences of the posted misinformation.

The majority of respondents preferred deleting harmful misinformation over protecting free speech. However, respondents were more reluctant to suspend accounts than to remove posts, and they were more likely to do either when the harmful consequences of the misinformation were severe or when the account had posted misinformation repeatedly.

Information about the person behind the account, the posting party’s partisanship, and their number of followers had little to no effect on respondents’ decisions.

Although support for content moderation of harmful misinformation was widespread, it was still a partisan issue. “Across all four scenarios, Republicans were consistently less willing than Democrats or independents to remove posts or penalize the accounts that posted them.”

The type of misinformation was also a factor: climate change denial was removed least often (58%), whereas Holocaust denial was removed most often (71%), closely followed by election denial (69%) and anti-vaccination content (66%).

According to the researchers, their “results can inform the design of transparent rules for content moderation of harmful misinformation.”

“Results such as those presented here can contribute to the process of establishing transparent and consistent rules for content moderation that are generally accepted by the public.”
