Last week, the Government of Canada released a report on “What We Heard: The Government’s proposed approach to address harmful content online”, summarizing the feedback received from its consultation last summer.
I think it is worth reproducing the “Key Takeaways and Executive Summary” in its entirety:
Key Takeaways and Executive Summary
On July 29th, 2021, the Government of Canada published a legislative and regulatory proposal to confront harmful content online for consultation on its website. Interested parties were invited to submit written comments to the Government via email.
Feedback both recognized the proposal as a foundation upon which the Government could build and identified a number of areas of concern.
There was support from a majority of respondents for a legislative and regulatory framework, led by the federal government, to confront harmful content online.
Specifically, respondents were largely supportive of the following elements of the proposed regime:
- A framework that would apply to all major platforms;
- The exclusion of private and encrypted communications and telecommunications services;
- Accessible and easy-to-use flagging mechanisms and clear appeal processes for users;
- The need for platform transparency and accountability requirements;
- The creation of new regulatory machinery to administer and enforce the regime;
- Ensuring that the regulatory scheme protects Canadians from real-world violence emanating from the online space; and
- The need for appropriate enforcement tools to address platform non-compliance.
However, respondents identified a number of overarching concerns including concerns related to the freedom of expression, privacy rights, the impact of the proposal on certain marginalized groups, and compliance with the Canadian Charter of Rights and Freedoms more generally.
These overarching concerns were connected to a number of specific elements of the proposal. Respondents specifically called for the Government to reframe and reconsider its approach to the following elements:
- Apart from major platforms, what other types of online services would be regulated and what the threshold for inclusion would be;
- What content moderation obligations, if any, would be placed on platforms to reduce the spread of harmful content online, including the 24-hour removal provision and the obligation for platforms to proactively monitor their services for harmful content;
- The independence and oversight of new regulatory bodies;
- What types of content would be captured by the regime and how that content would be defined in relation to existing criminal law;
- The proposed compliance and enforcement tools, including the blocking power; and
- Mandatory reporting of content to law enforcement and national security agencies, and associated preservation obligations.
Though respondents recognized that this initiative is a priority, many voiced the view that key elements of the proposal need to be re-examined. Some parties explained that they would require more specificity in order to provide informed feedback, and that a lack of definitional detail would lead to uncertainty and unpredictability for stakeholders.
Respondents signaled the need to proceed with caution. Many emphasized that the approach Canada adopts to addressing online harms would serve as a benchmark for other governments acting in the same space and would contribute significantly to international norm setting.
The issue of dealing with online harms is a priority for this government; it is set out in the objectives within the mandate letters for two Cabinet Ministers. But the issues are complex, and it appears the government – a minority government – is proceeding cautiously.
There are models for Canada to examine in other jurisdictions. Last week, the UK announced that it would be strengthening its online harms legislation to target revenge porn, hate crime, fraud, the sale of illegal drugs or weapons, the promotion or facilitation of suicide, people smuggling and sexual exploitation (terrorism and child sexual abuse were already included).
As I asked last week, how do we ensure that actions to deal with online harms are consistent with Canada’s Charter of Rights and Freedoms, which guarantees “freedom of thought, belief, opinion and expression, including freedom of the press and other media of communication”?
I wrote in late January, “those crafting new laws need to maintain a careful balance… ‘If we do nothing these problems will only get worse. Our children will pay the heaviest price.'”