Search Results for: harms

Parliamentary committee failures

A British Parliamentary Committee recently released a report discussing Digital Exclusion [pdf, 1.4MB] in the UK. The report was intended to call attention to the “political lethargy” undermining the UK’s ambition to become a technology superpower.

However, the report was criticized as “vague and superficial” in a commentary by Telecoms.com editorial director Scott Bicheno. He observed inconsistencies, such as the claim in the Committee’s press release that there are 7M households without broadband or mobile internet access, while the report itself puts the figure at 1.7M households. The report relies on an Ofcom study that found 6% of UK households had no internet connection, whether fixed or mobile, which corresponds to 1.7M of the UK’s approximately 28M total households. One can only presume the figure in the press release is simply wrong, perhaps the result of a typo.
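
A quick sanity check of the arithmetic, as a minimal sketch using the approximate household total cited in the report, shows how the Ofcom percentage maps to households:

```python
# Sanity check of the Ofcom-derived figure (numbers taken from the report).
total_households_millions = 28.0   # approximate UK household count
share_without_internet = 0.06      # 6% with no fixed or mobile connection

without_internet = total_households_millions * share_without_internet
print(f"~{without_internet:.2f}M households")  # ~1.68M, i.e. roughly 1.7M, not 7M
```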

The Telecoms.com commentary concludes with a reference to the UK’s efforts to create an Online Harms bill, and it finishes with a bite: “If this clumsy legislation is anything to go by it seems the first focus for improving digital skills should be the government itself.”

Unfortunately, Canada’s Parliamentary Industry Committee (INDU) has not distinguished itself with exemplary digital literacy either, as I have discussed many times in the past.

In my view, the parliamentary committee review process is broken. That view was also expressed last fall by University of Ottawa professor Michael Geist. Is there a better example than the Heritage Committee review of Bill C-18, the Online News Act?

Witnesses are frequently given too little time to provide meaningful responses to questions from Members of Parliament. Too often, those questions seem designed mainly to create transcripts for campaign materials in the next election, hoping for a “gotcha” moment that can be edited into a partisan soundbite for Twitter or Facebook.

It is interesting to see that the UK parliamentary committee report on Digital Exclusion can be as poorly crafted as some INDU reports I have critiqued here in Canada (e.g., this one). Indeed, it is sad that so little illumination emerges from so many committee review processes.

As Professor Geist wrote, “I don’t have any obvious solutions. The reality is probably that unless Ministers prioritize accountability and MPs show some independence, nothing will change.” On that note, it has been just over 2 years since I highlighted CRTC funding of a purveyor of hate. The Department of Canadian Heritage ignored warnings until the story went viral. No one has been held accountable. Not one dollar has been recovered. The Heritage Committee review of the case resulted in no report, no admonishments, no accountability. The Ministers responsible for the funding, Rodriguez and Hussen, have retained seats at the Cabinet table.

Of course, there have been exceptions where Committee review created better legislation. The Standing Committee on Citizenship and Immigration recently concluded a productive review of Bill S-245, “An Act to amend the Citizenship Act (granting citizenship to certain Canadians)”. But even there, the Committee review process was nearly derailed by partisan filibustering.

The Parliamentary Committee review process is currently broken.

Why are some committee reviews more productive than others? Why has such success been the exception, not the rule?

Will increased digital engagement drive more partisan polarization, or less?

Mid-term report

As we approach the Canada Day holiday weekend, it is an appropriate time to pause for a mid-term report on the top posts so far this year.

These are the blog posts that attracted the greatest viewership so far.

The most viewed post in the first half of the year was from late 2022. Most of the top viewed articles in the mid-term report are from March or earlier; only one is from the past 5 weeks.

Which subjects are of the greatest interest to you? Which articles have you forwarded to a friend or colleague?

A digital bill of rights

As Canada continues to push forward on its Digital Charter, I noticed an interesting thread looking at Florida’s proposed “digital bill of rights”.

Ben Sperry, of the International Center for Law & Economics, writes:

While it bills itself a “Digital Bill of Rights,” the Florida Senate Bill 262 could actually harm consumers and businesses online by substantially raising the costs of targeted advertising.

For consumers, this would mean less “free” stuff online, as publishers switch from advertising-based to subscription-based models. For businesses, it would mean having less ability to target advertisements to consumers who actually want their products, resulting in less revenue.

Unintended consequences.

In Canada, we have countless examples of overly simplistic analysis of digital issues that fails to consider the logical responses (and counter-responses) of the marketplace to new legislation and regulations.

  • Exhibit 1: CRTC regulations that effectively capped the amortization period for devices at 2 years. The Commission and consumer groups were warned that this would lead to higher monthly prices (how could it not?) but pressed ahead anyway. There were other options that could have permitted portability, but preserved the ability to pay for pricey smartphones over a longer period. A simple sketch of the arithmetic follows this list.
  • Exhibit 2: CRTC banning Videotron’s Unlimited Music and Bell Mobile TV. These innovative services were competitive differentiators, offering new choices to consumers. Rather than letting the marketplace respond with either lower prices or competitive differentiators of its own, the CRTC just said “no”.
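
To illustrate the arithmetic behind Exhibit 1, here is a minimal sketch. The $1,200 handset price and interest-free instalments are assumptions for the example, not figures from the CRTC record:

```python
# Illustrative only: spreading an assumed $1,200 handset over 24 vs. 36 months
# (interest-free) to show why capping amortization at 2 years raises monthly bills.
handset_price = 1200.00  # assumed device cost, in dollars

for term_months in (24, 36):
    monthly_charge = handset_price / term_months
    print(f"{term_months}-month term: ${monthly_charge:,.2f}/month")

# 24-month term: $50.00/month
# 36-month term: $33.33/month -> the 2-year cap adds ~$16.67 per month
```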

It is worth noting that Canada has not yet tabled draft legislation that targets online harms and hate, which has been the subject of numerous posts on these pages (such as here, here, here, here, and here).

Last month, Canada’s Privacy Commissioner lost a high profile case against Facebook parent Meta arising from the Cambridge Analytica “incident”. In its review of the Federal Court’s decision, McCarthy’s law firm writes that the dismissal is “a monumental victory for Meta”, providing “important lessons for businesses about Canadian privacy law”. The note says, “The federal Personal Information Protection and Electronic Documents Act (“PIPEDA”) strikes a balance between individual and organizational interests, and should therefore be interpreted in a flexible, pragmatic, and common-sense way. This means that courts must consider not only the individual’s privacy interests, but also the organization’s legitimate interests in collecting, using, and disclosing personal information for commercial purposes.”

As Canada moves forward with examination of its Digital Charter, it will be critical to maintain this balance of interests. Policy would be more robustly crafted if it anticipated how different actors might respond to legislative and regulatory initiatives.

Will parliamentary review of Canada’s digital bill of rights anticipate potential consumer and commercial consequences arising from the legislation?

Regulatory humility

As governments increase intervention in internet content and services, I wonder if sufficient regulatory humility is being applied.

A recent New York Times article noted, “As companies like Google and Facebook grew into giants in the early 21st century, regulators chose largely not to interfere in the still-young market for online services.” The concern was that regulatory intervention could restrict the development of innovative applications and new business models.

What has changed?

Many internet public intellectuals have long advocated for a free and open internet, which many interpreted as supporting a hands-off approach by governments. However, one of my first blog posts, way back in March 2006, looked at an article by Tom Evslin, who described himself as another voice on “a lonely quest to try to partially tame the anarchy of the internet.”

If the Internet is a law-free zone:

  1. Governments can do whatever they want there including spying and blocking. It’s naïve and illogical to think that governments are governed by law in a free fire zone when no one else is.
  2. Monopolies can do whatever they want including blocking competing services.
  3. Malicious people are free to attack not only other sites but the structure of the Internet itself including its routers and domain name servers.
  4. Threats, libel, and fraud gain immunity from investigation and prosecution by being carried out on the Internet.
  5. The Internet becomes a river in which any conspirator can wade to avoid the bloodhounds of law enforcement.
  6. There are no laws PROTECTING privacy in a law-free zone.
  7. SPAM is as legitimate as any other activity.

The past decade and a half has changed the way we look at the internet. We are more willing to have law enforcement in the digital world. As I have expressed before, my concern has been how we tailor new laws and how we define new standards of acceptable online behaviour.

We have laws developed for the analog world and a body of jurisprudence in their application. We have witnessed the failures of anti-spam and do-not-call legislation. Those laws curtailed activities by legitimate businesses, but we continue to get nuisance calls and loads of unwanted emails. To an extent, instead of regulatory processes, we apply technology to suppress what the legislation was supposed to curtail. We target spam and malicious software with software in the networks and on our devices. Telecom networks are trying to target nuisance calls with technology.
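
As a rough illustration of fighting unwanted traffic with software rather than regulation, consider a deliberately simple, hypothetical keyword-based spam scorer. The terms, weights, and threshold are invented for this sketch; production filters rely on reputation data, machine learning, and protocol-level checks:

```python
# Hypothetical, minimal spam scorer: sums weights of suspicious phrases found
# in a message and flags it when the total crosses an invented threshold.
SUSPICIOUS_TERMS = {"free prize": 3, "wire transfer": 3, "act now": 2, "winner": 1}
SPAM_THRESHOLD = 4

def spam_score(message: str) -> int:
    text = message.lower()
    return sum(weight for term, weight in SUSPICIOUS_TERMS.items() if term in text)

def is_spam(message: str) -> bool:
    return spam_score(message) >= SPAM_THRESHOLD

print(is_spam("WINNER! Act now to claim your free prize"))  # True (score 6)
print(is_spam("Agenda for Thursday's committee meeting"))   # False (score 0)
```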

Still, I wonder if the legislation suffers from over-reach. At the 2017 Canadian Telecom Summit, then FCC Chair Ajit Pai spoke about the need for regulatory humility:

In short, America’s approach to broadband policy will be practical, not ideological. We’ll embrace what works, and dispense with what doesn’t. That means removing barriers to innovation and investment, instead of creating new ones. That means taking targeted action to address real problems in the marketplace, instead of imposing broad preemptive regulations. And that means respecting principles of economics, physics and law, and acting with humility as we regulate one of the most dynamic marketplaces history has ever known. This vision will unleash the massive investments that the digital world demands.

Every regulation, every piece of legislation risks creating harmful unintended consequences. Some regulations can serve as disincentives for investment, slowing down necessary expansion and upgrades to network infrastructure.

These days, it seems Canada’s Parliament never misses an opportunity to wade into some form of telecom regulation. Parliament has crafted laws on somewhat trivial issues, apparently believing it can do better than the specialized independent regulator. As a result, there is legislation on the books mandating paper invoices in a digital world. Why isn’t that part of a regulator’s discretion?

A private member’s bill mandates service transparency that is already part of the Minister’s policy direction. Recall that I recently wrote about risks arising from online harms legislation in various countries.

Politicians looking to score points with intervention in the digital marketplace should carefully reflect on whether new laws are actually needed. What problems are we trying to fix?

A little more regulatory humility goes a long way to minimize unintended consequences.

Resolving content moderation dilemmas

A recent study, published in the “Proceedings of the National Academy of Sciences”, found that most US citizens preferred quashing harmful misinformation over protecting free speech, albeit with measurable differences along political lines.

The study may be informative as Canada continues down the path of developing legislation in respect of “online harms”.

The scale and urgency of the problems around content moderation became particularly apparent when Donald Trump and political allies spread false information attacking the legitimacy of the 2020 presidential election, culminating in a violent attack on the US Capitol. Subsequently, most major social media platforms suspended Trump’s accounts. After a sustained period of prioritizing free speech and avoiding the role of “arbiters of truth”, social media platforms appear to be rethinking their approach to governing online speech. In 2020, Meta overturned its policy of allowing Holocaust denial and removed some white supremacist groups from Facebook; Twitter implemented a similar policy soon after. During the COVID-19 pandemic, most global social media platforms took an unusually interventionist approach to false information and vowed to remove or limit COVID-19 misinformation and conspiracies, an approach which might undergo another shift soon. In October 2021, Google announced a policy forbidding advertising content on its platforms that “mak[es] claims that are demonstrably false and could significantly undermine participation or trust in an electoral or democratic process” or that “contradict[s] authoritative, scientific consensus on climate change”. And most recently, Pinterest introduced a new policy against false or misleading climate change information across both content and ads.

Content moderation, including terminating or suspending accounts, is described by the researchers as a moral dilemma: “Should freedom of expression be upheld even at the expense of allowing dangerous misinformation to spread, or should misinformation be removed or penalized, thereby limiting free speech?”

When choosing between removing a post and allowing a post to remain online, decision-makers face a choice between two values, public safety or freedom of expression, that cannot be honored simultaneously. “These cases are moral dilemmas: situations where an agent morally ought to adopt each of two alternatives but cannot adopt both”.

The researchers examined public support for these “moral dilemmas” in a survey experiment with 2,564 respondents in the United States. Respondents were asked to indicate whether they would remove problematic social media posts and whether they would take punitive action against the accounts in the case of posts with:

  1. election denial,
  2. antivaccination,
  3. Holocaust denial, and
  4. climate change denial.

Respondents were provided with key information about the user and their post as well as the consequences of the posted misinformation.

The majority of respondents preferred deleting harmful misinformation over protecting free speech. However, respondents were more reluctant to suspend accounts than to remove posts, and they were more likely to do either if the harmful consequences of the misinformation were severe, or if it was a repeat offense.

Information about the person behind the account, the posting party’s partisanship, and their number of followers had little to no effect on respondents’ decisions.

Although support for content moderation of harmful misinformation was widespread, it was still a partisan issue. “Across all four scenarios, Republicans were consistently less willing than Democrats or independents to remove posts or penalize the accounts that posted them.”

The type of misinformation was also a factor: Climate change denial was removed the least (58%), whereas Holocaust denial was removed the most (71%), closely followed by election denial (69%) and antivaccination content (66%).

According to the researchers, their “results can inform the design of transparent rules for content moderation of harmful misinformation.”

“Results such as those presented here can contribute to the process of establishing transparent and consistent rules for content moderation that are generally accepted by the public.”
