Taming technology authoritarianism

Is it time to tame “technology authoritarianism”?

Is that even possible?

Yesterday, the Canadian government introduced its long-promised Online Harms Act, promising a focus on protecting children and youth from the “dangers of the internet”. I’ll have more on the specifics as the text of the legislation works its way through committee review.

From some of the earliest days of this blog, I have been writing about “Taming the wild west” of what I called the anarchy of the internet. At the time, I had a particular concern with the “fine balance between the right to free speech and the right of individuals not to be the objects of hate and violent speech”.

A recent article in The Atlantic caught my eye. In “The Rise of Technoauthoritarianism”, Adrienne LaFrance claims the technocrats of Silicon Valley are “leading an antidemocratic, illiberal movement” and that government intervention is required.

She writes that she long believed that regulation was unnecessary, “in part because I was not (and am still not) convinced that the government can do so without itself causing harm… I’d much prefer to see market competition as a force for technological improvement and the betterment of society.”

In recent years, it has become clear that regulation is needed, not least because the rise of technocracy proves that Silicon Valley’s leaders simply will not act in the public’s best interest. Much should be done to protect children from the hazards of social media, and to break up monopolies and oligopolies that damage society, and more. At the same time, I believe that regulation alone will not be enough to meaningfully address the cultural rot that the new technocrats are spreading.

Why the epiphany? Why can’t market forces provide sufficient discipline? LaFrance reminds us that Silicon Valley “attracts many immensely talented people” (including half of my kids), working to do good.

Even the most deleterious companies have built some wonderful tools. But these tools, at scale, are also systems of manipulation and control. They promise community but sow division; claim to champion truth but spread lies; wrap themselves in concepts such as empowerment and liberty but surveil us relentlessly.

Read the full article.

I don’t agree with every claim made by LaFrance, but it is a well-written, thought-provoking piece. “Many Americans fret — rightfully — about the rising authoritarianism among MAGA Republicans, but they risk ignoring another ascendant force for illiberalism: the tantrum-prone and immensely powerful kings of tech.”

Two weeks ago, the US Congress summoned the CEOs of leading technology firms to discuss “Big Tech and the Online Child Sexual Exploitation Crisis.” Under pressure during questioning, Meta CEO Mark Zuckerberg at one point apologized to victims: “No one should have to go through the things that your families have suffered.”

Although Canada has not yet introduced its long-promised Online Harms legislation, it has passed two of the bills promised under its Digital Charter:

  • C-11: “The Online Streaming Act modernizes the Broadcasting Act and helps ensure Canadian stories and music are widely available on streaming platforms to the benefit of future generations of artists and creators in Canada.”
  • C-18: “The Online News Act aims to ensure that dominant platforms compensate news businesses when their content is made available on their services.”

But, as I asked last fall, are we “Losing sight of the target”?

And, if indeed it is time to tame “technology authoritarianism”, how do we tame the technocrats?

Technology specific legislation for AI

A couple of months ago, I reminded readers that I generally don’t like technology-specific legislation.

With reference to the concerns about online harms, I wrote “it is reasonable to expect that content that is considered illegal in print media should continue to be considered illegal in digital form.”

A recent article from ITIF discusses explicit AI-generated images. The sharing of non-consensual intimate deepfakes of Taylor Swift has sparked calls for new legislation to deal with images generated by artificial intelligence (AI).

ITIF argues that the heart of the issue predates AI, and indeed can be traced to before the internet era. The underlying issue in this instance is image-based abuse: sharing (or threatening to share) nude or sexual images of another person without consent. Consider Playboy’s publication of Marilyn Monroe nudes, or Hustler’s publication of nude photos of Jacqueline Kennedy Onassis.

What has changed in the internet age is the ease of becoming a global distributor. AI has simplified “photoshop”-style manipulation, making it easy to generate fake images (and videos).

The root problem, independent of technology, is non-consensual sharing of intimate images, which is treated inconsistently by various jurisdictions within the United States.

In Canada, Section 162.1 of the Criminal Code deals with this.

162.1 (1) Everyone who knowingly publishes, distributes, transmits, sells, makes available or advertises an intimate image of a person knowing that the person depicted in the image did not give their consent to that conduct, or being reckless as to whether or not that person gave their consent to that conduct, is guilty

  (a) of an indictable offence and liable to imprisonment for a term of not more than five years; or
  (b) of an offence punishable on summary conviction.

An article last week by former CRTC vice-chair Peter Menzies suggests that a tweak to that Section may provide greater clarity to the phrase “person depicted” in the Criminal Code.

ITIF notes, “Unfortunately, given widespread fears about AI and backlash against the tech industry, some critics are quick to point the finger at AI.” It is worth noting that the usage policies of OpenAI (the company behind ChatGPT) already prohibit “Impersonating another individual or organization without consent or legal right” and “Sexually explicit or suggestive content.”

ITIF argues, “unless policymakers ban generative AI entirely, the underlying technology — which is publicly available to run on a personal computer — will always be around for bad actors to misuse.” For their part, Google and Meta have created tools for users to report unauthorized intimate images.

ITIF suggests that those who distribute nonconsensual intimate images should face significant civil and criminal liability, but these should not be based on technology-specific legislation targeting AI. Legislative solutions need to focus on stopping perpetrators of revenge porn, independent of the technology used for generating or distributing it.

This past Thursday, the FCC adopted a Declaratory Ruling [pdf, 155 KB] that makes it illegal to use voice cloning technology in robocall scams targeting consumers. As the FCC notes, State Attorneys General could already target the outcome of an unwanted AI-voice robocall: the scam or fraud itself could be prosecuted. What changed last week is that the act of placing a robocall with an AI-generated voice is now illegal in itself, without having to prosecute the underlying scam. The FCC’s ruling expands “the legal avenues through which state law enforcement agencies can hold these perpetrators accountable under the law.”

Last month, fake robocalls encouraged voters to skip participating in the New Hampshire primary.

Technology specific legislation for AI? Your thoughts are welcomed.

Dealing with disinformation

What is the best approach for governments to deal with disinformation?

Before the holidays, I asked whether more sophisticated content consumers could help Canada avoid the need to implement online harms restrictions. Can investment in improved digital literacy be effective?

An article in the Financial Times by Bellingcat founder Eliot Higgins argues that “education, not regulation, is the answer.”

As the digital realm’s challenges mount, calls for state-led intervention grow louder. Governments across the world, alarmed by the implications of unbridled platforms, are contemplating regulatory measures to curb the spread of disinformation. But while the intent might be noble, the journey towards state-mediated truth is rife with complexities.

The potential for governmental over-reach is clear. While democratic nations might employ regulations with a genuine intent to combat falsehoods, the same tools could be weaponised by authoritarian regimes to suppress dissent, curtail freedoms and consolidate power. Russia, China, Iran and Venezuela could even use western states’ attempts at countering disinformation as a pretext to justify their own draconian censorship and control of the internet. In such contexts, the line between combating misinformation and controlling narratives becomes precariously thin. The risk? A digital space where genuine discourse is stifled under the guise of regulatory oversight.

He warns that government interventions could “inadvertently exacerbate the very problem they aim to solve.”

“If people perceive these interventions as mere tools to control narratives rather than genuine efforts to combat disinformation, public trust could erode further.”

How do we combat misinformation without impeding digital freedoms? Education is the key.

In short: combat misinformation with good information, and with training in how to distinguish between the good and the bad. Dealing with disinformation requires investment in education.

Addressing the root causes of disinformation requires a grassroots approach. Education stands at the forefront of this strategy. The idea is simple yet transformative: integrate open-source investigation and critical thinking into the curriculum. Equip the youth with the skills to navigate the labyrinthine digital realm, to question, analyse and verify before accepting or sharing information.

The potential of such a grassroots movement doesn’t stop at school gates. Envision a world where universities become hubs of open-source investigation, with national and international networks of students sharing methodologies, tools and insights. As these students move into their professional lives, they carry forward not just skills but a mindset — one that values evidence over hearsay and critical thinking over blind acceptance.

Higgins suggests media organisations could form partnerships with universities, creating “pop-up newsrooms and investigative collectives.”

Such an approach is by no means easy. It will require collaboration among policymakers, news media, technology leaders, educators, and academic institutions, a degree of coordination greater than we have typically seen emerging from Canada’s digital policy framework. But there is a certain urgency in getting started.

In late December, Statistics Canada reported that nearly half of all Canadians found it more difficult to distinguish between true and false information than they did three years earlier.

Let me leave you with a final quote from Higgins: “In a world where any information, regardless of its veracity, is readily accessible, the traditional educational paradigm could be upended. Historical revisionism, fuelled by falsehoods, could reshape collective memories. How does one teach critical thinking in an environment where facts are fluid?”

The year ahead

What is on the agenda for the year ahead?

As we gear back up after the Christmas and New Year’s holidays, it is somewhat customary to look ahead to the coming year.

Here is what tops my list:

  • Driving universal adoption
  • Online harms
  • Regulatory overreach
  • Mandated wholesale access
  • Impacts of investment on coverage and resilience
  • Digital literacy

I published my 2023 reflections in mid-December and indicated that the issue of driving increased adoption would need to be a carry-over to the year ahead. In my agenda for 2023, I wrote “there is a big difference between having universal access to broadband, and attaining universal adoption of that service.”

A number of reports indicate that affordability is not the primary barrier inhibiting broadband adoption. We saw that most recently in Ofcom’s Online Nation report. The UK findings match Canadian data showing that just a quarter of those without a home connection cite the cost of service as the main reason. Across the country (including the far north), service providers have targeted programs to address affordability for disadvantaged households.

Still, too many people do not appreciate the utility of a broadband connection. I sometimes wonder if the social research is sufficiently adept at assessing whether “I don’t have a need for broadband” is a euphemism for “I have other priorities for my limited income”. If a parent has to make difficult choices between name-brand and no-name macaroni and cheese for the family dinner, then maybe a connected computer just isn’t a priority. As various levels of government continue to fund improved broadband access, I believe more needs to be done to understand the factors inhibiting adoption, and then to develop actions to address each of those barriers.

Many of my 2024 agenda items overlap with one another. For example, to what extent have concerns about online safety and cyber security hindered adoption of broadband among those who are not yet connected?

Online harms will be on the agenda for the coming year. Misinformation, disinformation, and hate are significant online challenges. However, in a democratic society, what is the appropriate approach to address harmful forms of expression? As I wrote last year, the government is exploring new legislation, but I am not convinced that this is the appropriate approach.

Will regulatory overreach be overruled by the courts or by changes in government policy? The CRTC is planning substantial organizational growth (30%) to deal with the Online News Act and the Online Streaming Act. Before the holidays, there was an interesting article from the US Chamber of Commerce, “How the FCC’s Regulatory Overreach Impedes Internet for All”.

We have seen how some of the CRTC’s determinations on mandated wholesale access can lead to reduced competition by stifling investment. How does that impact coverage for broadband, advanced wireless services, and investment in increased network resilience?

Improved digital literacy and education seems to be a common theme across many of the items on this year’s agenda. Can improved digital education help reduce vulnerability to certain forms of cyber attacks?

It is shaping up to be a busy year. Hopefully, I’ll have more success checking these items off my list than I have with some of my personal New Year’s resolutions.

What else do you have on your telecom policy plan for 2024?

Top 5 of 2023

Which of my blog posts were the Top 5 in 2023, the ones that attracted the most attention?

Looking at the analytics, these 5 articles had the most individual page views:

  1. “Incubating innovation” [November 22, 2022]
  2. “3800 MHz auction preview” [May 25, 2023]
  3. “The economics of broadband revisited” [March 28, 2023]
  4. “Dealing with online harms” [January 24, 2023]
  5. “#CHPC reviews government funding of antisemitism” [February 16, 2023]

Honourable mentions go to:

Fascinating to see that one of my posts from 2020 continues to attract so much interest from so many readers.

Which posts resonated the most with you? I posted my year-end wrap-up just last week, so it hasn’t had a chance to crack the Top 5… yet!

Thank you for following me here on this blog and on Twitter, and thank you for engaging online and by phone over the past year.

Click here to subscribe to my weekly newsletter, with its digest of the previous week’s blog posts.

I hope the coming holiday period provides an opportunity to connect with your family and friends. Let me reiterate my very best wishes for health, happiness and peace in the year ahead.
