Creating more sophisticated content consumers

Would more sophisticated content consumers help Canada avoid the need to implement online harms restrictions?

In early 2022, I described Finland’s approach, teaching school kids how to process information online, including checking and verifying “news” and “facts” being shared on social media. As the Daily Telegraph wrote at the time, “Teaching and learning about media literacy and critical thinking is a life-long journey. It starts at kindergartens and continues at elementary schools, high schools and universities”.

While the Canadian government has been under pressure to introduce its long-promised Online Harms bill, I continue to wonder if more effort should be focused on teaching critical thinking skills in Canada.

I am doubtful that the government should be in the business of determining what content should be blocked. The current government is not qualified to block information that it judges to be “misinformation”; as I pointed out in late October, the Prime Minister, Foreign Minister and Minister of Innovation all circulated incorrect information that inflamed antisemitism. How can this government judge others’ content when its own information has been harmful?

I am not a fan of technology-specific legislation. At the same time, it is reasonable to expect that content considered illegal in print media should continue to be considered illegal in digital form.

It is extremely challenging to try to block content that is determined to be harmful. Blocking the content in one location will simply create an incentive for the content to emerge somewhere else. It becomes a never-ending game of whack-a-mole.

In a recent article on The Hub, Richard Stursberg calls for “the news industry to decouple from social media”, saying “Much of social media is a sewer, polluted with content that claims to be true but is, in fact, disinformation and fake news.” The article claims that credible news gets judged by the company it is keeping on social media, compromising Canadians’ confidence, resulting in less trust for traditional news.

Under the circumstances, the best course might be for the news industry to simply leave social media. It could then set up its own platform, access to which would only be granted to firms that subscribed to a tough code of journalistic ethics like those in place for the CBC, the Globe and Mail, and CTV.

I am not as confident as the author that “It would be a simple matter to set up such a platform.”

Instead, what if we try to develop a society filled with more sophisticated content consumers? Can we create a series of school curricula, from kindergarten through university, to improve digital and media literacy and develop critical thinking?

Such a project would be a long term investment.

The Oxford Internet Institute recently released a study of nearly 12,000 children in the United States that found no evidence that screen time impacted their brain function or well-being. The abstract for the full study said there were two hypotheses being tested: that functional brain organization is related to digital screen engagement; and, that children with higher rates of engagement will have functional brain organization profiles related to maladaptive functioning. “Results did not support either of these predictions for [screen media activity].”

While some school boards have been considering whether to remove screens from classrooms, I wonder if a better approach is to focus on programs that teach improved digital literacy skills, learning how to differentiate between good information and bad, and helping kids become more informed consumers of digital content.

Can such programs help inoculate Canadians against a wide variety of online harms, including online hate, fraud, misinformation and disinformation?

Creating more sophisticated content consumers will require a longer time horizon and more patience to implement, but will it deliver a better outcome than trying to legislate government controls on freedoms of expression?

ISED got it right

Is it possible that ISED got it right? As Canada’s 3800 MHz auction closed yesterday, it appears that ISED’s use of “caps” (as contrasted with spectrum set-asides) may have contributed to an auction that kept prices internationally competitive.

The total money raised in the auction was $2.16B, far below Bay Street financial analysts’ expected range of $4B to $10B. The average cost per MHz-pop was $0.29, more than 60% lower than Scotiabank’s pre-auction estimate of $0.70. As I wrote earlier this week, the $0.29 is in line with Australia’s recent auction, which worked out to C$0.26 per MHz-pop.

Recall that only 2 years ago, the 3500 MHz auction raised $8.9B, with an average cost of $3.28 per MHz-pop.
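For readers less familiar with the metric, price per MHz-pop simply divides the amount paid for a licence by the bandwidth (in MHz) multiplied by the population covered. A minimal sketch of the arithmetic (the licence figures below are hypothetical, chosen only to illustrate the calculation):

```python
def price_per_mhz_pop(total_paid: float, bandwidth_mhz: float, population: float) -> float:
    """Dollars paid per MHz of spectrum, per person covered by the licence."""
    return total_paid / (bandwidth_mhz * population)

# Hypothetical licence: $290M paid for 50 MHz covering 20M people
print(round(price_per_mhz_pop(290e6, 50, 20e6), 2))  # 0.29
```

Aggregating this ratio across all licences sold is what yields the auction-wide averages quoted above.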

With significantly higher costs of capital, financial analysts expected bidding to be restrained, but the results came in well below even those expectations. BMO Capital Markets called it “A much more disciplined auction.” Scotiabank said “Finally a spectrum auction that does not break the bank”. TD Securities said, “In short, we are delighted with the outcome of the auction. Each of Rogers, Bell, TELUS, and Quebecor spent materially less than what we and the Street had expected in this mid-band auction.” A note from National Bank credits the cross-band spectrum cap and more available spectrum.

The lower spectrum cost means carriers will be in a better position to invest in physical infrastructure.

A number of carriers released statements last night:

  • Bell: Bell secures the most 5G+ spectrum nationwide with acquisition of 3800 MHz licenses
  • Cogeco: 3800 MHz spectrum auction: Cogeco acquires 99 licenses in Québec and Ontario
  • Rogers: Rogers Acquires 3800 MHz 5G Spectrum Across Canada
  • SaskTel: SaskTel invests $10.2 million to acquire 3800 MHz wireless spectrum as part of its mission to deliver advanced 5G connectivity to customers across Saskatchewan
  • TELUS: TELUS secures critically important 3800 MHz spectrum licences, unleashing the full potential of 5G
  • Videotron: 3800 MHz wireless spectrum auction – Quebecor and Videotron invest nearly $300 million to move forward with Canadian expansion

Reading these, we might conclude that ISED got it right.

In any case, remember, there is a free webinar from the International Telecommunications Society next Thursday (December 7): “Optimizing spectrum auctions”. See you there!

AI-generated content

Bronwyn Howell recently wrote an article entitled “AI-Generated Content, Fake News and Credible Signals” for AEIdeas that I found to be particularly insightful.

It has been a couple months since I wrote “Emerging technology policy” and I think that the AEI paper presents some good perspectives.

She writes about the potential for people to be misled by AI-generated content due to what she terms information asymmetry. “Exploiting information asymmetries is not new. Snake oil salesmen and the advertising industry have long been “economical with the truth” to persuade gullible consumers to buy their products.”

In a digital world, consumers of AI-generated content do not necessarily know “whether the content they consume is a factual representation or a digital creation or manipulation, but the publisher does.” Regulations requiring content generated by AI to be labeled as such are intended to help overcome the information asymmetry.

Sometimes, no harm comes from the consumer not knowing. For example, if I am not told the aliens in a sci-fi movie are computer-generated, I am unlikely to be harmed; indeed, my enjoyment may be reduced if I am reminded of this before the movie starts, or if the information is emblazoned across the screen when the aliens are in action. But sometimes harm does come from the consumer not knowing—for example, when a video shows a politician saying or doing things that they did not. Yet even here, it is not clear or straightforward. If someone is lampooning a politician for entertainment purposes, then labelling is likely unnecessary (and even potentially harmful if it detracts from the entertainment experience). But if it is an election advertisement, and the intention is to convince voters that the portrayed events are factual and not fictional, then the asymmetry is material.

Potential harms may not arise from how the content was created, but rather from the intent behind its use. If the content is intended to deceive the consumer, regardless of how the content was created, then we need to examine ways to protect the public.

It may not be sufficient to require labelling of content generated by AI. It can be too easy to lie about its origins, and indeed, labelling may not be necessary if no harm ensues. Instead, the article suggests that regulatory “controls are required for the subset of transactions in which harm may occur from fake content.” She uses the example of election advertising, where rules already exist in most jurisdictions. “This suggests electoral law, not AI controls, are the best place to start managing the risks for this application”.

Do we need technology-specific legislation and regulation? Or, do we ensure that existing protections for conventional technologies can apply in a world of AI-generated content?

An article on ABC News earlier this week says, “The war in Gaza is highlighting the latest advances in artificial intelligence as a way to spread fake images and disinformation”.

The risk that AI and social media could be used to spread lies to U.S. voters has alarmed lawmakers from both parties in Washington. At a recent hearing on the dangers of deepfake technology, U.S. Rep. Gerry Connolly, Democrat of Virginia, said the U.S. must invest in funding the development of AI tools designed to counter other AI.

A paper [pdf, 300KB] released earlier this week by Joshua Gans of the Rotman School of Management at the University of Toronto asks “Can Socially-Minded Governance Control the AGI Beast?”. Spoiler alert: he concludes (robustly) that it cannot.

Optimizing spectrum auctions

What approach should governments follow for optimizing spectrum auctions?

Access to spectrum is vital for modern digital communications. Radio frequencies are essential for smartphone connectivity, access to the Cloud, and the Internet of Things, as well as potential use cases in autonomous vehicles and artificial intelligence.

Governments use auctions in order to allocate radio spectrum to companies efficiently; these auctions have a significant impact on innovation, economic well-being, and government revenues.

Let’s take a look at the results of the 3500 MHz auctions in Australia and Canada. Last week, Australia’s auction concluded, raising just under AU$725M (roughly C$650M). Similar spectrum was auctioned in Canada 2 years ago, raising C$8.9B – more than 13 times the amount. Canadian carriers paid C$3.28 per MHz-pop; Australia’s auction worked out to C$0.26 per MHz-pop.

Canadian carriers paid billions of dollars in higher spectrum fees than their Australian counterparts. Had Canada’s auction been structured to optimize increased infrastructure investment instead of driving government revenues, what would have been the impact on innovation or economic well-being?

The International Telecommunications Society is hosting its next one-hour webinar on December 7, starting at 10:00am (Eastern), looking at “Optimizing spectrum auctions”. Frequent readers know that I have been a big fan of the ITS webinars (like last month’s AI policy webinar).

Geoffrey Myers, a former economist with Ofcom, and now visiting professor in practice at The London School of Economics and Political Science, will discuss his recent book, Spectrum Auctions: Designing markets to benefit the public, industry and the economy.

The webinar will draw on his extensive experience at the UK’s communications regulator, and his study of the theory and practice of spectrum auctions. Professor Myers will explore the optimization of regulatory design in spectrum auctions, providing insights on the entire spectrum auction process. He will address several critical themes that emerge from his work:

  • How can we continually improve spectrum auction design by learning from successes and failures worldwide?
  • Are there enough analytical tools to consistently guide and support spectrum policy decisions?
  • How can expert advice, extending beyond technical and economic knowledge, shape spectrum policy effectively while considering policymakers’ practical concerns?

This webinar is a valuable resource for regulators, economists, and private sector experts involved in spectrum auction design and bidding strategies. It also provides insights for applied economists, teachers, and advanced students interested in market design and public management.

I hope to see you online. Reserve your complimentary spot today.

How fast is fast enough for broadband?

Just how fast is fast enough for broadband?

I last wrote about this 3 years ago, challenging the myth that universal fibre should be on the national agenda.

A couple of weeks ago, the FCC launched an inquiry [pdf, 155KB] to examine that question. The FCC intends to look at “universal deployment, affordability, adoption, availability, and equitable access to broadband”. The FCC Chair, Jessica Rosenworcel says the intent is to update the US broadband standard (currently 25Mbps down, 3Mbps up) to 100/20 and set a long-term goal for gigabit speeds.

The FCC Chair said that the 25/3 standard “is not only outdated, it masks the extent to which low-income neighborhoods and rural communities are being left offline and left behind.”

However, FCC data shows that 94% of US households had 100 Mbps access available by the end of 2021. According to Eric Fruits of the International Center for Law & Economics (ICLE), “If the FCC wants to increase the number of households with 100/20 Mbps speeds, it should recognize that much of the gap is driven by lower rates of adoption, rather than a lack of access to higher speeds.”

That is a familiar refrain for my readers. “The problem of increased broadband adoption can’t be fixed directly by throwing money at it, but we need to undertake more serious research into those factors that stand in the way of people subscribing to broadband.”

A September brief from ICLE was entitled “Finding Marginal Improvements for the ‘Good Enough’ Affordable Connectivity Program”. ICLE found that “about two-thirds of households without at-home internet have access, but don’t subscribe.” The brief argues that, for households without a broadband subscription, their smartphone internet service may provide a superior “bang for the buck” relative to fixed broadband.

Just as mobile devices have become a substitute for wireline home phones, we need to examine the extent to which smartphones and mobile services are substitutes for home internet connections.

In 2021, Pew Research found that 19% of respondents said the most important reason for not having broadband at home is that their smartphone does everything they need to do online. That study found that 15% of US adults are “smartphone-only” internet users – that is, they have a smartphone, but do not have a home broadband connection.

What is the best approach for encouraging continued broadband investment?

Do regulators need to raise targets? CRTC data shows that more than two-thirds of Canadian broadband subscriptions were already at speeds of 100 Mbps or higher, well above Canada’s broadband objective. Ninety percent of households had access to 100 Mbps service by year-end 2021; more than three quarters of Canadians had access to gigabit speeds.

When there is demand for higher speeds, doesn’t this demonstrate companies will make the necessary investments? As I have said many times before, the future can be brighter for Canadian innovation and investment if the government would try harder to get out of the way.
