AI Governance: Why Canada Needs to Get This Right

On May 11, the Ivey Business School is convening a half‑day workshop in Toronto that cuts directly to the heart of these questions: AI Governance: New Tradeoffs for Sovereignty, Trust and Sustainability. It’s the latest in a long‑running series of telecom and digital policy workshops that have become important convening opportunities for Canada’s policy, academic, and industry communities.

Canada is entering a decisive moment in the evolution of artificial intelligence. The conversation is no longer just about models, innovation, or regulation in isolation. AI is becoming infrastructure—built on data, compute, networks, and energy systems—and the choices we make now will shape our economic resilience, our competitiveness, and our sovereignty for decades.

Three forces are converging in ways that demand fresh thinking:

  • AI Sovereignty — As AI systems consolidate around global platforms, Canada must decide what it needs to control—data, compute, models, or something else—to remain a credible middle power in a shifting geopolitical landscape.
  • AI Trust — With agentic AI accelerating, business models and regulatory frameworks must evolve to ensure transparency, accountability, and public confidence.
  • AI Sustainability — AI’s energy and carbon footprint is rising fast. Grid resilience, climate alignment, and sustainable infrastructure design are no longer side issues—they’re core to long‑term viability.

These are the new fault lines shaping investment, innovation, and Canada’s national AI strategy.

The workshop will explore issues that telecom and digital policy leaders are already grappling with:

  • How should Canada translate AI governance principles into practical levers for sovereignty, trust, and sustainability?
  • What does a “Canadian profile” in AI governance look like between the U.S.’s industry‑driven approach and the EU’s risk‑management model?
  • How should business models adapt as trust becomes a competitive differentiator?
  • What are the real risks of an AI‑driven productivity paradox—and how do we avoid locking in the wrong infrastructure choices?
  • How should we measure sovereignty, trust, and sustainability in the AI stack?

These are just some of the questions that could define the next decade of telecom, digital infrastructure, and national competitiveness. This workshop provides an opportunity to hear global perspectives through a Canadian lens and shape the conversation on AI governance.

It is worth noting that the White House released its National AI Legislative Framework last Friday. This framework addresses six key objectives:

  1. Protecting Children and Empowering Parents: Parents are best equipped to manage their children’s digital environment and upbringing. The Administration is calling on Congress to give parents tools to effectively do that, such as account controls to protect their children’s privacy and manage their device use. The Administration also believes that AI platforms likely to be accessed by minors should implement features to reduce potential sexual exploitation of children or encouragement of self-harm.
  2. Safeguarding and Strengthening American Communities: AI development should strengthen American communities and small businesses through economic growth and energy dominance. The Administration believes that ratepayers should not foot the bill for data centers, and is calling on Congress to streamline permitting so that data centers can generate power on site, enhancing grid reliability. Congress should also augment Federal government ability to combat AI-enabled scams and address AI national security concerns.
  3. Respecting Intellectual Property Rights and Supporting Creators: The creative works and unique identities of American innovators, creators, and publishers must be respected in the age of AI. Yet, for AI to improve it must be able to make fair use of what it learns from the world it inhabits. The Administration is proposing an approach that achieves both of these objectives, enabling AI to thrive while ensuring Americans’ creativity continues propelling our country’s greatness.
  4. Preventing Censorship and Protecting Free Speech: The Federal government must defend free speech and First Amendment protections, while preventing AI systems from being used to silence or censor lawful political expression or dissent. AI cannot become a vehicle for government to dictate right and wrong-think. The Administration is proposing guardrails to ensure that AI can pursue truth and accuracy without limitation.
  5. Enabling Innovation and Ensuring American AI Dominance: The Administration is calling on Congress to take steps to remove outdated or unnecessary barriers to innovation, accelerate the deployment of AI across industry sectors, and facilitate broad access to the testing environments needed to build and deploy world-class AI systems.
  6. Educating Americans and Developing an AI-Ready Workforce: The Administration wants American workers to participate in and reap the rewards of AI-driven growth, encouraging Congress to further workforce development and skills training programs, expanding opportunities across sectors and creating new jobs in an AI-powered economy.

Kristian Stout of the International Center for Law & Economics released a commentary on the Truth on the Market blog, calling it “a welcome set of guidelines … a light-touch federal approach, grounded in existing legal doctrines, and focused on harms rather than speculative risks. Whether Congress can translate that posture into durable legislation remains an open question. But as a statement of direction, the framework gets more right than wrong.”

Join me for AI Governance: New Tradeoffs for Sovereignty, Trust and Sustainability, on the afternoon of Monday May 11, 2026 at Ivey’s Donald K. Johnson Centre in downtown Toronto. Registration is open now.

Australia’s NBN provides a lesson in economics

Australia’s NBN has been the subject of numerous posts on these pages. NBN Co’s latest half‑year results [pdf, 2.1MB] offer a clear signal that Australia’s long (and often messy) transition from copper to fibre is finally tipping into a new phase. The headline numbers are solid enough (revenue up 2%, EBITDA up 5%), but a hidden story lies beneath the financials: a behavioural shift by Australian broadband users. It’s a shift that echoes themes I’ve written about before: the disconnect between the price of telecom services and service provider ARPU (Average Revenue Per User).

The most striking figure in the report is the tenfold jump in customers on 500 Mbps and above, from 3% to 31% in just twelve months. That’s not incremental growth; that’s a structural pivot in how households consume connectivity. It validates what I argued in earlier posts about the NBN’s design compromises: Australians were never “satisfied” with slower speeds — they were constrained by the economics of a network built around copper bottlenecks. Once the value equation changed, behaviour changed with it.

This is where the economics get interesting. NBN Co’s Accelerate Great initiative effectively boosted speeds for a third of customers at no additional wholesale cost. In other words, effective prices fell, yet residential ARPU rose by $3 to $52. That’s the paradox I’ve highlighted before in the context of Australia’s NBN and other wholesale fibre markets: when you give customers more for the same price, they don’t simply pocket the savings; many migrate up the value chain. Faster tiers become the new baseline, usage expands, and the network becomes more central to daily life.
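The tier‑migration arithmetic can be sketched with a toy model. Everything below is an illustrative assumption rather than NBN Co data, apart from the reported shift of the 500 Mbps+ share from 3% to 31%; the point is simply that blended ARPU rises even when no individual tier price changes.

```python
# Toy model of the tier-migration effect described above.
# Tier prices and share splits are hypothetical assumptions,
# not NBN Co figures, except the 3% -> 31% high-speed shift.

def blended_arpu(tier_share: dict[str, float], tier_price: dict[str, float]) -> float:
    """Weighted-average revenue per user across speed tiers."""
    return sum(tier_share[t] * tier_price[t] for t in tier_share)

# Hypothetical wholesale prices per tier (AUD/month).
prices = {"basic": 45.0, "mid": 50.0, "fast": 60.0}

# Before: 3% of customers on the fast (500 Mbps+) tier.
before = {"basic": 0.47, "mid": 0.50, "fast": 0.03}
# After: 31% on the fast tier, drawn mostly from the mid tier.
after = {"basic": 0.39, "mid": 0.30, "fast": 0.31}

print(round(blended_arpu(before, prices), 2))  # 47.95
print(round(blended_arpu(after, prices), 2))   # 51.15
```

With no change to any tier’s price, blended ARPU rises by roughly $3 in this sketch, purely because customers moved to the tier they now find worth paying for, which mirrors the pattern in NBN Co’s results.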

This is a dynamic we’ve seen in other markets, including Canada: increasing speeds with the latest technology improves the value proposition, delivering more value to consumers and network operators alike. Lower operating costs, fewer faults, and a multi‑decade asset life create room for service providers to improve value without undermining revenue. NBN Co’s 7% drop in operating expenses and 15% reduction in direct network costs are dividends of replacing copper with glass.

The milestone of three million FTTP customers — and one million copper‑to‑fibre upgrades completed — marks a symbolic turning point for Australia’s NBN. A decade ago, fibre‑to‑the‑node was sold as a pragmatic compromise. Today, it’s being quietly retired, with 47,000 premises upgrading every month and the final 622,000 FTTN lines scheduled for completion by 2030. Australia’s political debate may have faded, but the engineering logic of increased investment has prevailed.

What’s equally notable is the shift in business demand. Nearly half of business customers are now on high‑speed tiers, driven by cloud workloads, AI tools, and the growing need for symmetrical bandwidth. The download‑to‑upload ratio for business is already 2:1—far closer to enterprise patterns than residential ones. That’s another indicator of a market moving up the curve, not down.

All of this reinforces a broader lesson: when networks remove friction — whether technological or economic — customers respond with higher engagement, higher usage, and indeed, higher ARPU. ARPU rises not because prices went up, but because customers see greater value in the next tier at a lower effective price. It’s a reminder that affordability and revenue growth are not mutually exclusive.

That phenomenon applies in Canada as well. At the recent Scotiabank TMT investor conference, TELUS CFO Doug French said success is based on being relevant to customers. As the NBN data demonstrates, value, not headline price, drives broadband behaviour. When networks deliver more speed, more reliability, and more headroom for emerging applications, households and businesses naturally migrate upward. The result is a healthier revenue mix, lower operating costs, and the financial ability for network operators to invest in platforms that can sustain the next decade of digital demand.

Canada is already deep into this transition to fibre, but there is a need to ensure the government policy environment encourages continued network investments. Yesterday, Ofcom (the UK telecom regulator) released a policy statement, “Promoting competition and investment in fibre networks: Telecoms Access Review 2026-31”. Most significantly, at paragraph 2.12, it states, “Our strategy is to promote investment in gigabit-capable networks through network competition in areas where this is viable. We consider that network competition brings potentially significant benefits to consumers, compared to competition based on regulated access to wholesale services provided by a single network.”

It continues in paragraph 2.12, “Network competition creates stronger incentives to attract and retain customers by offering them the services they want, and so is a more effective spur for innovation and investment in high quality networks than access-based competition. This is because network providers have much greater scope for product differentiation and can strive to win customers and generate higher margins by offering a better service than their competitors.”

The Ofcom policy statement, promoting competition and investment in fibre, will be worth further examination. I have already expressed concerns that Canada’s current regulatory framework is inhibiting investment. In a blog post yesterday, Ted Woodhead noted “there is a net reduction in total industry Capex which is a trend that Canadians hoping for better service, or any service at all, should find deeply disturbing.” A report from Scotiabank yesterday repeated previous advice for incumbents to materially reduce capital expenditures given the current regulatory climate.

Fibre is more than just a technology upgrade. It is an enabler for an economic reset, helping align network capabilities with customer expectations and needs for an AI-driven digital economy.

As we’ve seen in the results from Australia’s NBN, that alignment is where quality, coverage, affordability, and investment can coexist.

Improving productivity in Canada

The Parliamentary Industry and Technology Committee (INDU) released a report last week entitled Improving Productivity in Canada [pdf, 3.3MB].

The report contains a number of recommendations of interest to the telecom sector. Many will focus on Recommendation 34: That the Government of Canada review measures related to competition policy in the telecommunications, transportation and financial services sectors to strengthen competition in Canada.

In my view, there are two important prerequisite recommendations to such a review:

  • Recommendation 19: That the Government of Canada work with provinces and territories to reduce the regulatory burden by addressing irritants systemically rather than individually to improve the overall impact of regulatory decisions, support innovation, encourage investment in Canada, and bring down prices for consumers. This may include but is not limited to:
    • streamlining regulations surrounding domestic food processing and manufacturing, to encourage investment in Canada;
    • adopting legislation requiring all federal regulatory agencies to explicitly consider competitiveness and business growth in the performance of their duties by rigorously assessing the potential impacts of regulatory decisions on economic growth beforehand, rather than as an afterthought;
    • expanding the scope of the Red Tape Reduction Act by reducing or eliminating the exemptions it currently provides; and
    • establishing an independent body, modeled on the United Kingdom’s Regulatory Policy Committee, responsible for publicly assessing the quality of regulatory impact assessments.

  and,

  • Recommendation 20: That the Government of Canada undertake a comprehensive review of federal regulatory and permitting systems in order to identify and remove unnecessary regulatory and reporting burdens – particularly where they disproportionately affect small and medium-sized enterprises – with the objective of reducing duplication, accelerating timelines, improving predictability for investors and aligning regulation with trusted jurisdictions where appropriate in order to free up capital and management time for growth and technology adoption.

If you search for “incentives to invest” on my blog, more than 130 references come up. Just 2 weeks ago, in “Regulatory impacts on investment”, I referenced evidence of capital investment reductions triggered by CRTC regulations – evidence that is found in the CRTC’s own industry monitoring report.

I wrote, “The next few years will test whether Canada can maintain its infrastructure leadership while pursuing competition policy based on government intervention.”

The INDU study references a witness who “proposed undertaking systemic reform to improve the impact of all regulatory decisions. He criticized that previous government efforts had focused primarily on isolated irritants, comparing this approach to ‘pumping air into a leaky tire: It might help you in the short term, but the underlying problem goes unsolved.'”

While the INDU Committee would like the Government to improve productivity with a review of telecom competition policy, such a review needs to consider the impact of regulation on investment and competition in the sector.

Prohibition of fees could raise prices

The CRTC released a decision today: “Prohibition of fees that are a barrier to switching cellphone and Internet plans” Telecom Regulatory Policy CRTC 2026-43. The decision follows changes to the Telecom Act that were introduced in the omnibus 2024 Budget Implementation Act [pdf, 1.0MB]. That legislation gave rise to a “trilogy” of CRTC Notices of Consultation: 2024-294 (that resulted in today’s decision); 2024-293 (Enhancing customer notification); and, 2024-295 (Enhancing self-service mechanisms).

To be fair to the Commission, it had no choice but to respond to the legislative change. Today’s Decision was based on this new section of the Telecom Act:

Prohibition
27.‍04 (1) A telecommunications service provider must not charge a fee to a subscriber that is related to the activation or modification of a telecommunications service plan, or any other fee whose main purpose is, in the opinion of the Commission, to discourage subscribers from modifying their service plan or cancelling their contract for telecommunications services.
Types of fees
(2) The Commission must specify the types of fees for the purposes of subsection (1).

There are two parts to the CRTC’s determination: elimination of early termination, and elimination of activation or modification fees.

The legislatively prohibited “activation or modification fees” are defined by the CRTC as those that aren’t “related to the physical installation of a telecommunications service at a customer’s premises or fees related to additional products or services the customer has explicitly chosen to purchase”. I found it interesting that the various consumer codes are modified to include the new definition by today’s policy decision, but there is no accompanying paragraph that explicitly tells consumers that such fees are prohibited under the Act. Since the Codes are consumer-facing, one might have thought that the newly defined term should be found in the Codes.

The Wireless Code already dealt with early termination fees. If a consumer terminates service within the first 2 years after receiving a device subsidy, the service provider is able to recover the remaining balance. If no device subsidy was provided, the service provider could charge up to $50. That fee can no longer be charged. As a result, it is hard to imagine how service providers will be able to offer discounts in exchange for longer-term commitments.

The CRTC launched its consultation in November 2024, 16 months ago, initially giving the public an extremely tight schedule to provide input. It extended the deadline for submissions until March 12, 2025, exactly 1 year ago today. At the time, I asked, “Will the CRTC be able to find effective ways to work around the government’s naively constructed amendments to the legislation, using a short 6-week process?” Unfortunately, I don’t think it did.

The accompanying press release for today’s Policy says “Based on the public record, the CRTC is eliminating extra fees to activate, change, or cancel a plan. This will give consumers more flexibility to manage their plans and take advantage of better offers without worrying about unexpected costs.”

I don’t understand how regulating service provider pricing mechanisms results in more flexibility for consumers. I see at least one area where the new rules could result in less flexibility. With no ability to charge an early termination fee, service providers may tie discounts for long-term contracts to only those customers getting a device. This could eliminate discounts for one or two-year commitments for consumers bringing their own devices.

Regulations that reduce choice end up reducing consumer flexibility.

AI trust

In late January, TELUS released its third annual AI Report, the 2026 AI Trust Atlas: Public perspectives on bridging the AI trust gap.

The report [pdf, 7.7MB] indicates the gap in AI trust is growing and we need to consider whether our policy framework is prepared.

Artificial intelligence has become so deeply woven into daily life that many people no longer notice or appreciate when they’re using it. AI shapes how we search, shop, navigate, communicate, and access essential information. Increasingly, AI is involved in how we access healthcare and other government services.

According to the AI Trust Atlas, AI adoption in Canada and the United States is now nearly universal, but trust in AI is not rising alongside its use. The public is embracing AI tools at unprecedented rates, yet confidence in the institutions deploying them remains low. While nearly 9 in 10 Canadians (89%) actively use an AI-enabled tool, only a third of Canadians (34%) trust the companies using it. Only a quarter (27%) of Canadians believe the current laws are adequate to address their concerns. In both Canada and the US, 90% believe it is important for AI to be regulated.

The result is a widening trust gap — a kind of trust recession — in which AI becomes more pervasive but not more legitimate.

This recession is not defined by a collapse in trust, but by something subtler and more corrosive: stagnation. The Atlas shows that only a minority of Canadians and Americans trust companies that use AI, and those numbers have barely budged even as adoption has surged. People are using AI more than ever, but they are not feeling more secure, more informed, or more protected. They are living with AI, but not living comfortably with it. That is the hallmark of a trust recession — a moment when public confidence fails to keep pace with technological integration, leaving society in a state of unease.

The roots of this unease are not technological. They are political and structural. For more than a decade, AI has been deployed faster than governments could regulate it, faster than institutions could explain it, and faster than communities could evaluate its impacts. The result is a public that is surrounded by AI but not empowered by it. People feel AI is happening to them, not with them or for them. They see systems making decisions that affect their lives, but they do not see the guardrails that should accompany those decisions. They see benefits, but they also see risks — and they do not see a governance framework capable of managing either.

What makes the Atlas so revealing is that the public’s expectations are not vague or contradictory. They are remarkably consistent. People want to know when AI is being used and how it affects them. They want systems that undergo meaningful risk assessment before deployment, not after something goes wrong. They want human oversight across all applications, not just the ones deemed “high‑risk” by technical experts. They want independent governance rather than industry self‑policing. And they want mechanisms for public input — not symbolic consultations, but real opportunities to shape how AI is designed, evaluated, and monitored.

These expectations align closely with emerging global norms, from the EU AI Act to the OECD AI Principles to Canada’s own evolving regulatory frameworks. Yet in North America, policy development remains slow, fragmented, and reactive. The public sees this. And they are losing patience. The trust recession is not a failure of public understanding. It is a failure of public policy.

Healthcare offers the clearest illustration of what is at stake. It is the sector where optimism and anxiety collide most intensely. People believe AI can improve diagnosis, accuracy, and access. They see the potential for faster triage, earlier detection, and more personalized care. But they also fear privacy breaches, algorithmic bias, accountability gaps, and the erosion of human judgment. These concerns are not hypothetical. They reflect real experiences with opaque systems, inconsistent safeguards, and unclear lines of responsibility. The tension in healthcare is not unique to healthcare. It is simply more visible there. The lesson is that AI’s benefits are real, but so are its risks, and governance determines which one prevails.

One of the most important contributions of the Atlas is its focus on Indigenous perspectives. Indigenous respondents emphasize data sovereignty, distinctions‑based design, and community‑driven governance. These principles are not peripheral. They are central to building trustworthy AI systems. Indigenous data governance frameworks — including OCAP® and the CARE Principles — offer a model for how AI can be developed in ways that respect autonomy, protect communities, and embed accountability. In a trust recession, these approaches are not optional. They are essential.

The broader message of the Atlas is that the trust recession is a policy gap, not a technological one. The public is not afraid of AI. They are afraid of unregulated AI. They are afraid of systems that make decisions without transparency. They are afraid of algorithms that affect their lives without oversight. They are afraid of institutions that deploy AI without accountability.

These fears are rational. They reflect a decade in which AI innovation outpaced public policy by orders of magnitude. We built the systems and deployed them, but we did not build the guardrails.

If AI is now part of our societal infrastructure, then trust must be treated as part of that infrastructure too. Like other infrastructure, trust requires investment, maintenance, and stewardship. The report suggests that a policy response to the trust recession must establish transparency standards to make it clear when and how AI is being used. Will we create independent oversight bodies with real authority, or advisory committees with symbolic mandates? Will we require pre‑deployment risk assessments to evaluate social, ethical, and community impacts? Will we embed public participation into the governance process, not only as a courtesy but as a democratic necessity? Will we establish rights‑based frameworks to protect individuals from algorithmic discrimination, wrongful automation, and opaque decision‑making?

These would be some of the practical foundations of a trustworthy AI ecosystem. In their absence, we risk a deepening of the trust recession, and the perceived legitimacy of AI‑enabled systems will continue to erode.

The snapshot of public opinion in the AI Trust Atlas is a warning that the social contract around AI may be fraying. The trust recession will not reverse itself or be solved by better marketing, more optimistic narratives, or promises of future benefits. It will only be solved by policy — thoughtful, enforceable, transparent, and inclusive policy. While AI is reshaping society, society has not yet reshaped the governance structures needed to manage it.

A few weeks ago, I wrote about Canada’s AI advantage, referencing a three-pillar framework: Sustainable-by-design; Sovereign-by-design; and, Responsible-by-design. Do we need to inject a fourth element: Trust-by-design?

Last October, the Government of Canada launched a public consultation to assist in the development of a national AI strategy. Last month, a report was issued [pdf, 305KB] summarizing the 64,600 submissions. Michael Geist has an excellent post talking about what didn’t make it into the report. On the subject of “trust”, Dr. Geist observed:

Just about everyone agrees that trust is essential for AI adoption, but the implementation of regulation draws different views. Some want to move quickly, while others warn that overly broad regulation will slow deployment, disadvantage domestic firms, and regulate technologies Canada does not control. Those disagreements largely disappear in the government’s summary, where trust is presented as a settled consensus objective, rather than a contested policy domain with real trade-offs.

Everyone agrees that AI trust is essential. Can we develop a policy framework that bridges the AI trust gap, avoiding the risks identified in the submissions?
