AI trust

In late January, TELUS released its third annual AI Report, the 2026 AI Trust Atlas: Public perspectives on bridging the AI trust gap.

The report [pdf, 7.7MB] indicates that the gap in AI trust is growing, and it prompts us to consider whether our policy framework is prepared.

Artificial intelligence has become so deeply woven into daily life that many people no longer notice or appreciate when they’re using it. AI shapes how we search, shop, navigate, communicate, and access essential information. Increasingly, AI is involved in how we access healthcare and other government services.

According to the AI Trust Atlas, AI adoption in Canada and the United States is now nearly universal, but trust in AI is not rising alongside its use. The public is embracing AI tools at unprecedented rates, yet confidence in the institutions deploying them remains low. While nearly 9 in 10 Canadians (89%) actively use an AI-enabled tool, only a third (34%) trust the companies deploying AI. Only a quarter of Canadians (27%) believe current laws are adequate to address their concerns. In both Canada and the US, 90% believe it is important for AI to be regulated.

The result is a widening trust gap — a kind of trust recession — in which AI becomes more pervasive but not more legitimate.

This recession is not defined by a collapse in trust, but by something subtler and more corrosive: stagnation. The Atlas shows that only a minority of Canadians and Americans trust companies that use AI, and those numbers have barely budged even as adoption has surged. People are using AI more than ever, but they are not feeling more secure, more informed, or more protected. They are living with AI, but not living comfortably with it. That is the hallmark of a trust recession — a moment when public confidence fails to keep pace with technological integration, leaving society in a state of unease.

The roots of this unease are not technological. They are political and structural. For more than a decade, AI has been deployed faster than governments could regulate it, faster than institutions could explain it, and faster than communities could evaluate its impacts. The result is a public that is surrounded by AI but not empowered by it. People feel AI is happening to them, not with them or for them. They see systems making decisions that affect their lives, but they do not see the guardrails that should accompany those decisions. They see benefits, but they also see risks — and they do not see a governance framework capable of managing either.

What makes the Atlas so revealing is that the public’s expectations are not vague or contradictory. They are remarkably consistent. People want to know when AI is being used and how it affects them. They want systems that undergo meaningful risk assessment before deployment, not after something goes wrong. They want human oversight across all applications, not just the ones deemed “high‑risk” by technical experts. They want independent governance rather than industry self‑policing. And they want mechanisms for public input — not symbolic consultations, but real opportunities to shape how AI is designed, evaluated, and monitored.

These expectations align closely with emerging global norms, from the EU AI Act to the OECD AI Principles to Canada’s own evolving regulatory frameworks. Yet in North America, policy development remains slow, fragmented, and reactive. The public sees this. And they are losing patience. The trust recession is not a failure of public understanding. It is a failure of public policy.

Healthcare offers the clearest illustration of what is at stake. It is the sector where optimism and anxiety collide most intensely. People believe AI can improve diagnostic accuracy and access to care. They see the potential for faster triage, earlier detection, and more personalized care. But they also fear privacy breaches, algorithmic bias, accountability gaps, and the erosion of human judgment. These concerns are not hypothetical. They reflect real experiences with opaque systems, inconsistent safeguards, and unclear lines of responsibility. The tension in healthcare is not unique to healthcare. It is simply more visible there. The lesson is that AI’s benefits are real, but so are its risks, and governance determines which prevails.

One of the most important contributions of the Atlas is its focus on Indigenous perspectives. Indigenous respondents emphasize data sovereignty, distinctions‑based design, and community‑driven governance. These principles are not peripheral. They are central to building trustworthy AI systems. Indigenous data governance frameworks — including OCAP® and the CARE Principles — offer a model for how AI can be developed in ways that respect autonomy, protect communities, and embed accountability. In a trust recession, these approaches are not optional. They are essential.

The broader message of the Atlas is that the trust recession is a policy gap, not a technological one. The public is not afraid of AI. They are afraid of unregulated AI. They are afraid of systems that make decisions without transparency. They are afraid of algorithms that affect their lives without oversight. They are afraid of institutions that deploy AI without accountability.

These fears are rational. They reflect a decade in which AI innovation outpaced public policy by orders of magnitude. We built the systems and deployed them, but we did not build the guardrails.

If AI is now part of our societal infrastructure, then trust must be treated as part of that infrastructure too. Like other infrastructure, trust requires investment, maintenance, and stewardship. The report suggests that a policy response to the trust recession must establish transparency standards to make it clear when and how AI is being used. Will we create independent oversight bodies with real authority, or advisory committees with symbolic mandates? Will we require pre‑deployment risk assessments to evaluate social, ethical, and community impacts? Will we embed public participation into the governance process, not only as a courtesy but as a democratic necessity? Will we establish rights‑based frameworks to protect individuals from algorithmic discrimination, wrongful automation, and opaque decision‑making?

These would be some of the practical foundations of a trustworthy AI ecosystem. In their absence, we risk a deepening of the trust recession, and the perceived legitimacy of AI‑enabled systems will continue to erode.

The snapshot of public opinion in the AI Trust Atlas is a warning that the social contract around AI may be fraying. The trust recession will not reverse itself or be solved by better marketing, more optimistic narratives, or promises of future benefits. It will only be solved by policy — thoughtful, enforceable, transparent, and inclusive policy. While AI is reshaping society, society has not yet reshaped the governance structures needed to manage it.

A few weeks ago, I wrote about Canada’s AI advantage, referencing a three-pillar framework: Sustainable-by-design, Sovereign-by-design, and Responsible-by-design. Do we need to inject a fourth element: Trust-by-design?

Last October, the Government of Canada launched a public consultation to assist in the development of a national AI strategy. Last month, a report was issued [pdf, 305KB] summarizing the 64,600 submissions. Michael Geist has an excellent post discussing what didn’t make it into the report. On the subject of “trust”, Dr. Geist observed:

Just about everyone agrees that trust is essential for AI adoption, but the implementation of regulation draws different views. Some want to move quickly, while others warn that overly broad regulation will slow deployment, disadvantage domestic firms, and regulate technologies Canada does not control. Those disagreements largely disappear in the government’s summary, where trust is presented as a settled consensus objective, rather than a contested policy domain with real trade-offs.

Everyone agrees that AI trust is essential. Can we develop a policy framework that bridges the AI trust gap, avoiding the risks identified in the submissions?
