TELUS recently released its second annual AI report [pdf, 15.3MB]. Among its key findings, the research shows that AI trust is “fundamental to the social license required to unlock AI’s full potential to do good”.
Pam Snively (Chief Data & Trust Officer, TELUS) said, “While Canadians are actively embracing AI in their daily lives, they’re telling us that trust must be earned through meaningful human oversight, robust safeguards, and transparent practices. It is trust that will determine how far and how fast we can go.”
The TELUS study canvassed views from 5,667 members of Leger’s online panel between mid-December 2024 and mid-January 2025. It reports that Canadians recognize AI’s potential to drive social impact and productivity, but are concerned about the consequences if the technology is left unchecked, particularly in the absence of human intervention. The report says confidence in AI decision-making doubles when human supervision is present, especially in “high stakes” areas such as healthcare.
The report speaks about “Responsible AI”, noting that despite widespread use of AI, 91% of respondents expressed concern about its impact on Canadian society, implying a recognition of AI’s potential as a powerful tool, but also as a risk:
AI has the potential to drive meaningful positive impact and productivity, transforming the way we work, live, and learn in the world. But this potential comes with risk. Issues like misinformation, bias, societal disruption, and data security make it clear: the responsible development of AI is crucial to the technology’s evolution as a tool to drive amazing outcomes.
I’ll note that TELUS has been involved in projects such as Trust by Design for at least five years, dating back to when the company described three core principles for its use of customer data: accountability, ethical use, and transparency.
Last year, Harvard Business Review published an article by Bhaskar Chakravorti, Dean of Global Business at The Fletcher School (Tufts University), about the persistent risks that undermine trust in AI. “For instance, radiologists hesitate to embrace AI when the black box nature of the technology prevents a clear understanding of how the algorithm makes decisions”.
The HBR article suggests that the AI trust gap will be permanent, “even as we get better in reducing the risks.”
This has three major implications. First, no matter how far we get in improving AI’s performance, AI’s adopters — users at home and in businesses, decision-makers in organizations, policymakers — must traverse a persistent trust gap. Second, companies need to invest in understanding the risks most responsible for the trust gap affecting their applications’ adoption and work to mitigate those risks. And third, pairing humans with AI will be the most essential risk-management tool, which means we shall always have a need for humans to steer us through the gap — and the humans need to be trained appropriately.
At last month’s meeting of the G7, the leaders issued a statement on “AI for Prosperity”. The statement calls for “[driving] innovation and adoption of secure, responsible, and trustworthy AI that benefits people, mitigates negative externalities, and promotes our national security.” The term “trust” shows up 13 times in the brief document, with the final section entitled “Unlock AI opportunity through trust-building”.
Writing about “AI Slop,” Josh Gans of the Rotman School at the University of Toronto warns that “disinformation drives out information in the sense that people do not trust anything to be informative at all. That is, you can’t inform any people any of the time.”
Trust in AI systems will have to be earned. We need to ensure an appropriate level of AI literacy among lay people, to build sufficient understanding of AI’s challenges and opportunities. Consumers and businesses alike need to be aware of the risks that accompany powerful capabilities.