AI-generated content

Bronwyn Howell recently wrote an article entitled “AI-Generated Content, Fake News and Credible Signals” for AEIdeas that I found to be particularly insightful.

It has been a couple of months since I wrote “Emerging technology policy”, and I think the AEI article presents some good perspectives.

She writes about the potential for people to be misled by AI-generated content due to what she terms information asymmetry: “Exploiting information asymmetries is not new. Snake oil salesmen and the advertising industry have long been ‘economical with the truth’ to persuade gullible consumers to buy their products.”

In a digital world, consumers of AI-generated content do not necessarily know “whether the content they consume is a factual representation or a digital creation or manipulation, but the publisher does.” Regulations requiring AI-generated content to be labeled as such are intended to help overcome this information asymmetry.

Sometimes, no harm comes from the consumer not knowing. For example, if I am not told the aliens in a sci-fi movie are computer-generated, I am unlikely to be harmed; indeed, my enjoyment may be reduced if I am reminded of this before the movie starts, or if the information is emblazoned across the screen when the aliens are in action. But sometimes harm does come from the consumer not knowing—for example, when a video shows a politician saying or doing things that they did not. Yet even here, it is not clear or straightforward. If someone is lampooning a politician for entertainment purposes, then labelling is likely unnecessary (and even potentially harmful if it detracts from the entertainment experience). But if it is an election advertisement, and the intention is to convince voters that the portrayed events are factual and not fictional, then the asymmetry is material.

Potential harms may arise not from how the content was created, but from the intent behind its use. If content is intended to deceive the consumer, regardless of how it was created, then we need to examine ways to protect the public.

It may not be sufficient to require labelling of content generated by AI: it is too easy to lie about the content’s origins, and, as noted above, labelling may not be necessary if no harm ensues. Instead, the article suggests that regulatory “controls are required for the subset of transactions in which harm may occur from fake content.” She uses the example of election advertising, where rules already exist in most jurisdictions: “This suggests electoral law, not AI controls, are the best place to start managing the risks for this application.”

Do we need technology-specific legislation and regulation? Or should we ensure that existing protections developed for conventional technologies can apply in a world of AI-generated content?

An article on ABC News earlier this week says, “The war in Gaza is highlighting the latest advances in artificial intelligence as a way to spread fake images and disinformation.”

The risk that AI and social media could be used to spread lies to U.S. voters has alarmed lawmakers from both parties in Washington. At a recent hearing on the dangers of deepfake technology, U.S. Rep. Gerry Connolly, Democrat of Virginia, said the U.S. must invest in funding the development of AI tools designed to counter other AI.

A paper [pdf, 300KB] released earlier this week by Joshua Gans of the Rotman School of Management at the University of Toronto asks “Can Socially-Minded Governance Control the AGI Beast?” Spoiler alert: he concludes (robustly) that it cannot.
