Technology-specific legislation for AI

A couple of months ago, I reminded readers that I generally don’t like technology-specific legislation.

With reference to the concerns about online harms, I wrote “it is reasonable to expect that content that is considered illegal in print media should continue to be considered illegal in digital form.”

The sharing of non-consensual intimate deepfakes of Taylor Swift is sparking calls for new legislation to deal with images generated by artificial intelligence (AI). A recent article from ITIF examines the issue of explicit AI-generated images.

ITIF argues that the heart of the issue predates AI, and indeed can be traced to before the internet era. The underlying issue in this instance is image-based abuse – sharing (or threatening to share) nude or sexual images of another person without consent. Consider Playboy’s publication of nude photos of Marilyn Monroe, or Hustler’s publication of nude photos of Jacqueline Kennedy Onassis.

What has changed in the internet age is the ease of becoming a global distributor, and AI has simplified ‘photoshop’-style manipulation to the point where anyone can generate fake images (and videos).

The root problem, independent of technology, is the non-consensual sharing of intimate images, an act that is treated inconsistently across jurisdictions in the United States.

In Canada, Section 162.1 of the Criminal Code deals with this.

162.1 (1) Everyone who knowingly publishes, distributes, transmits, sells, makes available or advertises an intimate image of a person knowing that the person depicted in the image did not give their consent to that conduct, or being reckless as to whether or not that person gave their consent to that conduct, is guilty

  (a) of an indictable offence and liable to imprisonment for a term of not more than five years; or
  (b) of an offence punishable on summary conviction.

An article last week by former CRTC vice-chair Peter Menzies suggests that a tweak to that section could provide greater clarity around the phrase “person depicted” in the Criminal Code.

ITIF notes, “Unfortunately, given widespread fears about AI and backlash against the tech industry, some critics are quick to point the finger at AI.” It is worth noting that the usage policies of OpenAI (the company behind ChatGPT) already prohibit “Impersonating another individual or organization without consent or legal right” and “Sexually explicit or suggestive content.”

ITIF argues, “unless policymakers ban generative AI entirely, the underlying technology — which is publicly available to run on a personal computer — will always be around for bad actors to misuse.” For their part, Google and Meta have created tools for users to report unauthorized intimate images.

ITIF suggests that those who distribute non-consensual intimate images should face significant civil and criminal liability, but that this liability should not flow from technology-specific legislation targeting AI. Legislative solutions need to focus on stopping perpetrators of revenge porn, independent of the technology used to generate or distribute it.

This past Thursday, the FCC adopted a Declaratory Ruling [pdf, 155 KB] making it illegal to use voice-cloning technology in robocall scams targeting consumers. As the FCC notes, State Attorneys General could already target the outcome of an unwanted robocall using an AI-generated voice: the underlying scam or fraud could be prosecuted. The major change is that the act of placing a robocall with an AI-generated voice is now illegal in itself, without having to go after the scam. The FCC’s ruling expands “the legal avenues through which state law enforcement agencies can hold these perpetrators accountable under the law.”

Last month, robocalls using an AI-generated imitation of President Biden’s voice encouraged voters to skip participating in the New Hampshire primary.

Technology-specific legislation for AI? Your thoughts are welcome.
