Thursday, November 21, 2024

Can AI commit libel? We’re about to find out

The tech world’s hottest new toy may find itself in legal hot water as AI’s tendency to invent news articles and events comes up against defamation laws. Can an AI model like ChatGPT even commit libel? Like so much surrounding the technology, it’s unknown and unprecedented — but upcoming legal challenges may change that.

Defamation is broadly defined as publishing or saying damaging and untrue statements about someone. It’s complex and nuanced legal territory that also differs widely across jurisdictions: a libel case in the U.S. is very different from one in the U.K., or in Australia — the venue for today’s drama.

Generative AI has already raised numerous legal questions that remain unanswered, for instance whether its use of copyrighted material amounts to fair use or infringement. But as recently as a year ago, neither image- nor text-generating AI models were good enough to produce something you would confuse with reality, so questions of false representation were purely academic.

Not so much now: The large language model behind ChatGPT and Bing Chat is a bullshit artist operating at an enormous scale, and its integration with mainstream products like search engines (and increasingly just about everything else) arguably elevates the system from glitchy experiment to mass publishing platform.

So what happens when the tool/platform writes that a government official was charged in a case of malfeasance, or that a university professor was accused of sexual harassment?

A year ago, with no broad integrations and rather unconvincing language, few would have said such false statements could be taken seriously. But today these models answer questions confidently and convincingly on widely accessible consumer platforms, even when those answers are hallucinated. They attribute false statements to real articles, or true statements to invented ones, or simply make it all up.

Due to the nature of how these models work, they don't know or care whether something is true, only that it looks true. That's a problem when you're using one to do your homework, sure, but when it accuses you of a crime you didn't commit, that may well at this point be libel.
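To make that concrete, here is a deliberately toy sketch of why plausibility, not truth, drives the output. The model, prompt and probabilities below are entirely invented for illustration and do not come from any real system; the point is simply that generation samples whatever continuation looks statistically likely, and nothing in the loop ever checks a fact.

```python
import random

# Hypothetical, hand-written continuation probabilities for one prompt.
# A real model learns billions of such statistics from text; none of them
# encode whether a claim is actually true.
toy_model = {
    "The mayor was": [
        ("re-elected in a landslide", 0.40),
        ("convicted in a bribery scandal", 0.35),  # plausible-sounding, never fact-checked
        ("praised for the new library", 0.25),
    ],
}

def generate(prompt: str) -> str:
    """Pick a continuation weighted only by how 'likely' it looks."""
    continuations, weights = zip(*toy_model[prompt])
    choice = random.choices(continuations, weights=weights, k=1)[0]
    return f"{prompt} {choice}."

if __name__ == "__main__":
    # Either output reads like a confident news sentence;
    # truth never enters the selection.
    print(generate("The mayor was"))
```

Production systems are vastly more sophisticated, but the underlying dynamic is the same: the sentence that sounds most like something a news article would say wins, whether or not it ever happened.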

That is the assertion made by Brian Hood, mayor of Hepburn Shire in Australia, after he was informed that ChatGPT named him as having been convicted in a bribery scandal some 20 years ago. The scandal was real, and Hood was involved, but he was the one who went to the authorities about it and was never charged with a crime, as Reuters reports his lawyers saying.

Now, it’s clear that this statement is false and unquestionably detrimental to Hood’s reputation. But who made the statement? Is it OpenAI, which developed the software? Is it Microsoft, which licensed it and deployed it under Bing? Is it the software itself, acting as an automated system? If so, who is liable for prompting that system to create the statement? Does making such a statement in such a setting constitute “publishing” it, or is this more like a conversation between two people? In that case would it amount to slander? Did OpenAI or ChatGPT “know” that this information was false, and how do we define negligence in such a case? Can an AI model exhibit malice? Does it depend on the law, the case, the judge?

These are all open questions because the technology that they concern didn’t exist a year ago, let alone when the laws and precedents legally defining defamation were established. While it may seem silly on one level to sue a chatbot for saying something false, chatbots aren’t what they once were. With some of the biggest companies in the world proposing them as the next generation of information retrieval, replacing search engines, these are no longer toys but tools employed regularly by millions of people.

Hood has sent a letter to OpenAI asking it to do something about this, though it’s not clear what the company can do, or whether Australian or U.S. law compels it to do anything at all. But in another recent case, a law professor found himself accused of sexual harassment by a chatbot citing a fictitious Washington Post article. Such false and potentially damaging statements are likely more common than we think; they are only now becoming serious and visible enough to warrant reporting to the people implicated.

This is only the very beginning of this legal drama, and even lawyers and AI experts have no idea how it will play out. But if companies like OpenAI and Microsoft (not to mention every other major tech company and a few hundred startups) expect their systems to be taken seriously as sources of information, they can’t avoid the consequences of those claims. They may suggest recipes and trip planning as starting points, but people understand that the companies are presenting these platforms as a source of truth.

Will these troubling statements turn into real lawsuits? Will those lawsuits be resolved before the industry changes yet again? And will all of this be mooted by legislation in the jurisdictions where the cases are being pursued? It’s about to be an interesting few months (or more likely years) as tech and legal experts attempt to tackle the fastest-moving target in the industry.

