Symposium at Reynolds Journalism Institute asks: What happens when AI creates defamatory content?

In a world where artificial intelligence can write headlines and even compose passable news stories, the notion of “defamation by algorithm” is no longer the realm of science fiction. According to a panel of experts in journalism, law and social media who gathered Friday, March 10, at the University of Missouri’s Reynolds Journalism Institute, this uncharted territory needs to be charted, and quickly.

The panel, moderated by constitutional law expert Lyrissa Lidsky as part of a larger, all-day symposium on the intersection of law, journalism and social media, met at a moment when Dominion Voting Systems and Fox News are locked in a high-profile defamation lawsuit and the U.S. Supreme Court is weighing whether to weaken the legal shield that protects social media companies from liability for user posts.

The panel zeroed in on the question of whether ChatGPT, a language model that continues to make headlines for its ability to convincingly reproduce human language, can produce defamatory speech. Frank LoMonte, counsel for CNN and a longtime press freedom advocate, tried to boil things down early on.

“Think of it in the same way as commissioning a piece from a freelance writer,” LoMonte said. “If a freelance writer was widely known to produce inaccurate stories, and I commissioned a story from them, then I am liable.”

But others argued that AI and algorithms are too multifaceted to pin down with the freelancer metaphor. James Daire, associate director of legal for online review site Yelp, noted that the company fended off a libel lawsuit in 2016 that would have held Yelp accountable for its aggregation of user-generated ratings.

It wasn’t the first lawsuit of its kind. In 2013, TripAdvisor defeated a similar claim that its algorithmically generated list of America’s “dirtiest” hotels, based on user ratings of cleanliness, constituted defamation.

Despite these outcomes, Daire is concerned that the country’s courts don’t have a nuanced understanding of the processes by which computers generate everything from top ten lists to readable — but sometimes error-ridden — news articles.

“I don’t think a lot of the courts have a good handle on what an algorithm is,” Daire said. “It’s really just a shortcut if you think about it. There isn’t really anything inherently ‘speech-y’ about the algorithm itself.”

One way that lawmakers have begun attempting to close that knowledge gap is by pushing for “algorithm audits” that would force businesses to analyze their algorithms for issues like racial or socioeconomic bias and report on the results. Jasmine McNealy, an associate professor at the University of Florida College of Journalism and Communications, said these audits could be a mixed blessing.

“If you’re doing what you’re supposed to do, seeing biases and figuring out how you could mitigate some of them, then that report could start to be used as evidence of, ‘are you entertaining doubts about your system, and are you fixing them?’” McNealy said. “In the Dominion case, we’re looking at emails. An algorithmic audit could say the same thing. ‘This is only giving us 60% of what we want, and we want it to get better. This isn’t where we want it to be, but we’re going to keep using it.’ Could that possibly be used as evidence of liability? I think so.”

For now, however, the question of liability is still emerging. LoMonte pointed out that so far, the Supreme Court is proceeding cautiously in establishing long-term precedents for technology that continues to change rapidly, putting the onus on news organizations and social media companies to avoid the pitfalls of tools that might be as risky as they are promising.

“The more that ChatGPT’s faults are known, the more organizations publish from it at their own peril,” LoMonte said.
