
Media law experts discuss legal risks and challenges of AI tools

Check your insurance policies, be transparent and treat your favorite Generative AI application like an intern whose work needs plenty of supervision

The AI lawsuits have commenced.

A Georgia man sued OpenAI in June after ChatGPT, responding to a journalist’s query about him, generated false statements accusing him of fraud and embezzlement. Such lawsuits hold a warning for journalists using AI tools in their work.

AI tools such as large language models (LLMs) like ChatGPT or image generators like Stable Diffusion can generate text and images quickly, aid journalists in headline generation, and comb through datasets or archives. Is that having an impact in newsrooms?

Well, just last month, German publisher Axel Springer SE announced it was “parting ways with colleagues who have tasks that in the digital world are performed by AI and/or automated processes.” Meanwhile, new jobs are popping up to verify video and audio or to write ChatGPT prompts, a skill poised to replace search engine query wizardry.

Digital Content Next, an industry trade group that represents 50 of the largest media groups, shared a list of seven recommendations about generative artificial intelligence called “Principles for Development and Governance of Generative AI.” All of them involved existing or potential legal or regulatory concerns.

We spoke to several media lawyers and media law experts about what they’re hearing and how they are talking to clients and each other about the risks and challenges posed by AI tools. They focused on three main issues — defamation, invasion of privacy, and copyright.

Defamation

Bryan Clark, a former journalist who now works as an attorney at Vedder Price in Chicago, said that AI tools may be improving in their accuracy, but there are still obvious risks.

“There’s still a regular occurrence where you plug things in and something pops out that’s completely made up and fantastical,” Clark said. “If you’re a journalist trying to use this for research purposes, ChatGPT or any other tool could scour a lot of sources faster than you can, but there also might be something in there that’s completely false. If you don’t know that, you could end up building a whole story around something that is made up.”

Much of the concern about false information stems from what have become known as ChatGPT’s “hallucinations,” which open up potential liability for publishers, said Lyrissa Lidsky, the Raymond and Miriam Ehrlich Chair of U.S. Constitutional Law at the University of Florida’s Levin College of Law.

“Not only does it invent information, it also invents documents that supposedly back up that information,” Lidsky said. “Of course, if it’s inaccurate and wrong, it can lend to the harm that they can cause by being defamatory.”

Lidsky outlined some classic hurdles in defamation law, such as fault requirements based on standards of human reasonableness, which do not map neatly onto how ChatGPT operates, though she said that may be more of a problem for journalists who use ChatGPT without verifying its output.

“What journalists need to think about when they are using ChatGPT is that the law is not going to protect them just because they relied on ChatGPT,” Lidsky said. “They’re going to have to engage in normal journalistic investigation to make sure that the information they receive from ChatGPT is correct. If we’re putting it in normal liability terms, I think it could be actual malice to rely on ChatGPT without verification given its known credibility problems.”

Actual malice, the higher fault standard the U.S. Supreme Court has developed over the past 60 years, is required in defamation cases filed by public officials or public figures. Negligence, the standard for other defamation plaintiffs, may be established by relying on ChatGPT in its current, less reliable form, Lidsky suggested, though that may change as the tools improve.

Tom Leatherbury, director of the First Amendment Clinic at SMU’s Dedman School of Law, said that liability may hinge on whether the company that created the AI tool could be held responsible for errant information published unwittingly by journalists, and to what extent libel insurance policies may cover such issues.

“If you take an AI-generated piece and you publish it, with or without changes, if the defamation occurs in the unchanged part of it, can you third-party in the AI company?” Leatherbury said, which would mean adding the AI company to the lawsuit if you’re sued. “There’s the whole problem of have insurance policies caught up to this and what are they doing?”

Leatherbury suggested one step news organizations can take is to check whether their defamation insurance policies cover AI-generated content.

Privacy

The media law experts cited numerous privacy concerns.

“You could have traditional invasion of privacy,” Leatherbury said. “You could have private facts claims or false light claims in jurisdictions that recognize false light. You could have misappropriation type of invasion of privacy claims because of taking the content from the original creator.”

Similarly, Clark said clients he’s talked to are using AI tools to sort through large volumes of data, some of which may be personally identifiable or provided to a company under terms of use that do not extend to sharing with the general public. That could create problems if an AI tool gets hold of that data. Lidsky noted similar issues with such private information.

“LLMs are relying on data that’s out there that we can’t necessarily track down to the source, but they could be invading privacy in the ways that they combine the data that’s out there,” Lidsky said. “Connecting dots that the average person couldn’t connect by reaching deep into datasets and creating a composite of an individual in a way that discloses private facts, or perhaps the information is not defamatory but it portrays them in an offensive way that makes out a false light claim.”

What journalists need to think about when they are using ChatGPT is that the law is not going to protect them just because they relied on ChatGPT. They’re going to have to engage in normal journalistic investigation to make sure that the information they receive from ChatGPT is correct.

Professor Lyrissa Lidsky, University of Florida

Lidsky also raised challenges with misappropriation or violation of what is known as the “right of publicity” that generally protects one’s likeness or image from commercial use without consent. Image generators such as Stable Diffusion or Dall-E have already received criticism for allowing users to create harmful or misleading images of celebrities, including Pope Francis.

“I think those misappropriation claims are just going to be epic because it allows you to combine an image with all sorts of things in ways that are misleading,” Lidsky said. “We’re already seeing some issues with deepfakes.”

Online publishers have long relied on Section 230 of the Communications Decency Act as a shield against tort lawsuits for privacy and defamation. The 1996 law protects web hosts, including news websites, from liability for third-party content they host. Michael Lambert, an attorney for Haynes & Boone in Austin, questioned how far Section 230 would extend when journalists are using AI tools to generate content.

“On one hand, it is third-party content,” Lambert said. “It’s scanning the web and these AI tools are getting information from third parties. But it’s using that third-party information and creating pretty new content that itself is very removed from the original content. ChatGPT is using individual words, but it’s coming up with new works, so under typical analysis, these companies are materially altering the third-party content so much so that you wouldn’t think Section 230 would apply.”

Copyright

Several lawsuits have already been filed against AI image generators for using human creators’ copyrighted works without authorization. News organizations that use these works could face similar legal threats.

“In production agreements and other content-related agreements there are always (representations) and warranties that this is your work and you own the rights to it and you are not invading anyone’s privacy or invading any property rights like copyrights,” Leatherbury said. “I wouldn’t want to be advising people who use AI-generated content on how to modify those reps and warranties and whether or not they would actually be acceptable to a publisher.”

The U.S. Supreme Court’s recent ruling that Andy Warhol’s artistic reimaginings of photographer Lynn Goldsmith’s photographs of the musician Prince were not fair use did not look kindly upon using a secondary work for a purpose similar to the original’s (in that case, licensing photographs), regardless of how transformative it may be. Lambert suggested that may affect how courts consider fair use in cases involving AI tools, though journalists may have a stronger argument.

“Looking at it through that perspective is what courts will need to do under the Warhol case,” Lambert said. “But it opens up a lot of questions such as what is a commercial use, and where that line is. With ChatGPT, you would think it would seem that way. If it’s used for a greater good in a different way, I think that a court would be more likely (to find fair use) than if you were just straight up selling it for a license.”

Advice

Lidsky will teach a course called “AI, Big Tech and the First Amendment” to law students in the spring. She anticipates a growing body of law relevant to journalists and publishers as they increasingly use AI tools.

“I think you have to start using them because they’re going to revolutionize the information gathering process in all different kinds of fields,” Lidsky said. “But I think you need to use them with a great, great deal of caution because of the known and unknown vulnerabilities that they have, particularly the known propensity for spewing out false information.”

The media law experts we spoke to all suggested journalists should proceed with caution, using the tools mostly for background work and research, with the understanding that decades of law stand for the idea that journalists are liable for nearly everything they publish. Lambert suggested thinking of an AI tool as something more like an intern.

“You could ask it specific research questions, or you could ask it to come up with story ideas,” Lambert said. “But I feel like a journalist needs to review all of that himself or herself. It can’t be something you just take from ChatGPT and plop into a news article. You should go and fact-check everything yourself. There needs to be the human element still at this point.”

The attorneys agreed that transparency was the key for journalists, both from a legal and an ethical perspective. News organizations should be clear to their audience what tools they are using, and for what purposes. That can get down to details such as proper attribution.

“Do you source it to let your readers know that this is partially AI generated?” Leatherbury said. “And how do you source it? Do you source within a paragraph or at the end?”

Clark warned that using AI tools may be riskier for publications with fewer resources than major newsrooms.

“It’s unlikely that the New York Times, the Wall Street Journal, the Washington Post are going to be running things that popped out of ChatGPT without researching just like they would any other project,” Clark said. “But I could see more fringe publications that already have lesser reporting and fact-checking resources attempting to use this as a means to help them generate content and generate eyeballs on their publication, inadvertently getting into hot water.”

Jared Schroeder is an associate professor of journalism studies at the Missouri School of Journalism. He is the author of “The Press Clause and Digital Technologies’ Fourth Wave.”

Daxton “Chip” Stewart is a professor of journalism at Texas Christian University in Fort Worth. He is the author of “Media Law Through Science Fiction: Do Androids Dream of Electric Free Speech?”
