Balance scales with books on one side and an electronic brain on the other.

Why AI’s legal wins create leverage for journalists

Courts’ early rulings leave an opening for news organizations to assert real market harm

Nina Brown is the Laura J. and L. Douglas Meredith Professor of Teaching Excellence at Syracuse University’s Newhouse School of Public Communications. She teaches courses in communications law. 

In the battle between copyright owners and generative AI companies, round one went to the AI defendants when two federal courts held that using authors’ works to train large language models (LLMs) constituted a fair use — and was not infringement. This victory was a blow to the authors as well as other copyright owners with similar cases against AI companies. Yet these decisions were arguably good news for journalists.

To understand why, it’s helpful to look at the landscape of copyright claims against generative AI companies. Since their proliferation in 2022, there have been more than 50 copyright lawsuits brought by authors, publishers, artists, news organizations, musicians, and other creators. Copyright law gives owners the exclusive right to copy and control the distribution of their works, and these owners claim the AI defendants have infringed on those rights.

It’s easy to sympathize with the plaintiffs: They created original works that were used without their consent. But they face a significant hurdle: the fair use defense. Fair use allows copyrighted works to be used in certain ways without the owner’s permission. And it’s the reason the defendants prevailed in the first two cases against generative AI companies, Bartz v. Anthropic and Kadrey v. Meta. These cases should be read with a bit of caution: Both were summary judgment rulings shaped by the particular records before the courts, not final judgments after trial, and the fair use test is highly context-dependent. To determine whether a particular use of a copyrighted work constitutes a fair use, judges must apply a four-part test that balances the creators’ rights against the public interest. It’s a fact-intensive inquiry, and each case must be examined on its specific facts. As such, broad conclusions are difficult.

Even so, it’s impossible not to read them as tea leaves. These are the first decisions in the conflict between copyright owners and generative AI companies, and every court that follows will use them as a roadmap. While each case will have its own facts, and other courts may disagree or ultimately reach different conclusions, Bartz and Kadrey have set the trajectory for how copyright law will adapt to this technology.

“Spectacularly transformative”

In Bartz v. Anthropic, authors including Andrea Bartz and Charles Graeber sued Anthropic, creator of the LLM known as Claude. To train Claude, Anthropic downloaded millions of books, including some written by the plaintiffs, from pirated libraries and digitized millions of print books it purchased in order to build a central library. The plaintiffs claimed Anthropic infringed their copyrights by scanning printed books (that it had lawfully purchased) into digital form, copying pirated books (that it had not lawfully purchased), and creating a permanent digital library of these books, some of which were used to train Claude.

In what can only be described as a victory for all generative AI companies, the court ruled that Anthropic’s training use was “quintessentially” and “spectacularly transformative.” That finding is significant in a fair use analysis: when a use is found to be transformative, it tips the entire scale so that the use is more likely to be deemed fair. The court said Anthropic’s use was transformative because it had used the books not to reproduce or replace them, but to teach the models how to generate entirely new text in response to user prompts, much like how people learn to read and write by reading others’ work. The court also found Anthropic’s digitization of lawfully purchased books to be protected, holding that converting them into digital form for analysis and search was a fair and transformative use. Importantly, the authors did not allege that the text generated by the LLMs reproduced their copyrighted works. Had such evidence existed, the court suggested, the case might have come out differently.

The court came to a different conclusion about the creation and retention of pirated copies to build a permanent library. Because pirating books that could have been lawfully purchased was not reasonably necessary to any transformative use and served only to create an unauthorized archive of copyrighted works, the court held that this constituted infringement. (These claims are proceeding as a class action, and the parties have since entered settlement discussions.)

When AI creates market harm

In Kadrey v. Meta, another group of authors (this time including Richard Kadrey, Sarah Silverman, Ta-Nehisi Coates, and others) sued Meta Platforms, alleging similar copyright infringement as the Bartz plaintiffs for the use of their books in training its LLM, LLaMA. (The Kadrey plaintiffs added claims that the outputs also reproduced expressive elements of their copyrighted works, but those claims were dismissed as merely speculative.)

On the key question of whether using copyrighted books to train the LLM was a fair use, the court was far more measured than the Bartz court. While it agreed that the training use was “highly transformative,” the court warned that transformative purpose alone cannot outweigh significant market harm. In fact, it prognosticated that training AI on copyrighted works will likely be unlawful when it creates market harm, either because the LLM floods the market with similar AI-generated content or because it undermines authors’ ability to profit from their works. Because there was no evidence that the plaintiffs in this case had suffered economic harm, the court held that the training was a fair use, but it emphasized that market harm was “the single most important factor” in the fair use analysis.

While the judges in both cases agreed that the use of copyrighted materials for training was highly transformative, they diverged on how much weight that transformativeness should carry against the other fair use factors, particularly the fourth factor, market harm. That divergence may matter less than it appears: the plaintiffs in Bartz, like those in Kadrey, presented no cognizable evidence of market harm, so in both cases there was nothing to weigh against the transformative nature of the use.

AI needs journalists’ content

How is any of this good news for journalists? Even though neither Bartz nor Kadrey involved news organizations, the decisions offer a clear roadmap for how newsrooms can leverage market harm arguments to protect their content or negotiate licensing deals.

The judge in Kadrey specifically highlighted that cases involving training on news content might come out differently. As he put it, “An LLM that could generate accurate information about current events might be expected to greatly harm the print news market.” News, in other words, is different. Unlike novels or creative works that readers seek out for entertainment, news articles exist to inform — and AI systems capable of generating accurate, timely summaries could easily replace the need to visit a publisher’s site or pay for a subscription. That’s the kind of direct market substitution that tends to tip the fair use test towards infringement.

This is where news organizations likely have something that the authors in Bartz and Kadrey lacked: a credible path to argue market harm. If LLMs displace traditional reporting by producing summaries or real-time updates, that’s actual harm, not speculative. This is true even if courts ultimately say training on copyrighted news data is a fair use, because LLMs would still need fresh journalistic input to remain accurate and current. In other words, it’s not a one-time need for training to learn general language patterns. It’s the continuous need to gain new information to deliver up-to-date and accurate responses that users expect.

This is where news organizations gain real leverage. AI companies need news content. And they need it on an ongoing basis. It’s possible, though unlikely, that future courts will break from Kadrey and rule that the transformative use outweighs existing market harm. But litigating each claim is costly, and fair use is not a certain outcome. Can AI companies afford that risk? Maybe. But licensing provides certainty and reduces legal risk, and it carries reputational and regulatory value: AI companies benefit from being seen to fairly compensate creators.

So yes, round one went to the AI companies. But with a stronger argument for market harm and licensing leverage in their hands, round two may belong to the newsroom.


Cite this article

Brown, Nina (2025, Dec. 1). Why AI’s legal wins create leverage for journalists. Reynolds Journalism Institute. Retrieved from https://rjionline.org/news/why-ais-legal-wins-create-leverage-for-journalists/