Rightsholders have waged an aggressive campaign alleging that AI training harms them. When that rhetoric is put to the test in court, however, they keep coming up short. Last year, the Southern District of New York tossed claims for removal of “copyright management information” in AI training, explaining that the plaintiffs had alleged no harm and therefore had no standing to sue. (Earlier this week, the court dismissed the claims again, “with prejudice,” meaning they’re dead forever unless there’s a successful appeal.) Last week, Judge Eumi K. Lee rejected a group of music publishers’ argument that AI developer Anthropic would cause them irreparable harm by training its AI assistant Claude on in-copyright lyrics. Outside the courts, the alleged existential threat of AI is often taken for granted, but Judge Lee’s opinion shows that when these claims are put to the test, the plaintiffs can’t produce evidence of any harm beyond the purely circular one of “not getting paid for this new use.”
Before getting into Judge Lee’s opinion, we should note that Anthropic and the publishers have already agreed on an approach to filtering Anthropic’s outputs, i.e., the responses of its AI chat client Claude to user prompts. I have my own worries about these “guardrails” (what remains of fair use if your creative tools can’t quote from prior works?), but the publishers seem satisfied that Anthropic’s measures will prevent Claude from producing infringing outputs. The court is now focused on whether training causes any irreparable harm in and of itself, and Judge Lee is dubious.
The music publishers argued that AI training causes two kinds of harm: reputational harm and market harm. Their reputational harm arguments were mostly premised on Claude generating infringing outputs, a concern that Judge Lee explained had already been addressed by the parties’ agreed-upon guardrails. They also alleged a generalized harm from “loss of control” over how their works are used in training, but Judge Lee explained that, without any evidence of concrete injury, this generic claim doesn’t pass muster.
The publishers’ market arguments fare no better. The publishers offered no evidence for the claim that unlicensed AI training somehow impacts ordinary markets for non-AI uses like lyric aggregator websites, which Judge Lee says “provide entirely different services and do not compete with Claude.” She notes that the publishers again relied on assumptions about harmful outputs, arguments rendered moot once the parties agreed on the guardrails framework. Lee is clearly frustrated with the publishers’ efforts here, saying that “declarations by Publishers’ representatives are largely duplicative of each other, and state in a general and conclusory manner that Anthropic’s use of the Works is harmful” and that their arguments “do not demonstrate how using the Works to train Claude is affecting – let alone diminishing – the value of any of the Works.” Ouch.
The only remaining category of harm is the circular one: harm to the alleged market for licensing works for AI training. This is where Judge Lee offers her most important insight. After acknowledging the publishers’ argument that “courts must treat the use of copyrighted works in emerging technology markets with caution and care,” Judge Lee explains that it would therefore be a mistake for the Court “to define the contours of a licensing market for AI training where the threshold question of fair use remains unsettled.” Accordingly, she denies the publishers’ request for the “extraordinary relief of a preliminary injunction based on legal rights (here, licensing rights) that have not yet been established.” This key point, that copyright plaintiffs have yet to establish that licensing for AI training is among their exclusive rights, needs to be printed on leaflets and dropped from airplanes. For now, we are doing the next best thing and dropping it in your inbox.