A note from Re:Create: Cox v. Sony Will Smooth Some of AI’s Jagged Copyright Frontier

Brandon Butler

Professor Matt Sag, a leading expert on AI and copyright law, has a forthcoming paper in the Duke Law Journal on what he calls AI’s “jagged copyright frontier,” the uneven application of copyright doctrines like fair use and substantial similarity to AI technology. Sag previews his argument that copyright law’s complexity makes it impossible to say all AI technology is categorically free of copyright liability risk; different technologies will have different risk profiles depending on, among other things, what they train on, what they can do, and how they are marketed. This “jagged” phenomenon creates liability risk for AI developers, but also opportunities for mutually beneficial deals with rightsholders.

I agree with Professor Sag about the big picture: of course there are implementations of AI technology that would create infringement liability without licenses in place, and that also create meaningful opportunities for win-win licensing deals. Imagine an AI tool fine-tuned on Grateful Dead concert recordings and marketed to Deadheads as a way to generate infinitely many new “live” versions of their favorite tunes, even whole concerts that never happened, steered by user prompts about their favorite shows. Such a tool could never get to market without a license from the Dead, and that’s as it should be. The ability to create and perform new versions of their songs and protected aspects of their performances would be a core part of the value proposition for the tool, and the band (or their heirs, estates, etc., now that several original members are literally dead; Rest In Power, Jerry, Phil, and Co.) should participate in the profits.

But some of the “jaggedness” of AI’s copyright frontier, and the “safety” measures meant to reduce risk, strike me as dangerous for creativity and free expression. For example, I’m not sure a general-purpose generative AI tool should have to implement “guardrails” that reject prompts that could generate material referring to existing works or elements of works, like characters. We should all be wary of a machine for creativity that prohibits its users from creating parody, pastiche, or even just fan fiction. There’s something deeply Orwellian about a creativity tool with a built-in censor. I don’t like to imagine someone sitting down to make a scathing parody of 2001: A Space Odyssey using AI to help illustrate the story, only to have their computer say, “I’m sorry, Dave, but I can’t do that.”

Thank goodness for the Supreme Court’s recent decision in Cox v. Sony. The heart of that opinion is the Court’s clear refusal to impose liability based only on the provision of a multi-purpose service with knowledge that some users will use it to infringe copyright. Of course, if you give people a tool that can write words or make images, some will use that tool to make copies or derivatives of their favorite culture, and it’s impossible to be sure ex ante that none of those creations will be infringing. Cox tells us that’s not something AI developers have to fret about. They still have to contend with the complexities of fair use and substantial similarity, as well as the specter of direct liability. But Cox smooths one edge of the frontier by making clear that AI developers won’t be responsible for their users’ infringements as long as they offer a tool with substantial non-infringing uses and don’t actively encourage infringement in the way they design and market their tech. That’s one less reason to build creative tools with built-in censors, and that’s a good thing for both free expression and innovation.