A Small Win for Music Publishers in the Fight Over Claude Outputs
A Deal Reached Between Anthropic and Publisher Plaintiffs
On Thursday, music publishers got a small win in a copyright fight alleging that Anthropic’s Claude chatbot regurgitates song lyrics without paying licensing fees to rights holders. In an order, US district judge Eumi Lee outlined the terms of a deal reached between Anthropic and publisher plaintiffs who license some of the most popular songs on the planet, which she said resolves one aspect of the dispute.
The Deal’s Key Provisions
Through the deal, Anthropic admitted no wrongdoing and agreed to maintain its current strong guardrails on its AI models and products throughout the litigation. These guardrails, Anthropic has repeatedly claimed in court filings, effectively prevent outputs containing actual lyrics from hits like Beyoncé’s ‘Halo,’ Spice Girls’ ‘Wannabe,’ Bob Dylan’s ‘Like a Rolling Stone,’ or any of the 500 songs at the center of the suit.
Perhaps more importantly, Anthropic also agreed to apply equally strong guardrails to any new products or offerings, granting the court authority to intervene should publishers discover more allegedly infringing outputs. Before seeking such an intervention, publishers may notify Anthropic of any allegedly harmful outputs. That covers outputs containing partial or complete song lyrics, as well as any derivative works the chatbot may produce mimicking the lyrical style of famous artists.
Review and Response Process
After an expeditious review, Anthropic will provide a ‘detailed response’ explaining any remedies, or clearly stating its intent not to address the issue. The process is meant to build accountability and transparency into Anthropic’s handling of allegedly infringing outputs.
The Ongoing Dispute Over AI Training on Lyrics
Although the deal does not settle publishers’ more substantial complaint, which alleges that Anthropic’s training of its AI models on copyrighted works violates copyright law, it is likely a meaningful concession, as it potentially builds in more accountability. Anthropic reversed course to reach this deal after initially arguing that the relief sought, preventing harmful outputs ‘in response to future users’ queries,’ was ‘moot.’
Expert Testing and Public Comments
In court filings, publishers noted that expert Ed Newton-Rex, the CEO of Fairly Trained, conducted his own testing on Claude’s current guardrails and surveyed public comments. Allegedly, he found two ‘simple’ jailbreaks allowing him to generate whole or partial lyrics from 46 songs named in the suit.
A Change of Heart for Anthropic
Initially, Anthropic tried to get this evidence tossed, arguing that publishers were trying to ‘shoehorn’ ‘improper’ new evidence into the record. However, publishers dug in, arguing that they needed to ‘correct the record’ regarding Anthropic’s claims about its supposedly effective current guardrails in order to prove there was ‘ongoing harm’ necessitating an injunction.
Lyrics Licensing and the Broader Infringement Claims
Song lyrics may seem freely available anywhere online, but publishers noted in their complaint that lyrics sites pay to license lyrics; Anthropic allegedly never attempted to enter into such an agreement. If the alleged infringement continues, publishers argued, rights holders would be forced to cede control of their content while Anthropic profits without paying licensing fees.
The Stakes for Anthropic
The question of whether AI training is a fair use of copyrighted works remains complex and hotly disputed in court. For Anthropic, the stakes could be high, with a loss potentially triggering more than $75 million in fines, as well as an order possibly forcing Anthropic to reveal and destroy all the copyrighted works in its training data.
The Road Ahead
This suit will likely take months to fully resolve, and it remains to be seen how the court will ultimately rule on fair use and copyright infringement in the context of AI training. The outcome will have significant implications for the future of artificial intelligence development and its interaction with copyrighted works. Stay tuned as Ars Technica continues to track the developments in this important story.
Senior Policy Reporter Ashley Belanger
Ashley Belanger is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.