Anthropic Commits to Copyright Protection Guardrails Following Music Publishers' Agreement
Anthropic has agreed to maintain copyright protection measures in its AI systems as part of an agreement with music publishers who filed a preliminary injunction motion in California's Northern District.
The agreement comes after eight music publishers sued Anthropic in October 2023, seeking to prevent copyright infringement of their works. While Anthropic maintains that its training methods constitute "fair use," the company has agreed to specific protective measures.
Key Points of the Agreement:
- Anthropic will maintain existing content filters on user queries
- New AI models must include similar copyright protections
- Publishers can notify Anthropic if they detect copyright violations
- Anthropic must investigate and respond to publisher complaints promptly
- The company can improve guardrails as long as effectiveness isn't reduced
The agreement states: "Anthropic will maintain its already implemented Guardrails in its current AI models and product offerings." For future models, they must apply "Guardrails on text input and output in a manner consistent with its already-implemented Guardrails."
If publishers identify potential violations, Anthropic must provide a detailed written response outlining how it will address the issue, or clearly state that it will not take action.
The agreement does not constitute an admission of liability or wrongdoing by any party. The publishers' underlying claim, that Anthropic used song lyrics without authorization to train its AI models, remains unresolved and pending further legal proceedings.