Anthropic Defends AI Training Methods, Cites 'Broad Safeguards' Against Music Copyright Infringement

By Marcus Hartley

December 24, 2024 at 11:14 PM

Anthropic has filed a forceful opposition to major music publishers' request for a preliminary injunction over alleged copyright infringement by its AI chatbot Claude. The dispute centers on two main issues: the use of protected works in training data and the generation of song lyrics by Claude.

Key points from Anthropic's opposition:

  1. Fair Use Defense
  • Claims using copyrighted works to train LLMs constitutes fair use
  • Argues training data usage is "transformative" under fair use doctrine
  • States monetary damages would suffice if publishers prevail
  2. Technical Context
  • Claude learns from "trillions of tiny textual data points"
  • Training data likely includes some copyrighted works
  • Training research details predated Claude's commercial release by nearly a year
  3. Protective Measures
  • Implemented "broad array of safeguards" to prevent copyright infringement
  • Claims no reasonable expectation of future violations
  • Disputes ongoing market and licensing harm allegations

Anthropic co-founder Jared Kaplan provided additional support through a declaration detailing Claude's training specifics. The court recently granted in part and denied in part requests to redact portions of the opposition filing, with some details (including compliance costs) remaining confidential.

The case (5:24-cv-03811) remains ongoing, with reports suggesting a significant portion may be dismissed in the near future. This dispute represents a crucial test case for AI training data and copyright law.
