Anthropic Argues Copying Protected Works for AI Training Can Qualify as Fair Use
Anthropic has publicly stated that copying protected works as an intermediate step to create non-infringing AI outputs can qualify as fair use, amid ongoing legal challenges from the music industry.
In response to a Copyright Office inquiry, Anthropic claims its AI assistant Claude was trained through "quintessentially lawful use of materials." The company argues that copying copyrighted content during AI training serves only as an intermediate step for statistical analysis, not for expressive purposes.
Key points from Anthropic's position:
- Training process involves analyzing statistical relationships between words and concepts
- Copying is transformative and unrelated to the original work's expressive purpose
- The process generates new outputs without reproducing copyrighted expression
However, this stance faces strong opposition. ASCAP argues that unauthorized AI training on copyrighted works cannot constitute fair use, stating:
- Such use is not transformative
- Each unauthorized use serves commercial purposes
- Copyright holders' consent is required
The debate occurs amid broader developments in AI regulation:
- EU negotiations on the AI Act included calls for mandatory AI training disclosures
- Multiple lawsuits target AI companies for alleged copyright infringement
- Universal Music, Concord, and ABKCO are suing Anthropic for "systematic and widespread" infringement
The outcome of these legal challenges could significantly impact how AI companies train their models and interact with copyrighted content in the future.