The Latest Copyright Battle: Major Authors Challenge Six AI Giants Over Model Training Practices
A coalition of prominent writers has launched fresh legal action against Anthropic, Google, OpenAI, Meta, xAI, and Perplexity, asserting that these firms used unauthorized copies of their literary works to develop their language models. The move echoes an earlier class-action lawsuit brought against Anthropic over the same kind of copyright violations.
The case carries particular weight because a court previously ruled that AI companies could lawfully train on the contents of pirated books while holding that the act of piracy itself was unlawful. That distinction created an unusual loophole: acquiring the pirated copies remains illegal, yet the subsequent use of that content for AI training was deemed permissible.
The Settlement Controversy
Under the existing Anthropic settlement, valued at $1.5 billion, affected authors can claim approximately $3,000 per work. This resolution has nonetheless left many creative professionals dissatisfied. Their primary grievance is fundamental: the settlement does not penalize AI firms for the core violation, namely using pirated literary works to build multi-billion-dollar revenue streams.
According to the plaintiffs’ filing, the current settlement framework appears designed to benefit the technology companies rather than compensate creators adequately. The lawsuit explicitly challenges what it characterizes as an insufficient remedy: “LLM companies should not be able to so easily extinguish thousands upon thousands of high-value claims at bargain-basement rates, eliding what should be the true cost of their massive willful infringement.”
Who’s Behind the Push
The legal effort is spearheaded by John Carreyrou, the investigative journalist and author of “Bad Blood” who exposed the Theranos scandal, alongside other accomplished writers. Their position underscores growing tensions between creative industries and AI developers over how intellectual property rights should be protected in the machine-learning era.
This new lawsuit signals that the author community views the previous settlement as insufficient accountability for what they characterize as systematic, knowing misuse of protected works.