Judge Cracks Open OpenAI's Legal Playbook in Copyright Case

CommsToday - News Team
Published December 1, 2025

A federal judge orders OpenAI to disclose its legal communications about deleting controversial training datasets in a high-stakes copyright lawsuit.

Key Points

  • The order requires OpenAI to disclose its legal communications about the deletion of controversial training datasets in a high-stakes copyright lawsuit brought by authors.
  • The ruling came after OpenAI put its own motives at issue by claiming a "good faith" defense, thereby waiving its attorney-client privilege.
  • The decision could expose evidence of willful infringement, opening OpenAI to enhanced damages of up to $150,000 per copyrighted work.
  • This ruling sets a precedent for the AI industry, signaling that internal discussions on data sourcing may no longer be shielded from legal scrutiny.

A federal judge has ordered OpenAI to turn over privileged legal communications about its deletion of controversial training datasets, a major setback for the company in a high-stakes copyright lawsuit filed by authors, as reported by Bloomberg Law. The ruling could expose direct evidence of willful infringement in the training of the models behind ChatGPT.

  • The good faith gambit: OpenAI’s defense has hinged on a "fair use" argument, but its claim of acting in good faith backfired. The judge ruled that by putting its own motives "at issue" in the copyright dispute, the company waived the very privilege that would have kept its legal advice private.

  • A $150,000 question: The judge's order noted that the now-discoverable communications are "likely probative of willfulness." That finding is critical, as it could open the floodgates to enhanced damages of up to $150,000 per infringed work if the authors prevail.

  • An uncomfortable precedent: The decision lands as the AI industry is still absorbing the costly lesson from a rival, Anthropic, which reportedly paid a $1.5 billion settlement to end a similar copyright fight. The order sets a new standard for transparency in an industry that has preferred to keep its data practices in the dark.

AI developers can no longer assume that internal discussions about how they source and handle training data are safely shielded from legal scrutiny. This ruling forces a level of accountability that could reshape data governance practices across the entire industry.