Public AI Exposure Waives Privilege, Confidentiality, and Privacy, U.S. Judge Rules

Per a recent decision by a New York federal judge, when you submit information to a public AI platform like ChatGPT, Claude, or Gemini (i.e., the free versions), you’re disclosing that information to a “third party.” Treating AI platform submissions as third-party disclosures means that attorney-client privilege is waived the moment the information is submitted, and that confidentiality and privacy obligations are immediately implicated.
For law firms and businesses, the issue isn’t whether GenAI is useful. It’s whether the organization can control where sensitive information goes, who can access it, and whether it can be compelled in discovery. Public tools typically don’t give you that control, so the default should be: no sensitive data in public LLMs.
In February 2026, U.S. District Judge Jed S. Rakoff confirmed the point in U.S. v. Heppner, No. 1:25-cr-00503 (S.D.N.Y., Feb. 18, 2026): sharing confidential information with a public GenAI tool—Claude, in that case—waives attorney-client privilege and work-product protection. The opinion matters less for its novelty than for how plainly it states what many policies already assume: disclosure to a public LLM is disclosure to a third party.
Organizations are already reacting because the failure modes are common and costly. One well-known example: Samsung banned the use of public AI platforms after an engineer uploaded proprietary source code to one of them. Apple, Amazon, JPMorgan Chase, Verizon, Spotify, and many law firms have imposed similar bans.
Yet despite such policies, many corporate and law firm employees regularly use public AI platforms to do their jobs, including translating documents, summarizing correspondence, and drafting presentations. The interfaces feel private and protected. But the data path isn’t.
In this article, we explain why public AI platforms create privilege, confidentiality, and privacy problems at scale, and offer practical guidance for reducing the risk of waiver and disclosure.
Public AI Platforms and the Waiver of Attorney-Client Privilege
Privilege survives disclosure to a third party only when that party has an ethical, legal, or professional obligation to maintain confidentiality. When a user shares privileged content with a public GenAI platform, they have voluntarily disclosed it to a third party not bound by any such obligation. That is the classic waiver fact pattern, only faster and easier to trigger by accident, and it applies squarely to public AI platforms. The risk is highest when users paste drafts of legal advice, investigation notes, interview summaries, chronologies, litigation strategy, or annotated documents into a public model.
The downstream problem is discovery. Even if the prompt itself never surfaces, the outputs can become case materials, creating new documents that a regulator or opposing party will seek—and may be entitled to.
In Heppner, the defendant Bradley Heppner, knowing he was under federal investigation, used the public version of Anthropic’s Claude to prepare case documents and then sent those AI-generated files to his lawyers. When the FBI raided his home and seized the documents, his lawyers asserted attorney-client privilege and work-product protection. The government challenged those assertions, and Judge Rakoff granted its motion.
His ruling is unequivocal. Communications with public-facing GenAI platforms don’t enjoy attorney-client privilege because (1) they aren’t between a lawyer and their client; (2) they aren’t confidential because the platforms are third parties (and aren’t bound by confidentiality obligations); and (3) users like Heppner aren’t seeking legal advice from the GenAI (which is neither a lawyer nor their agent). In short: disclosure waives privilege.
Private enterprise versions of AI platforms can change the risk profile, depending on the contract and the system design. If the enterprise tool doesn’t train the underlying LLM on user data and has enforceable confidentiality commitments, the disclosure analysis will look different from that of a public tool. These environments may better support emerging frameworks around generative AI privilege, where organizations can demonstrate controlled use and confidentiality protections.
The primary takeaway for lawyers and their clients is not to forgo AI, but to limit its use to private AI instances governed by terms that protect confidentiality and prohibit third-party disclosure.
Confidentiality and Privacy Risks of Public AI Platforms
For businesses, the confidentiality risk isn’t limited to privilege. Trade secrets and proprietary data can also lose protection if the company can’t show reasonable measures to keep them secret. Client data, pricing, roadmaps, source code, security details, and M&A materials are among the obvious danger zones.
Contractual duties add another layer. Many MSAs, NDAs, DPAs, and outside-counsel guidelines prohibit disclosure to third parties without consent. A public LLM is a third party. One “helpful” prompt by an employee can create a breach.
Law firms face the same disclosure risk, plus professional obligations. If lawyers or staff share client information with public GenAI platforms, they risk violating confidentiality duties and client instructions. They also risk supervision failures if the firm has no training, no controls, and no clear policy on what’s permitted.
When using public LLMs, users have no reasonable expectation of privacy. Entering confidential business information—trade secrets, proprietary data, or client materials—into these platforms is reckless. Once sensitive information leaves your control, you may not be able to retrieve it or prove it stayed contained. That’s why many organizations restrict public AI tools for anything sensitive.
Keep Privileged, Confidential, and Private Information Out of Public AI Tools
The Heppner decision is a reminder with enterprise-wide consequences. Sharing sensitive information with public GenAI platforms can waive attorney-client privilege and work-product protection, and can breach confidentiality and data privacy duties. As public GenAI tools become more capable and more embedded in daily work, organizations must train their employees, establish clear policies, and implement technological safeguards to prevent sensitive information from being disclosed.
If you can’t answer three questions—where does the data go, who can access it, and how long is it kept—don’t prompt with sensitive content.
The Heppner ruling sets a clear precedent for any legal team navigating AI in the workplace. Whether you're evaluating your current policies or building governance practices from the ground up, TransPerfect Legal's Consulting and Information Governance team can help. We work with organizations to assess their risk exposure, strengthen internal policies, and build a defensible framework for AI use—so legal teams can work smarter without compromising privilege, confidentiality, or compliance. Get in touch with our team today to learn more.