Your AI Chatbot Just Became a Federal Witness
A federal judge in New York ruled this week that documents a criminal defendant created using an AI chatbot were not protected by attorney-client privilege and could be used against him at trial. The defendant used Claude, an AI tool made by Anthropic, to research his legal situation while under federal investigation. He later shared those documents with his attorneys and claimed they were privileged. The court disagreed. The opinion in United States v. Heppner, No. 25-cr-503-JSR (S.D.N.Y. Feb. 17, 2026), is the first federal ruling of its kind.
If you or your organization use public AI tools to research legal matters, analyze regulatory risk, or prepare for potential litigation, this ruling is directly relevant to you.
Background
In October 2025, Bradley Heppner was charged with federal securities fraud and related offenses in connection with an alleged $300 million scheme. Prior to his arrest, and while aware he was a target of a federal investigation, Heppner used Claude to research the government's potential case and outline possible defenses. The government seized his devices pursuant to a search warrant and found 31 documents reflecting his AI-generated research and analysis.
Heppner's attorneys argued those documents were privileged. Judge Jed Rakoff of the Southern District of New York disagreed, ruling that the documents were not privileged and that the government may use them at trial.
Why the Court Rejected the Privilege Claims
AI Platforms Are Not Attorneys
Attorney-client privilege protects confidential communications between a client and a licensed attorney. Communications with an AI platform do not qualify. The court was direct: Claude is not an attorney, and no amount of sophisticated output changes that. A conversation with an AI tool, however detailed or legally focused, does not create the attorney-client relationship that privilege requires.
Public AI Platforms Do Not Preserve Confidentiality
Privilege also requires that a communication be kept confidential. When a user accesses a publicly available AI platform, that user agrees to the platform's privacy policy. Anthropic's policy, which was in effect at the time of Heppner's searches, states that Anthropic collects user inputs and outputs, uses that data to improve its products, and may share it with governmental regulatory authorities and third parties without requiring a subpoena. The court found that Heppner had no reasonable expectation that his conversations with AI would remain private. By using the tool, he consented to the potential for disclosure.
Transmitting AI-Generated Documents to Counsel Does Not Create Privilege Retroactively
A document that is not privileged at the time of its creation does not become privileged simply because it is later shared with an attorney. The court reaffirmed this well-settled principle: privilege attaches to confidential communications made for the purpose of obtaining legal advice, not to preexisting materials transmitted to counsel after the fact. The underlying AI-generated documents remained discoverable.
The Work Product Doctrine Requires Attorney Direction
The work product doctrine provides a separate layer of protection for materials prepared in anticipation of litigation, but only when those materials are prepared by or at the direction of counsel. Heppner's own attorneys confirmed they had not directed him to use Claude. Because the research was not directed by counsel, the work product doctrine did not apply.
Key Considerations for Organizations and Individuals
Before using any public AI tool in connection with a legal matter, regulatory inquiry, notice of claim, or potential dispute, the following questions merit attention:
- Have you reviewed the privacy policy for the AI tool in question? Does it permit the platform to share user inputs with governmental authorities or third parties?
- Are you using a publicly accessible consumer platform, or a closed enterprise system with contractual confidentiality protections?
- Is the AI being used at the direction of your attorney, or independently? Without attorney direction, work product protection is unlikely to apply.
- Would you be comfortable if opposing counsel or government attorneys reviewed everything you have typed into this AI tool?
- Has your organization established a policy governing AI use in connection with legal or regulatory matters?
Scope of the Ruling
This ruling does not prohibit the use of AI tools in legal matters. The court was careful to note that the outcome might have been different had defense counsel directed Heppner's use of the AI platform as part of a structured legal strategy. AI tools used under attorney supervision, with proper documentation and on platforms with appropriate confidentiality protections, present a different legal question than the unsupervised, undirected conversations at issue in Heppner.
Questions involving enterprise AI platforms operating under strict contractual confidentiality agreements likely warrant a different analysis than the publicly available consumer platforms addressed in this ruling. Organizations integrating AI into legal or compliance workflows should engage outside counsel to evaluate the privilege implications before deployment.
Conclusion
Public AI platforms are not confidential, and they are not a substitute for legal counsel. If you are facing a government inquiry, anticipate litigation, or are dealing with any matter that carries legal risk, consult an attorney before turning to an AI tool for research or analysis. The record of what you type may ultimately be more valuable to an adversary than to you.