Artificial intelligence chatbots have become a routine part of professional life. Executives use them to think through strategic problems. Inventors use them to organize technical ideas. Attorneys use them to research legal questions and draft documents. And increasingly, all of these users are having conversations with AI tools that touch on legally sensitive matters — litigation exposure, regulatory risk, patent strategy, confidential business information.

What most of those users do not know is that a recent federal court decision has raised serious questions about whether those conversations are protected by attorney-client privilege. The law in this area is unsettled, the stakes are high, and the answers depend on facts and circumstances that most users have never thought to examine.

What Happened in Heppner

In February 2026, Judge Jed Rakoff of the United States District Court for the Southern District of New York issued the first federal court ruling to address whether conversations with a publicly available AI platform can be protected by attorney-client privilege or the work product doctrine. The case is United States v. Heppner, No. 25-cr-503 (S.D.N.Y. Feb. 17, 2026).

The facts are instructive. Bradley Heppner, a financial executive facing a federal criminal investigation, used the consumer version of Claude® — Anthropic’s AI assistant — to generate documents outlining potential defense strategies. He later shared those documents with his defense attorneys. When federal agents seized his devices, his lawyers asserted privilege. Judge Rakoff ruled that the documents were protected by neither attorney-client privilege nor the work product doctrine.

The ruling rests on three independent grounds. First, an AI platform is not a lawyer — all recognized privileges require a relationship with a licensed professional who owes fiduciary duties. Second, Heppner had not used Claude® at the direction of his counsel; he acted entirely on his own initiative, and sharing the resulting documents with his lawyers afterward did not retroactively create privilege. Third — and most consequentially for the broader AI-using public — the court found that Heppner had no reasonable expectation of confidentiality in his AI conversations, based on the terms of service governing his use of the platform.

Heppner is best understood as a ruling about a specific set of circumstances, not a sweeping declaration that AI conversations can never be privileged. But its reasoning raises questions that extend well beyond the facts of that case.

The Companion Decision: Warner v. Gilbarco

One week before the written opinion in Heppner issued, a federal magistrate judge in the Eastern District of Michigan reached the opposite conclusion on different facts in Warner v. Gilbarco, Inc. (E.D. Mich., Feb. 10, 2026). In that civil case, a pro se plaintiff used generative AI tools to prepare materials related to her employment discrimination lawsuit. The defendants moved to compel production of all documents reflecting her AI tool usage, arguing that disclosing information to an AI platform waived any protection.

The court denied the motion. In the court’s analysis, AI platforms are “tools, not persons” — disclosure to an AI tool is not disclosure to an adversary, and work product protection is not waived by using software to process your own mental impressions. The Gilbarco decision illustrates the other side of the coin: where the facts support the elements of protection, using an AI tool does not automatically disturb it.

Despite their opposite outcomes, Heppner and Gilbarco are consistent. Neither decision changed the underlying law. Both applied established principles to new technology and reached different results on different facts. The critical variables are the same ones that have always mattered: whether a qualifying relationship existed, whether confidentiality was maintained, and whether attorney direction was present.

Why the Confidentiality Question Is Harder Than It Looks

Attorney-client privilege requires, among other things, that the communication be made in confidence — with a reasonable expectation that it will not be disclosed to third parties. Judge Rakoff found that Heppner lacked that expectation because the terms of service governing his AI platform reserved rights to use his conversations for model training and to disclose them to governmental authorities.

That reasoning, taken at face value, would seem to doom AI conversations generally. But there is a significant problem with applying it broadly: it proves too much.

Gmail®, Google’s email service, expressly states in its terms that its automated systems analyze your content for product features and improvements. Gmail® now integrates Gemini® AI, which processes inbox content to generate summaries and draft responses. Google reserves the right to report violations to appropriate authorities. And yet no court has ever held that using Gmail® destroys attorney-client privilege. The ABA and state bar ethics authorities have consistently concluded that cloud-based email is compatible with privilege, provided attorneys take reasonable precautions. The legal profession has operated on that assumption for decades.

The same observation applies to a wide range of services that receive confidential data, process it computationally, and return results over the internet: legal research platforms, e-discovery systems, tax preparation software, clinical decision-support tools, financial analytics platforms. None of these services has been held to destroy privilege despite operating under terms that reserve comparable data-use and legal-process-compliance rights.

What, then, actually distinguishes AI chatbots from these accepted services? The answer is not obvious — and that lack of an obvious answer is precisely why Heppner’s confidentiality reasoning is likely to face challenge as the law develops.

Two Issues That Must Not Be Conflated

A careful analysis separates two distinct features of AI terms of service that bear on confidentiality, features that are too often treated as a single undifferentiated problem.

The first is model training use — the practice of feeding user conversations into AI improvement pipelines for the provider’s commercial benefit. This practice goes beyond what standard cloud and email providers do, and it is the feature most legitimately distinguishable from ordinary professional services. Most AI platforms now allow users to opt out of this use, and that opt-out is a meaningful step that deserves more credit than Heppner’s framing suggests. A user who has opted out has removed the most commercially novel feature of AI terms — what remains is a legal-process compliance clause that courts have never treated as privilege-destroying in any other context.

The second is legal process compliance — the obligation every digital service provider carries to respond to valid subpoenas, warrants, and court orders. This feature is universal. It appears in Gmail®’s terms, Microsoft®’s OneDrive® terms, Dropbox®’s terms, and the terms of virtually every law firm’s own document management system. Courts have never held that this feature destroys privilege, and there is no principled reason to treat AI platforms differently.

Conflating these two issues leads to a distorted and overly pessimistic picture of AI users’ privilege exposure — particularly for users who have taken affirmative steps to manage the first issue.

Privilege Has Never Depended on Wealth or Organizational Size

One response to Heppner that has emerged in some legal commentary is essentially: upgrade to an enterprise AI subscription, and the problem goes away. Enterprise and commercial AI plans — such as Claude® for Work, ChatGPT® Enterprise, or comparable offerings from Microsoft® and Google® — do provide stronger contractual confidentiality protections: express confidentiality obligations, absolute prohibitions on training use of customer content, and government-disclosure frameworks requiring valid legal process with notice obligations. Those protections are real, and they matter.

But treating enterprise subscriptions as the only path to privilege protection raises a fundamental doctrinal concern. Privilege doctrine has never conditioned protection on the cost of the communication medium or the size of the user’s organization. A solo inventor who discusses patent strategy with a solo practitioner attorney over free Gmail® has the same privilege protection as a Fortune 500 company with enterprise infrastructure. The privilege attaches to the relationship and the communication — not the billing tier.

A framework that grants privilege only to users who can afford multi-seat business subscriptions is inconsistent with how privilege has ever worked. Courts asked to apply Heppner’s reasoning broadly will confront this concern directly, and the more likely trajectory is a totality-of-the-circumstances approach that asks whether the user’s overall conduct created a reasonable expectation of confidentiality, not whether the user subscribed to the right billing plan.

What This Means for You

The practical implications of Heppner vary by user, but the common thread is that AI conversations involving legally sensitive matters require careful thought — about which platform you are using, under what terms, in what context, and with what relationship to counsel.

The variables that matter include the terms of service governing your platform, the steps you have or have not taken to limit data use, whether an attorney directed or supervised the AI-assisted work, the nature of the information involved, and what the AI-generated materials were intended to accomplish. These are fact-specific inquiries, and the answers will differ from user to user.

What is clear is that uninformed use — treating AI conversations as automatically private simply because they feel private — carries real risk. And equally clear is that the risk is manageable, through a combination of informed platform choices, appropriate account settings, and thoughtful integration of AI tools into attorney-supervised legal workflows.

For inventors, the stakes extend beyond privilege. Inputting unpublished invention details into AI platforms raises questions not only about evidentiary protection but about prior art, trade secret status, and patent eligibility that deserve careful analysis before those conversations happen — not after. The third article in this series addresses those patent-specific risks in detail.

The Law Is Still Being Written

Heppner is a single district court decision on a question the court itself described as one of “first impression nationwide.” The Harvard Law Review, commenting on the decision in March 2026, noted that the opinion “veers toward categorically excluding a client’s use of generative AI from attorney-client privilege” and that “a more fact-dependent analysis — and careful consideration of the role of AI within the attorney-client relationship — would suggest that such use should at least sometimes qualify for privilege.” That critique reflects the direction in which the law is most likely to develop.

The trajectory suggests that courts will adopt a nuanced, circumstances-based approach rather than a categorical rule, and that AI-assisted legal work conducted thoughtfully — with attention to platform terms, opt-out settings, attorney direction, and the nature of the content involved — will ultimately be treated comparably to other cloud-based professional tools. But that outcome is not guaranteed, the path there will involve further litigation, and the users best positioned to navigate it are those who understand the issues now.

Getting Ahead of the Problem

The questions raised by Heppner are ones that every professional who uses AI tools for legally sensitive work should be discussing with qualified counsel — ideally before a privilege dispute arises, not after. The analysis is fact-specific, the law is developing, and the right answer for a solo inventor in a patent prosecution context may differ from the right answer for an in-house legal team managing litigation.

John Ogilvie of Ogilvie Law Firm advises clients on intellectual property matters and the legal risks associated with emerging technology, including AI tool usage in sensitive professional contexts. If you have questions about how Heppner and the developing law of AI privilege apply to your situation, use the Contact form to get in touch.