Insights
2026-04-25 · Strategy · 6 min read

A Federal Judge Just Ruled Your AI Conversations Aren't Private. Here's What That Means for Your Business.

By JR Intelligence

The ruling already happened. The precedent is already set. The question is whether you're exposed before it matters to you.

In United States v. Heppner, 25 CR 503, the Southern District of New York delivered the first federal ruling directly addressing whether AI-generated legal documents can be protected by attorney-client privilege. The answer was no — on every count. Judge Jed S. Rakoff, one of the most influential federal judges in the country, didn't just rule against the defendant. He worked through the reasoning in a way that will be cited for years.

If your business uses ChatGPT, Claude, or any consumer-tier AI to touch anything remotely legal — contracts, employment decisions, supplier disputes, financial records — this ruling is directly relevant to you.

The Ruling That Changes the Calculus

Bradley Heppner faced federal securities and wire fraud charges in the Southern District of New York. In preparing his defense, he used free-tier Anthropic Claude to draft legal documents and develop legal strategy. Federal agents seized 31 electronic devices. Among the recovered material: Heppner's AI chat logs, including every prompt he typed and every response Claude gave.

Heppner's lawyers argued those documents should be protected — either by attorney-client privilege or work product doctrine. Judge Rakoff rejected both arguments, and his reasoning is the part worth understanding.

On privilege: attorney-client privilege requires an attorney. Not "someone giving legal advice" — a licensed human professional bound by fiduciary duty, subject to bar discipline, capable of being disbarred. Claude is none of those things. The relationship that creates privilege doesn't exist when one party is a software model. Rakoff explicitly rejected the argument that AI is "just a fancy word processor" — a word processor doesn't give you legal strategy, doesn't respond to the specific facts of your case, and doesn't create the kind of reliance that privilege is designed to protect. The analogy fails.

On confidentiality: Anthropic's privacy policy at the time of Heppner's use disclosed that inputs and outputs could be used for model training and shared with third parties. Heppner had clicked through that agreement to use the product. The court found there was no reasonable expectation of confidentiality when you've consented to that disclosure. You can't claim privilege over something you've already agreed to share.

On work product: even though Heppner eventually shared some of the AI-generated documents with his actual attorney, that didn't retroactively create protection. The documents were created in a non-privileged context first. Sharing them with counsel later doesn't launder the privilege problem.

All three failed. Everything Heppner typed into Claude was discoverable.

Why This Hits SMBs Harder Than Enterprises

Large companies have legal departments. Their general counsel reviews which AI tools employees can use and under what conditions. Enterprise procurement goes through a vendor risk assessment process. The attorney-client privilege question — and its relationship to AI tool selection — is a topic that's been on enterprise legal radar for two years.

SMBs don't have that infrastructure. The person using ChatGPT to review a commercial lease, or Claude to draft an employment separation agreement, or an AI chatbot to think through a supplier dispute is often the owner. Or the operations manager. Or whoever had fifteen minutes and thought the AI could help.

That's the exposure pattern: well-intentioned, pragmatic use of tools that seemed helpful, without awareness that those tools create a legal record with no confidentiality protection.

The adoption data makes the gap more urgent. JPMorgan Chase Institute data shows SMB AI adoption surged from 5.2% to 17.7% over two years — a 3.4x increase. That adoption is accelerating, not plateauing. The AI for Main Street Act just added a 35% federal tax credit on AI spending, which will push adoption further. More tools, more users, more data flowing through consumer-tier platforms — with most of that adoption happening before businesses have thought carefully about what their AI use policies actually allow.

The ruling didn't create the exposure. It confirmed it was there the whole time.

The Three-Part Test Your AI Use Just Failed

Judge Rakoff's reasoning maps to a three-question test that any business can run against its current AI practices. The questions are simple. The implications are not.

1. Is there an attorney-client relationship? Claude is not a licensed attorney. It has no fiduciary duty to you. It cannot be disbarred for bad advice. It is not your lawyer. Any documents it helps you create are not privileged, regardless of how legal the subject matter is. The relationship doesn't exist, so privilege can't attach.

2. Is there a reasonable expectation of confidentiality? Read the privacy policy of whatever consumer AI tool you're using. If it contains any language about using your inputs to improve the model, sharing data with third parties, or retaining conversation history — and most free-tier tools do — you've consented to a disclosure that destroys confidentiality. You clicked agree. That's the record.

3. Is there work product protection? Work product doctrine protects documents prepared in anticipation of litigation by or for an attorney. If you created the document in a consumer AI tool and then handed it to your lawyer, that doesn't fix the original problem. The sequence matters. Documents created outside a privileged context don't become privileged because you later involve counsel.

If any of your current AI use involves legal documents, HR decisions, financial records, or anything you'd want protected in a dispute, this three-part test is worth running against your actual tool stack today.
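The structure of the test is worth noting: privilege requires both an attorney relationship and confidentiality, while work product is a separate path to protection. Failing all three, as in Heppner, leaves nothing. A toy encoding (an illustration of the logic above, not legal advice):

```python
def discovery_protected(attorney_relationship: bool,
                        confidential: bool,
                        work_product: bool) -> bool:
    """Toy model of the three-part test: privilege needs a licensed
    attorney AND a reasonable expectation of confidentiality;
    work product protection is a separate, independent path."""
    privilege = attorney_relationship and confidential
    return privilege or work_product

# Consumer-tier AI use as described in the ruling: all three prongs fail.
print(discovery_protected(attorney_relationship=False,
                          confidential=False,
                          work_product=False))  # prints: False
```

Note that fixing only one prong is not enough for privilege: a real attorney relationship without confidentiality (or vice versa) still fails.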

What "Enterprise-Grade" Actually Means (and What It Costs)

The distinction between consumer-tier and enterprise-tier AI isn't marketing language — it's legally meaningful now.

Enterprise-tier AI products offer:

  • No-training clauses: Your inputs and outputs are not used to improve the model. Your data doesn't leave the context of your organization's account.
  • SOC 2 Type II compliance: Audited security controls with independent verification.
  • Business Associate Agreements (BAA): Required if your work touches any health information, even tangentially.
  • Data residency controls: Clarity on where your data is stored and processed.
  • Admin controls and audit logs: You can see who used what, when, and pull records if you need them.

On pricing, the delta is smaller than most people expect:

  • ChatGPT Team: ~$25/user/month. ChatGPT Enterprise: custom, but available.
  • Claude for Business: ~$25–30/user/month with no-training guarantees.
  • Microsoft Copilot: ~$30/user/month.
  • Google Workspace AI: bundled with existing Workspace plans for most tiers.

For a fifteen-person company, the upgrade from consumer to enterprise tier runs roughly $375–450 per month at the per-seat prices above. A single discovery dispute (document production, attorney time, the litigation process itself) typically costs multiples of that in its first month. The math isn't complicated.
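The per-seat math is simple enough to sketch directly. The prices below are the illustrative figures from the list above, not vendor quotes, and the seat count is the fifteen-person example:

```python
# Illustrative per-seat monthly prices from the list above (not vendor quotes).
PER_SEAT = {
    "ChatGPT Team": 25,
    "Claude for Business": 30,   # top of the quoted $25-30 range
    "Microsoft Copilot": 30,
}
SEATS = 15  # the fifteen-person company from the example

costs = {tool: price * SEATS for tool, price in PER_SEAT.items()}
for tool, total in costs.items():
    print(f"{tool}: ${total}/month for {SEATS} seats")
print(f"Range: ${min(costs.values())}-${max(costs.values())}/month")
# Range: $375-$450/month
```

Swap in your own headcount and the tiers you actually need; the point is that the delta stays in the hundreds per month, not thousands.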

But enterprise tier is necessary, not sufficient. The tool selection solves the confidentiality problem. It doesn't solve the policy problem. You still need to define what goes into the tools and what doesn't.

The Five-Point Legal Hygiene Checklist

This is 2–3 hours of work. The ruling already happened. The precedent is set.

1. Audit your AI tools. Survey your team and find out what they're actually using — not what you've approved, what they're actually running. Shadow AI is the first risk: tools you don't know about are tools you can't govern. The discovery problem starts the moment an employee types sensitive information into an unvetted tool.

2. Upgrade to enterprise tiers for anything that matters. Consumer-tier tools should not touch client data, legal documents, financial records, or HR files. Full stop. The no-training guarantee and the contractual confidentiality commitment are the minimum bar. Identify the tools that need to be upgraded and upgrade them.

3. Write a one-page AI use policy. "Use good judgment" is not a policy. "No client names, no financial figures, no legal strategy, no HR records in consumer AI tools" is a policy. It needs to be specific enough that an employee reading it knows whether a given use case is permitted. One page is sufficient. The existence of a written policy matters — courts and insurers increasingly look for evidence that you were thinking about this.

4. Brief your outside counsel. Your attorney needs to know what AI tools you use and how, so they can structure communications and document handling in a way that protects the privilege they're supposed to provide. If your lawyer doesn't know you're using AI tools in your business operations, they can't account for it.

5. Document your governance. Write down your policy. Document who approved it and when. Log the enterprise tier subscriptions and their no-training terms. The 69% of companies that have no AI governance structure are in a worse position in a dispute than companies that have even a basic written framework. Evidence of governance matters — and right now, most of your competitors have none.
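The governance record in point 5 doesn't need special software; a structured file in your document store is enough. A minimal sketch, where every field name and value is hypothetical and should be replaced with your own details:

```python
import json
from datetime import date

# Hypothetical governance record for point 5: the policy version, who
# approved it and when, the approved enterprise subscriptions with their
# no-training terms, and what is barred from consumer-tier tools.
governance_record = {
    "policy_version": "1.0",
    "approved_by": "Owner",                # hypothetical approver
    "approved_on": str(date(2026, 5, 1)),  # hypothetical date
    "approved_tools": [
        {"tool": "Claude for Business", "tier": "enterprise", "no_training": True},
        {"tool": "ChatGPT Team", "tier": "business", "no_training": True},
    ],
    "prohibited_in_consumer_tools": [
        "client names", "financial figures", "legal strategy", "HR records",
    ],
}

print(json.dumps(governance_record, indent=2))
```

A dated, versioned file like this is exactly the kind of evidence of governance the checklist calls for: it shows who approved what, when, and under which contractual terms.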

The Heppner ruling is the first major federal precedent at this intersection, and it came from one of the most consequential federal courts in the country with one of the most respected judges on the bench. It will be cited in subsequent cases. The gap between AI adoption and AI legal hygiene was already real before this ruling. Now there's a federal court opinion on the record that confirms what that gap costs.


JR Intelligence works with SMBs on AI implementation, governance, and risk management — including the policy frameworks that protect the work you're doing with AI. If you want to close the legal hygiene gap in your business, let's talk.

AI Legal Risk · Attorney-Client Privilege · Data Privacy · Compliance · SMB Operations
