Is Claude Safe for Enterprise Use? A GRC Practitioner's Breakdown
Most organizations using Claude are on the wrong plan for the work they're doing with it. Here's what the data policies actually say, what your employees are probably doing right now, and what a real vendor risk assessment looks like.
Someone at your company is already using Claude. Probably several people. Some of them are using it on a Free or Pro account they set up with a personal email address. They're drafting customer-facing content, summarizing contracts, asking it questions about internal processes. None of this has gone through your vendor risk program. None of it has been reviewed by legal.
This is not a hypothetical. It's the default state of most organizations in 2026, and the question isn't whether your people are using Claude, it's whether you have any meaningful visibility or control over how they're using it.
This post is about what the data policies actually say, why the plan your employees are using matters a great deal, and what a competent vendor risk assessment of Anthropic looks like. I'm writing this from a GRC perspective, not a marketing one. Some of this will be uncomfortable for Anthropic fans.
The most important thing to understand first
There are two fundamentally different versions of Claude from a compliance standpoint. They happen to share a name, a UI, and a model. But the data handling is night-and-day different.
- Version one is the consumer product: Free, Pro, and Max plans. These operate under Anthropic's Consumer Terms of Service, which gives Anthropic rights to use your conversations to train future models, unless you actively opt out. As of October 2025, making that choice became mandatory: Anthropic introduced a training toggle and required every user to pick a setting. If you didn't notice the notification, your data was in. The opt-in extends data retention from 30 days to five years. That's not a typo: it's roughly a 60x increase in how long your conversations sit in Anthropic's systems.
- Version two is the commercial product: Team plans, Enterprise plans, and direct API access. Under commercial terms, Anthropic does not train on your data by default. No toggle required. This protection is contractual, not a setting you can accidentally turn back on.
The gap between these two versions is where most organizations are currently exposed.
What each plan actually gives you
- Free: Consumer terms. Training-eligible by default until you opt out. 30-day retention if opted out, up to 5 years if opted in. No data processing agreement. Not HIPAA-eligible. Not appropriate for anything sensitive.
- Pro ($20/month) and Max ($100–200/month): Still consumer terms. Training-eligible by default. Same retention policies as Free. The higher price does not buy you better data handling. This is where a lot of power users land — and it's where a lot of accidental data exposure happens, because "I pay for it" feels like it ought to mean something from a compliance standpoint. It doesn't.
- Team ($25/seat/month): Commercial terms kick in here. No training on your data by default. SSO available. Basic admin controls. Deleted data is purged from Anthropic's systems within 30 days. This is the floor for any business use involving anything you'd care about — client data, internal strategy, anything regulated.
- Enterprise (custom pricing): Full commercial protections, plus: custom data retention controls (minimum 30 days, configurable from there), audit logging, SCIM provisioning, role-based access controls, and the ability to configure a HIPAA-ready deployment with a Business Associate Agreement. Zero Data Retention is available as an add-on, subject to Anthropic approval. Under ZDR, inputs and outputs are not stored at all beyond what's needed for real-time abuse screening.
- API: Arguably the most privacy-protective option that doesn't require an Enterprise contract. As of September 2025, Anthropic reduced standard API log retention to 7 days. API data is never used for training. If you need longer retention for your own audit purposes, you can opt into 30 days via your Data Processing Addendum, but 7 days is the default. For organizations building Claude into internal tools or workflows, the API with a DPA is often the right answer.
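If you want the plan differences above captured somewhere more durable than a wiki page, one option is to encode them as a simple policy-as-code check your tooling and training materials can both point to. The sketch below is illustrative only: the plan attributes reflect my reading of the policies summarized above as of this writing, and the classification labels are placeholders you'd swap for your own data classification scheme.

```python
# Illustrative only: plan attributes reflect the policy summary above and may change.
# Classification labels ("public", "internal", "confidential", "regulated") are
# placeholders for whatever your own data classification policy actually uses.

PLAN_POSTURE = {
    "free":       {"commercial_terms": False, "trains_by_default": True,  "dpa": False, "baa_available": False},
    "pro":        {"commercial_terms": False, "trains_by_default": True,  "dpa": False, "baa_available": False},
    "max":        {"commercial_terms": False, "trains_by_default": True,  "dpa": False, "baa_available": False},
    "team":       {"commercial_terms": True,  "trains_by_default": False, "dpa": True,  "baa_available": False},
    "enterprise": {"commercial_terms": True,  "trains_by_default": False, "dpa": True,  "baa_available": True},
    "api":        {"commercial_terms": True,  "trains_by_default": False, "dpa": True,  "baa_available": False},
}

def plan_allowed_for(plan: str, classification: str) -> bool:
    """Return True if a Claude plan is acceptable for a given data classification.

    Encodes one defensible reading of the floor described in this post:
    consumer plans only for public data, commercial terms plus a DPA for
    internal or confidential data, and a BAA-capable plan for regulated
    data such as PHI.
    """
    posture = PLAN_POSTURE[plan]
    if classification == "public":
        return True
    if classification in ("internal", "confidential"):
        return posture["commercial_terms"] and posture["dpa"]
    if classification == "regulated":
        return posture["baa_available"]
    raise ValueError(f"unknown classification: {classification}")
```

A check like this belongs alongside your acceptable use policy, not in place of it; the point is that the decision gets written down once and applied the same way every time someone asks.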
Incognito mode: what it does and doesn't do
Claude has an incognito mode, accessible from the chat interface via a ghost icon. Incognito conversations are excluded from model training regardless of your other settings, and they're not saved to your chat history. They are still retained on Anthropic's backend for 30 days for safety purposes — so "incognito" doesn't mean what Chrome's incognito mode means. It's better described as "not saved to your history and not used for training," which is meaningfully useful but is not data erasure.
One nuance worth knowing for Enterprise customers: incognito chats are included in organizational data exports available to account Owners, and they appear in the Compliance API. On Team or Enterprise plans, incognito doesn't mean invisible to your organization — it means invisible to the user in their history.
The shadow AI problem
Even if your organization has licensed Claude for Work and configured everything correctly, individual employees can and do access claude.ai on personal accounts. They do this because the consumer product is free, because they don't know there's a difference, or because they want to use features or models that aren't available under your enterprise configuration. Every one of those sessions is operating under consumer terms. Your vendor contract with Anthropic covers nothing that happens outside your organizational account.
This is the same problem security teams have always had with shadow IT, now applied to a tool that employees are actively typing sensitive information into. The fix isn't technical; it's policy and awareness, like it's always been. Your acceptable use policy needs to explicitly address which Claude products are approved for which categories of data. Your employees need to know that "the free version" and "the company version" are not the same thing.
If you want to detect the scope of this problem, start with three signals. First, your IdP logs: any claude.ai authentication that didn't go through your SSO is a personal account. Second, your proxy or secure web gateway: query traffic to claude.ai and api.anthropic.com by user to get a volume picture. Third, your OAuth audit in Google Workspace or Microsoft 365: look for personal Claude accounts that have been granted access to corporate data sources. In most organizations, what comes back is more than anyone expected.
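For the second signal, a rough pass over a proxy or secure web gateway export is usually enough to get the volume picture and a first cut at who is outside your organizational account. This is a minimal sketch, assuming a CSV export with user, timestamp, and host columns and a separate list of users provisioned into your Claude for Work organization; both file formats and field names are assumptions about your environment, not a standard.

```python
import csv
from collections import Counter

# Assumptions: proxy_export.csv has "user", "timestamp", "host" columns;
# claude_org_users.txt lists one username per line for accounts provisioned
# into your organizational Claude account. Adapt field names to your gateway.

CLAUDE_HOSTS = {"claude.ai", "api.anthropic.com"}

with open("claude_org_users.txt") as f:
    org_users = {line.strip() for line in f if line.strip()}

hits = Counter()
with open("proxy_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        host = row["host"].lower()
        if any(host == h or host.endswith("." + h) for h in CLAUDE_HOSTS):
            hits[row["user"]] += 1

# Users with Claude traffic who are not provisioned into the organizational
# account are candidates for personal-account (consumer-terms) usage.
for user, count in hits.most_common():
    flag = "" if user in org_users else "  <-- not in org account"
    print(f"{user}: {count} requests{flag}")
```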
What Anthropic's certifications actually cover
Anthropic achieved SOC 2 Type II attestation in early 2026, along with ISO 27001:2022 and ISO/IEC 42001:2023 certifications. These are real, third-party-audited credentials, and they matter for regulated industries that require them in vendor agreements. The SOC 2 Type II report is available under NDA through Anthropic's Trust Portal.
Those credentials do not mean Claude is appropriate for every workload, on every plan, under every configuration. A SOC 2 Type II examination covers the security controls around Anthropic's infrastructure. It does not audit what data you're putting into the system, whether your employees are using the right plan, or whether you've done your own access control and logging work on top of Anthropic's baseline.
Think of it this way: a SOC 2 Type II report tells you that the building has good locks. It says nothing about who you've given keys to, or what you're storing inside.
The ISO 27001 certification covers Anthropic's information security management system. The ISO/IEC 42001 certification is specific to AI management systems; it's the newer standard for organizations building or deploying AI, and Anthropic is among the earlier adopters of it. For GRC practitioners evaluating Anthropic as a vendor, this one is worth understanding: 42001 is increasingly what regulators and enterprise procurement teams are going to ask about as AI governance frameworks mature.
HIPAA is worth addressing separately because it trips up a lot of people. Anthropic offers HIPAA-eligible service configurations for Enterprise customers, including a Business Associate Agreement. But HIPAA eligibility is not the same as HIPAA compliance. To process Protected Health Information with Claude legally, you need a signed BAA, ZDR or equivalent controls enabled, audit logging active, mandatory human review of any AI-assisted outputs touching PHI, and your own organizational controls documented and operational. Consumer Claude (any Free, Pro, or Max account) is categorically not HIPAA-eligible. No BAA exists for those plans, full stop. If anyone in your organization is running PHI through consumer Claude, that's a reportable incident waiting to happen.
FedRAMP authorization is in progress as of early 2026 but not yet granted. If you're in a federal context or supporting federal customers, that's a hard blocker until it comes through.
GDPR: contested ground
For EU-based users and organizations with European operations, the GDPR picture is more complicated than Anthropic's marketing suggests.
Anthropic supports GDPR compliance for commercial customers through a Data Processing Addendum, and they use EU Standard Contractual Clauses for transfers outside Europe. Enterprise customers can configure data residency to EU regions via AWS or Google Vertex AI.
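For the AWS path, region pinning happens at the client level. The snippet below is a minimal sketch, assuming your account has access to a Claude model through Amazon Bedrock in an EU region; the model ID is a placeholder, and region pinning alone doesn't settle residency — check which Claude models Bedrock offers in your chosen region and what your agreement says about where inference and logging actually occur.

```python
import boto3

# Sketch only: assumes access to a Claude model via Amazon Bedrock in an EU region.
# The model ID is a placeholder -- substitute whichever Claude model your agreement
# and chosen region actually provide.
bedrock = boto3.client("bedrock-runtime", region_name="eu-central-1")

response = bedrock.converse(
    modelId="<your-eu-claude-model-id>",
    messages=[{"role": "user", "content": [{"text": "Summarize this clause for a non-lawyer."}]}],
)

print(response["output"]["message"]["content"][0]["text"])
```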
The contested part is the September 2025 consumer opt-in rollout. Privacy advocates and several legal analysts called the UI design — a large "Accept" button with the training toggle pre-enabled — a dark pattern potentially in violation of GDPR's requirements for freely given, unambiguous consent. As of now, no regulatory action has been taken. That could change. If your organization has European employees using consumer Claude and opted in to training before the deadline, the legal exposure is at minimum ambiguous.
The safe position for any organization with GDPR obligations is: commercial terms only, DPA in place, data residency configured for EU regions if required.
What a vendor risk assessment of Anthropic should look like
This is the part that most vendor risk frameworks miss because they weren't designed with AI providers in mind.
Standard third-party risk questionnaires will capture the basics: SOC 2 status, encryption standards, incident response procedures, subprocessor lists. Anthropic will pass those. What standard questionnaires won't capture are the AI-specific risks that matter for your organization. Here's what to add.
Data training posture. Which plan is your organization licensed for? Is training disabled contractually or only by configuration? Who administers the privacy settings, and are those settings auditable? For consumer plans in use: has the training toggle been confirmed off for all users?
Scope of data access. What categories of data are employees actually submitting to Claude? Do you have a data classification policy that maps to Claude usage? Are employees aware which data categories are prohibited from AI tools?
Shadow usage. What mechanisms do you have to detect Claude usage outside your organizational account? Have you reviewed identity logs, network traffic, or endpoint telemetry for personal-account claude.ai access?
Output handling. What controls govern how Claude's outputs are used? For regulated outputs (anything touching legal conclusions, financial recommendations, or medical information), what review process exists before those outputs are acted on?
Subprocessor exposure. Anthropic uses third-party subprocessors for some support functions. The subprocessor list is available through the Trust Portal and is updated periodically. Review it the same way you'd review any cloud vendor's subprocessor list, with particular attention to any that touch conversation data.
Model updates. Anthropic updates Claude models regularly. Unlike traditional software, model updates can change behavior in ways that aren't captured in release notes. If you've built workflows that depend on specific Claude behavior, what's your process for detecting and validating behavioral drift after a model update?
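One lightweight process is a fixed regression set: a handful of representative prompts from your real workflows, each paired with assertions about what an acceptable answer must contain, re-run whenever the model changes. The sketch below uses Anthropic's Python SDK; the model name and the prompt/assertion pairs are placeholders you'd replace with cases from your own workflows, and real suites would use richer checks than simple containment.

```python
from anthropic import Anthropic

# Placeholder regression cases: each pairs a prompt with a simple containment
# check. Replace with prompts and assertions drawn from your actual workflows.
REGRESSION_CASES = [
    {"prompt": "Summarize the key obligations in a mutual NDA in plain English.",
     "must_contain": ["confidential"]},
    {"prompt": "Explain what a Business Associate Agreement is.",
     "must_contain": ["HIPAA"]},
]

MODEL = "<model-id-under-evaluation>"  # placeholder -- set to the model you are validating

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

failures = []
for case in REGRESSION_CASES:
    response = client.messages.create(
        model=MODEL,
        max_tokens=512,
        messages=[{"role": "user", "content": case["prompt"]}],
    )
    text = response.content[0].text
    missing = [term for term in case["must_contain"] if term.lower() not in text.lower()]
    if missing:
        failures.append((case["prompt"], missing))

for prompt, missing in failures:
    print(f"DRIFT? {prompt!r} no longer mentions: {missing}")
print(f"{len(REGRESSION_CASES) - len(failures)}/{len(REGRESSION_CASES)} cases passed")
```

Run the same suite against the outgoing and incoming model and diff the results; the goal isn't a perfect eval, it's a documented, repeatable check you can show an auditor.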
Incident notification. What is Anthropic's contractual commitment to notifying you of a security incident affecting your data? Under Enterprise terms, this is defined. Under consumer terms, it effectively isn't.
The practical starting point
If you're reading this because you've been handed a "should we be using Claude?" question and need to give a defensible answer, here's the short version.
For anything that matters (client data, regulated information, internal strategy, anything you'd care about in a breach), the minimum acceptable configuration is a Team plan or above, with your organization's SSO enforced, a data processing agreement in place, and an acceptable use policy that tells employees what they can and can't put in. That's the floor.
For regulated industries (healthcare, financial services, anything with explicit data handling requirements), you need Enterprise terms, a BAA if PHI is in scope, and a serious look at whether ZDR is appropriate for your highest-sensitivity workflows.
For everyone else, the immediate action item is inventory. Find out who in your organization is using Claude, on what plan, and for what purposes. The answer will almost certainly be more than you expect, on plans that provide fewer protections than you'd want.
The tool is genuinely useful. The compliance picture is manageable. But "manageable" requires knowing what you're actually dealing with — and right now, most organizations don't.