Anthropic Quietly Became an Automation Platform. Here's What That Means for Security Teams.

On April 14, Anthropic shipped a feature called Routines as part of Claude Code. It's in research preview, Anthropic acknowledges the rough edges, and the developer coverage has been enthusiastic. What it means for security teams has received considerably less attention. That's worth correcting, because Routines is a more significant shift in how Claude operates than most of the coverage suggests.

What changed

Until now, Claude has been a tool you interact with. You open a session, you give it a task, it responds. Even the more capable agentic features, like Claude Code running in a loop or Cowork taking actions on your desktop, require an active session and, in most configurations, some degree of human presence. You're in the room, at least notionally.

Routines removes that requirement. A Routine is a saved Claude Code configuration - a prompt, one or more repositories, and a set of connectors - that executes automatically on Anthropic's cloud infrastructure, not on your local machine. Your laptop can be closed, you can be asleep, and the routine runs on whatever schedule or trigger you've defined.

Three trigger types are supported: scheduled cadences, API calls via a per-routine HTTP endpoint, and GitHub events like pull requests or releases. A single routine can combine all three.
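
The API trigger is the one most likely to get wired into other systems, so it's worth seeing the shape of it. The sketch below is illustrative only: the endpoint path, auth header, and payload are my assumptions, not documented API surface, so treat it as the shape of the integration rather than the contract.

```python
import requests

# Hypothetical call to a routine's per-routine HTTP trigger endpoint.
# URL, auth header, and payload are illustrative assumptions, not the
# documented contract -- substitute whatever the routine's settings show.
ROUTINE_TRIGGER_URL = "https://api.anthropic.com/v1/routines/rt_example/trigger"

resp = requests.post(
    ROUTINE_TRIGGER_URL,
    headers={"Authorization": "Bearer ROUTINE_TOKEN"},
    json={"source": "ci", "note": "post-deploy check"},  # assumed payload
    timeout=30,
)
resp.raise_for_status()
```

Note what this implies: anything that can make an HTTP request - a CI pipeline, a monitoring alert, another routine - can now start an unattended Claude run, which is why those trigger tokens deserve the same handling as any other credential.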

The connectors are MCP-based integrations with external services such as Slack, Linear, GitHub, and Google Drive. They're enabled by default when you create a routine, and they define what the routine can read and act on beyond the repositories themselves.
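
To make the anatomy concrete, here is roughly what a routine definition amounts to, sketched as a plain Python dict. Every field name here is an assumption made for illustration, not Anthropic's actual schema; the point is the shape: one prompt, a set of repositories, a connector list that starts broad, and the triggers.

```python
# Illustrative sketch only: field names and structure are assumptions,
# not Anthropic's actual configuration schema.
routine = {
    "name": "nightly-dependency-audit",
    "prompt": "Review yesterday's dependency bumps and summarize the risk.",
    "repositories": ["github.com/acme/payments-api"],
    # Connectors are enabled by default at creation; pruning this list
    # to the minimum the task needs is the main scoping lever you get,
    # since there are no permission prompts once the routine is running.
    "connectors": ["github"],  # e.g. "slack" and "google-drive" dropped
    "triggers": [
        {"type": "schedule", "cron": "0 3 * * *"},       # nightly cadence
        {"type": "http"},                                # per-routine endpoint
        {"type": "github", "events": ["pull_request"]},  # repo events
    ],
}
```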

Anthropic's own documentation states it plainly: there are no permission prompts during a run and no approval steps mid-execution. The governance model is entirely front-loaded. You define scope when you set the routine up. After that, it acts.

Why this is a different category of risk

Most security teams have started thinking about Claude as a productivity tool with data handling implications - which plan employees are on, whether training is opted out, what data classification policy applies. That framing is correct but incomplete once Routines enters the picture.

An interactive Claude session is a bounded event. An employee opens it, types something, gets a response, closes it. The data handling questions are real but the exposure is transactional.

A Routine connected to your Google Drive, your Slack workspace, and your code repositories, running every night, is not a bounded event. It is a standing pipeline with persistent access to multiple corporate data sources, executing without a human watching it, on a third party's infrastructure. The risk profile is closer to a SaaS integration or an ETL job than it is to an employee using a chat interface. Your vendor risk program almost certainly hasn't been updated to reflect that distinction.

The data handling implications follow your plan tier. Team and Enterprise customers are under commercial terms: no training on your data, retention governed by your contractual agreement. But Routines is available on Pro and Max plans too, and developers being developers, personal-account experimentation against team repositories is going to happen. A Routine running on a Pro account under consumer terms, processing your internal codebase and Slack messages nightly, puts that data in a place you haven't accounted for. The shadow AI problem now has a cron job attached to it.

The governance gap

The way most organizations are currently set up, this falls between categories. It's not quite shadow IT, because Claude may be an approved tool. It's not quite a new vendor integration, because it's running under an existing subscription. It doesn't trigger the procurement or vendor risk workflows that a new SaaS connection would, even though the data exposure profile is comparable.

That gap is where the risk lives. The employees setting up Routines aren't being reckless; they're using a feature that Anthropic is actively promoting, and it does exactly what it says on the tin. The problem is that the organizational controls haven't caught up with what the tool can now do.

The practical response is straightforward, if not quick. Your AI acceptable use policy needs to address automated agentic deployments explicitly, separately from interactive use. Any Routine that touches sensitive or regulated data needs a vendor risk assessment that accounts for continuous, unattended processing. And you need visibility into whether Routines are being set up on personal-plan accounts against team resources, which takes you back to the same identity and proxy log queries that surface other shadow AI usage.
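
On the visibility point, the query is unglamorous but doable. Here is a minimal sketch, assuming you can export proxy logs to CSV with a user identity and destination host per request; the column names and the corporate domain are assumptions, so adapt them to your proxy's actual schema. It surfaces who is reaching Anthropic endpoints from identities outside your managed domain; tying that back to plan tier still takes manual follow-up, since the proxy can't see it.

```python
import csv
from collections import Counter

# Minimal sketch: flag Anthropic-bound traffic tied to identities outside
# the managed corporate domain. The CSV layout ("user", "dest_host") and
# the domain below are assumptions -- adjust to your proxy's export format.
ANTHROPIC_HOSTS = {"api.anthropic.com", "claude.ai"}
CORP_DOMAIN = "@acme.com"  # hypothetical managed identity domain

hits = Counter()
with open("proxy_logs.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["dest_host"] in ANTHROPIC_HOSTS and not row["user"].endswith(CORP_DOMAIN):
            hits[row["user"]] += 1

for user, count in hits.most_common():
    print(f"{user}: {count} requests to Anthropic endpoints")
```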

The bigger picture

Routines is one feature in a pattern that has been building for a while. Claude Code launched as a coding assistant, then gained skills and MCP integration. Cowork added desktop automation. Now there is cloud-hosted automation that runs under your identity on a schedule. Each addition is individually defensible as a productivity feature. Stacked together, they look considerably more like enterprise automation infrastructure than an AI assistant.

That's probably the direction this is going. The security and governance frameworks for it are still catching up. The organizations that do that work now, before the tooling is deeply embedded, will be in a better position than the ones doing it reactively in twelve months.