STACK with Anthropic
Why this matters
STACK plugs into anything you build with Anthropic. One MCP server registration, and Claude picks up around 80 new capabilities (skills marketplace, drop-offs, passports, audit chain, revocation), exposed as tools your agent can call. Whether you're calling the Messages API directly, building a loop on the Claude Agent SDK, hitting Claude through AWS Bedrock or GCP Vertex, or wiring an MCP client around Anthropic's tooling, the integration is the same.
The integration is unusually clean here because MCP is the protocol Anthropic invented, and STACK is MCP-native. There's no shim between Claude and STACK's tool surface. The Claude Agent SDK uses MCP for tools by default; the Messages API supports MCP servers via the mcp_servers request parameter; Claude.ai and Claude Code discover MCP tools through the standard handshake. STACK joins all of that on day one.
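As a minimal sketch of what that looks like on the Messages API side, assuming a hypothetical STACK MCP endpoint at `https://mcp.stack.example/sse` and placeholder token (the real URL and auth values come from your STACK registration); the `mcp_servers` list shape follows Anthropic's MCP connector, which is a beta feature at the time of writing:

```python
import json

# Hypothetical STACK endpoint and token -- substitute the values
# from your own STACK registration.
STACK_MCP_URL = "https://mcp.stack.example/sse"
STACK_API_KEY = "sk-stack-..."

# The Messages API accepts remote MCP servers via the `mcp_servers`
# request parameter; Claude discovers their tools over the standard
# MCP handshake, no adapter code in between.
request_body = {
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "messages": [
        {"role": "user", "content": "List the skills available to this agent."}
    ],
    "mcp_servers": [
        {
            "type": "url",                      # remote server reached over HTTP
            "url": STACK_MCP_URL,
            "name": "stack",                    # label Claude uses for the tools
            "authorization_token": STACK_API_KEY,
        }
    ],
}

# With the official anthropic SDK, the same payload passes straight through
# (the MCP connector requires its beta flag):
#   client.beta.messages.create(**request_body, betas=["mcp-client-2025-04-04"])
print(json.dumps(request_body["mcp_servers"], indent=2))
```

The point of the sketch: the STACK entry is just another element in `mcp_servers`, so it rides alongside any other MCP servers you already pass.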
Anthropic gives you a great model and a clean API around it. STACK gives you the layers around the model: where credentials live, who's authenticated to make a call, what runs sealed, what gets logged, who can pull the kill switch when something goes wrong at 3am. None of that ships in the API and none of it ships in the Agent SDK. It all ships through one MCP-server registration. And because STACK sits one layer below the model, the same passport, audit chain, and kill switch follow you if you decide to evaluate the same agent on GPT-4 or an open model tomorrow.
What it unlocks
Three things STACK adds to a Claude Agent SDK deployment that the SDK alone leaves to you:
- MCP-native, end to end. Anthropic invented MCP. STACK is an MCP server. So when Claude reaches for a STACK tool, the protocol shape is exactly what the Agent SDK was built for. No adapters, no JSON-schema translation, no glue code. The tool list is discovered cleanly, the call shape lines up, and you skip the integration surface every other framework needs.
- Cross-vendor portability of identity, audit, and revocation. Build today on Claude. Want to evaluate the same agent on the OpenAI Agents SDK or an open-model runtime tomorrow? The orchestration changes; the passport, audit chain, and kill switch don't. STACK sits one layer below the LLM, so the production-readiness layers don't need to be re-implemented per vendor.
- The whole STACK package, in one MCP registration. Anthropic's API and Agent SDK are both intentionally lean. Plug STACK in and you get credentials out of the agent process, identity verification on demand, the shared execution surface for your team's connected services, runtime detectors at the proxy, hash-chained audit, sub-60-second revocation, drop-offs for cross-process hand-offs, and the skills marketplace. None of those ship natively; all of them ship through one MCP-server registration.
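To make "one MCP registration" concrete for the Claude Code / Agent SDK side of that handshake: Claude Code picks up project-scoped MCP servers from a `.mcp.json` file. A sketch, assuming a hypothetical STACK endpoint at `https://mcp.stack.example/mcp` and placeholder bearer token (the entry name `stack` and both values are illustrative, not real STACK configuration):

```python
import json
from pathlib import Path

# Hypothetical STACK endpoint; the real URL and token come from
# your STACK account.
config = {
    "mcpServers": {
        "stack": {
            "type": "http",
            "url": "https://mcp.stack.example/mcp",
            "headers": {"Authorization": "Bearer sk-stack-..."},
        }
    }
}

# Claude Code reads this file at startup and runs the standard MCP
# handshake against every server listed in it -- that one entry is
# the entire integration.
Path(".mcp.json").write_text(json.dumps(config, indent=2))
```

Agents built directly on the Agent SDK accept the same server definition through their options rather than a file, but the shape of the entry is the same.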
Wiring it up
Install the Anthropic SDK, grab a STACK API key, and add STACK as an MCP server in your agent config. Step-by-step setup, working Python code, and the tools the agent gets:
→ /docs/integrations/anthropic
Last reviewed 2026-05-08.