LangChain integration
Five-minute path to wiring STACK into a LangChain agent. Result: your agent never holds raw credentials, every tool call goes through STACK with passport-bound scope, and every action lands in the audit log.
1. Install
```shell
pip install langchain langchain-openai getstack
```

STACK's Python SDK ships on PyPI as getstack. It exposes a thin client over the REST API plus a mission context manager that handles passport issuance, checkpoints, and checkout for you.
2. Connect a service
Before wiring code, connect at least one service to your operator at getstack.run/services. OAuth providers (Slack, GitHub, Google, etc.) only ask the user once; STACK stores the token KMS-encrypted on your behalf and the agent never sees it.
3. Register the agent
```python
import os

from getstack import Stack

stack = Stack(api_key=os.environ["STACK_API_KEY"])

agent = stack.agents.register(
    name="my-langchain-agent",
    description="Customer-support triage bot",
    accountability_mode="enforced",  # auto-revoke on detector fire
)
```

accountability_mode is a property of the agent (not the passport). Three values: standard (audit only), logged (record warnings, don't block), enforced (auto-revoke on critical detector fires).
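Because the mode is set at registration time, one convenient pattern is to derive it from the deployment environment. A minimal sketch — the environment names, the DEPLOY_ENV variable, and the mapping itself are illustrative assumptions, not part of the SDK:

```python
import os

# Illustrative mapping from deployment environment to accountability mode.
# The environment names and the DEPLOY_ENV variable are assumptions.
ACCOUNTABILITY_BY_ENV = {
    "dev": "standard",    # audit only
    "staging": "logged",  # record warnings, don't block
    "prod": "enforced",   # auto-revoke on critical detector fires
}

def accountability_mode_for(env: str) -> str:
    """Return the accountability mode for a deployment environment."""
    try:
        return ACCOUNTABILITY_BY_ENV[env]
    except KeyError:
        raise ValueError(f"unknown environment: {env!r}")

mode = accountability_mode_for(os.environ.get("DEPLOY_ENV", "dev"))
```

You would then pass mode as the accountability_mode argument to stack.agents.register(...).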
4. Open a mission for the run
```python
with stack.passports.mission(
    agent_id=agent.id,
    intent="Triage and acknowledge new support tickets",
    services=["slack"],
    checkpoint_interval="5m",
) as mission:
    # mission.token is the passport JWT for proxied calls
    # mission.log(...) records each tool action
    # checkpoints fire automatically every 5m
    # checkout fires automatically when the block exits
    ...
```

The context manager owns the passport lifecycle so your code never manages JTIs by hand. If the block raises, checkout still fires with the failure reason as the summary.
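To make the raising case concrete, here is a toy stand-in — not the real SDK — that mimics only the lifecycle shape the mission context manager implements: check-in on entry, checkout on exit, with the exception message captured as the checkout summary:

```python
from contextlib import contextmanager

@contextmanager
def toy_mission(intent: str, log: list):
    """Toy stand-in for stack.passports.mission; lifecycle shape only.
    Checkout always fires, even when the block raises."""
    log.append(f"check-in: {intent}")
    try:
        yield log
        log.append("checkout: completed")
    except Exception as exc:
        # The real manager reports the failure reason as the summary.
        log.append(f"checkout: failed ({exc})")
        raise

log = []
try:
    with toy_mission("triage tickets", log):
        raise RuntimeError("slack API unreachable")
except RuntimeError:
    pass

# log now holds a check-in followed by a failure checkout; never a silent exit
```

The same guarantee is why you can let exceptions propagate out of the real mission block instead of wrapping every tool call in try/except.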
5. Route LangChain tool calls through the STACK proxy
LangChain tools that hit external APIs should call the STACK proxy instead of the upstream directly. The proxy injects the credential server-side; your agent process never sees the token.
```python
from langchain_core.tools import tool

@tool
def post_to_slack(channel: str, text: str) -> dict:
    """Post a message to a Slack channel via STACK's credential proxy."""
    response = mission.proxy(
        service="slack",
        url="https://slack.com/api/chat.postMessage",
        method="POST",
        body={"channel": channel, "text": text},
    )
    return response.body
```

mission.proxy() auto-attaches the passport JWT, logs the tool call into the mission's checkpoint buffer, and returns a ProxyResponse with .status, .headers, and .body. Pass full URLs — the proxy validates them and rejects relative paths.
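One Slack-specific wrinkle worth handling in the tool: Slack's Web API reports failures in the response body ({"ok": false, "error": ...}), usually with HTTP 200, so checking .status alone misses them. A small helper operating on a plain dict shaped like response.body:

```python
def check_slack_body(body: dict) -> dict:
    """Raise if a Slack Web API response body signals an error.

    Slack returns {"ok": true, ...} on success and
    {"ok": false, "error": "..."} on failure, typically with
    HTTP 200, so status-code checks alone are not enough.
    """
    if not body.get("ok"):
        raise RuntimeError(f"Slack API error: {body.get('error', 'unknown')}")
    return body

check_slack_body({"ok": True, "channel": "C123", "ts": "1700000000.000100"})
```

Inside post_to_slack you would then write return check_slack_body(response.body), so the LLM sees a clear error instead of a silently failed post.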
6. Wire the tool into a LangChain agent
```python
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a customer-support triage bot. Use the tools provided."),
    ("user", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

langchain_agent = create_openai_functions_agent(llm, [post_to_slack], prompt)
executor = AgentExecutor(agent=langchain_agent, tools=[post_to_slack], verbose=True)

executor.invoke({"input": "Acknowledge ticket #4821 in #support."})
```

Place this executor.invoke(...) call inside the with stack.passports.mission(...) block from step 4 — that's how the tool sees mission in scope.
7. Kill switch
If the agent goes off-script, revoke its passport. Propagation is sub-60-second across all STACK surfaces.
```python
stack.passports.revoke(mission.passport.jti, reason="off-script")
```

Full SDK reference: /docs/sdk/python. Underlying proxy contract: /docs/api/proxy. Tracking the run: /docs/concepts/audit.
Why bother
- No raw credentials in the LangChain process — prompt injection has nothing to leak
- Every tool call audit-logged with passport id, scope, and outcome
- Sub-60-second kill switch via stack.passports.revoke(jti) when the bot misbehaves
- Detector grid catches scope drift, credential bursts, post-checkout access in real time