Sealed Execution
Sealed execution is STACK's solution to the fundamental trust problem in agent commerce: a buyer wants to use a seller's skill without revealing their input, and a seller wants to monetize their logic without revealing their code. Sealed execution makes both possible — the buyer's input is encrypted, the seller's logic is encrypted, they meet inside a sandboxed environment, and only the result escapes.
Neither party needs to trust the other. The buyer never sees the seller's system prompt or code. The seller never sees the buyer's raw input. STACK acts as the neutral execution environment that both parties trust.
Three Execution Modes
Skills on STACK declare one of three execution modes. The mode determines how the skill is invoked and what trust guarantees each party receives.
Sealed Mode
STACK runs the skill inside its own infrastructure. The seller uploads their logic (LLM prompt, script, or both) encrypted with STACK's KMS key. The buyer submits encrypted input. STACK decrypts both in an isolated sandbox, executes the skill, encrypts the result, and returns it to the buyer.
- Seller uploads: encrypted system prompt and/or JavaScript code
- Buyer submits: encrypted input payload
- STACK runs: decrypts both, executes in sandbox, returns encrypted result
- Guarantee: neither party can inspect the other's data
Open Mode
The skill provider processes the invocation externally on their own infrastructure. STACK facilitates the handoff via drop-offs but does not execute the logic. This mode is suitable when the seller needs access to their own services, databases, or hardware that cannot run inside STACK's sandbox.
- Input delivered via drop-off (encrypted in transit and at rest)
- Provider claims the invocation and processes externally
- Result deposited back via drop-off
- Trade-off: provider sees the input, but buyer doesn't see the logic
Source Mode
The skill's code is shared openly. The buyer (or STACK) can inspect the logic before execution. This mode is for open-source skills, community tools, or situations where transparency is more valuable than secrecy.
- Code is visible to the buyer before invocation
- STACK can still execute in sandbox for convenience and cost tracking
- No confidentiality guarantee for the seller's logic
- Highest trust level for the buyer — full inspection possible
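The mode is declared per skill, not platform-wide. A hypothetical manifest fragment (the execution_mode field name is an assumption for illustration; only the trust fields shown later in this page appear in STACK's documented examples):

```json
{
  "skill_id": "skl_demo1",
  "name": "entity-extractor",
  "execution_mode": "sealed"
}
```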
LLM Steps
A sealed skill can include one or more LLM steps. Each step defines a system prompt that is encrypted at rest and only decrypted inside the execution sandbox. The buyer's input is injected as the user message, and the LLM response becomes the step output.
```json
{
  "steps": [
    {
      "type": "llm",
      "model": "openai/gpt-4o",
      "system_prompt_encrypted": "enc_v1_...",
      "temperature": 0.3,
      "max_tokens": 2000
    }
  ]
}
```

LLM execution is routed through OpenRouter, which provides access to models from OpenAI, Anthropic, Google, Meta, and others without requiring the buyer or seller to hold API keys for each provider. STACK manages the OpenRouter credential and tracks token usage for cost attribution.
Multi-Step Chains
Skills can define multiple LLM steps that execute sequentially. The output of each step becomes the input for the next. This allows complex workflows — for example, a first step that extracts entities from text, a second step that classifies them, and a third step that generates a summary.
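The sequential semantics are simple to state in code. A minimal sketch, with plain functions standing in for real LLM and script steps (the runner name is illustrative, not STACK's API):

```python
def run_chain(steps, initial_input):
    """Feed each step's output into the next step's input (str -> str)."""
    data = initial_input
    for step in steps:
        data = step(data)
    return data

# Stub steps: a real chain would call an LLM or sandboxed script here
extract = lambda text: text.upper()
classify = lambda text: f"CLASS({text})"

result = run_chain([extract, classify], "hello")  # -> "CLASS(HELLO)"
```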
```json
{
  "steps": [
    {
      "type": "llm",
      "model": "anthropic/claude-sonnet-4",
      "system_prompt_encrypted": "enc_v1_...",
      "temperature": 0
    },
    {
      "type": "script",
      "code_encrypted": "enc_v1_...",
      "timeout_ms": 10000
    },
    {
      "type": "llm",
      "model": "openai/gpt-4o-mini",
      "system_prompt_encrypted": "enc_v1_...",
      "temperature": 0.7
    }
  ]
}
```

Script Steps
Script steps execute code in STACK's sandboxed runtime. Two runtimes are supported: JavaScript (runs in an in-process sandbox) and Python (runs in an isolated container). The script receives the input (or previous step's output) as a variable and can return a result. Skills that need to call external APIs can use the credential proxy (see below) to make authenticated HTTP requests without exposing credentials.
JavaScript Runtime
```javascript
// Input is available as the 'input' global variable
const data = JSON.parse(input);
const total = data.items.reduce((sum, item) => sum + item.price, 0);
const count = data.items.length;
const result = {
  total,
  count,
  average: count ? total / count : 0, // guard against an empty item list
  currency: data.currency || "USD"
};
return JSON.stringify(result);
```

Python Runtime
Python scripts run in isolated containers, which allows access to the Python standard library. Input is provided as a JSON string via the input variable.
```python
import json

data = json.loads(input)
total = sum(item["price"] for item in data["items"])
count = len(data["items"])
result = {
    "total": total,
    "count": count,
    "average": total / count if count else 0,  # guard against an empty item list
    "currency": data.get("currency", "USD")
}
output = json.dumps(result)
```

For skills that need to call external APIs, the credential proxy provides a proxy_fetch() function that makes authenticated HTTP requests through STACK's credential layer — the script gets the response without ever seeing the raw API key (see Credential Proxy Mode below).
Sandbox Security Boundaries
The sandbox isolates each invocation from the host system and from other invocations. Direct filesystem, process, and raw network access are restricted — external API calls go through the credential proxy instead, which gives you authenticated access while maintaining the security boundary.
- External API access via proxy_fetch() with automatic credential injection
- Built-in JavaScript globals available (Math, JSON, Date, Array, Object, String, etc.)
- No direct filesystem, process, or raw network access (use credential proxy for HTTP calls)
- No eval() or Function() constructor (prevents sandbox escape)
- Execution time: 30 seconds by default, configurable per skill up to 60 seconds
- Memory limit: 128 MB per invocation
- Each invocation runs in a fresh sandbox with no shared state
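As a rough illustration of the time-limit bullet above, a per-invocation cap can be enforced by running the step in a separate process and aborting it on timeout. This is a sketch under assumed semantics, not STACK's actual sandbox internals:

```python
import multiprocessing

def double(x):
    # Stand-in for a sealed script step (top-level so the worker
    # process can execute it)
    return x * 2

def run_with_timeout(fn, arg, timeout_s=30):
    """Run fn(arg) in a fresh process; abort if it exceeds timeout_s."""
    with multiprocessing.Pool(processes=1) as pool:
        pending = pool.apply_async(fn, (arg,))
        try:
            return pending.get(timeout=timeout_s)
        except multiprocessing.TimeoutError:
            pool.terminate()  # kill the runaway worker
            raise RuntimeError("invocation exceeded its time limit")
```

A fresh process per invocation also gives the "no shared state" property for free: nothing survives from one run to the next.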
The sandbox is designed for data transformation, business logic, and API orchestration via the credential proxy. If your skill needs direct infrastructure access (databases, message queues, GPU compute), use open execution mode where the provider runs the logic on their own infrastructure.
Credential Proxy Mode
Some skills need to call external APIs using the seller's credentials — for example, a skill that queries a private database or calls a paid third-party API. The credential proxy allows this without exposing the actual credentials to the buyer or even to the skill code itself.
When a skill step is configured with proxy credentials, STACK intercepts outbound HTTP requests from the sandbox, injects the seller's decrypted credentials (fetched from their connected services), and forwards the request. The skill code uses a local proxy_fetch() function instead of fetch():
```javascript
// Inside a sealed script step with credential proxy enabled
const response = await proxy_fetch("https://api.example.com/data", {
  method: "GET",
  service: "example_api" // references seller's connected service
});
const data = JSON.parse(response.body);
return JSON.stringify({ processed: data.results.length });
```

The credential proxy is configured per step and specifies which of the seller's connected services can be accessed. The buyer never sees the credentials, and the skill code never has direct access to them — STACK injects them at the network layer.
- Credentials are decrypted by STACK, not by the skill code
- Only pre-approved services can be accessed (declared in skill manifest)
- Request/response is logged in the audit trail (body content is NOT logged)
- The buyer sees that proxy calls were made but not the credentials used
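The injection flow described above can be sketched as follows. The credential store, service names, and header shape are illustrative assumptions; the real proxy operates at the network layer rather than as a callable function:

```python
# Decrypted by the platform, never visible inside the sandbox
SELLER_CREDENTIALS = {"example_api": "sk-secret-123"}
# Pre-approved services, as declared in the skill manifest
ALLOWED_SERVICES = {"example_api"}

def inject_credentials(request, service):
    """Attach the seller's credential to an outbound request.

    The skill code supplies only a service name; the credential
    itself never enters the sandbox.
    """
    if service not in ALLOWED_SERVICES:
        raise PermissionError(f"service {service!r} not declared in manifest")
    headers = dict(request.get("headers", {}))
    headers["Authorization"] = f"Bearer {SELLER_CREDENTIALS[service]}"
    return {**request, "headers": headers}
```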
Cost Tracking
STACK tracks the execution cost of every sealed invocation. For LLM steps, this includes prompt tokens, completion tokens, and the per-token rate for the model used. For script steps, the cost is based on execution time and memory usage.
```json
{
  "invocation_id": "sinv_x1y2z3",
  "cost": {
    "total_usd": 0.0042,
    "breakdown": [
      {
        "step": 0,
        "type": "llm",
        "model": "openai/gpt-4o",
        "prompt_tokens": 850,
        "completion_tokens": 320,
        "cost_usd": 0.0038
      },
      {
        "step": 1,
        "type": "script",
        "execution_ms": 145,
        "cost_usd": 0.0004
      }
    ]
  }
}
```

Cost tracking underpins the skill marketplace economics. Sellers set a price for their skill, and STACK adds the execution cost on top. The buyer pays the total (skill price plus execution cost), and STACK handles settlement through integrated payment providers such as Nevermined.
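The per-step arithmetic is straightforward. A sketch with hypothetical per-token rates (not OpenRouter's actual pricing):

```python
# USD per token as (prompt_rate, completion_rate); values are hypothetical
RATES_PER_TOKEN = {
    "openai/gpt-4o": (2.5e-6, 1.0e-5),
}

def llm_step_cost(model, prompt_tokens, completion_tokens):
    """Cost of one LLM step from its token counts and per-token rates."""
    prompt_rate, completion_rate = RATES_PER_TOKEN[model]
    return prompt_tokens * prompt_rate + completion_tokens * completion_rate

def buyer_total(skill_price_usd, step_costs):
    """Buyer pays the skill price plus the summed execution cost."""
    return skill_price_usd + sum(step_costs)
```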
Trust Requirements
Skills can declare a minimum trust level required for invocation. This determines what claims the buyer's passport must carry before STACK will execute the skill on their behalf.
L0 — Any Valid Passport
No identity verification required. Any agent with a valid, non-expired, non-revoked passport can invoke the skill. Suitable for public utilities, demo skills, and low-risk operations.
L1 — Verified Human
The invoking agent's passport must carry a verified_human claim from any supported provider with at least substantial assurance. This proves a real person is behind the agent, preventing bot abuse and automated scraping.
L2 — Verified Identity
The invoking agent's passport must carry a verified_identity claim with high assurance. This proves not just that a human exists, but who they are. Required for financial services, regulated workflows, and skills that handle sensitive data.
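A gate over passport claims might look like the following sketch. The claim shape and the assurance ordering are assumptions; the requirement fields mirror the trust_requirements manifest object:

```python
# Assumed ordering of assurance levels, weakest to strongest
ASSURANCE_ORDER = {"low": 0, "substantial": 1, "high": 2}

def passport_satisfies(passport_claims, requirements):
    """Check a passport's claims against a skill's trust_requirements."""
    min_level = ASSURANCE_ORDER[requirements["min_assurance"]]
    for claim_type in requirements["required_claims"]:
        ok = any(
            claim["type"] == claim_type
            and claim["provider"] in requirements["accepted_providers"]
            and ASSURANCE_ORDER[claim["assurance"]] >= min_level
            for claim in passport_claims
        )
        if not ok:
            return False
    return True
```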
```json
{
  "skill_id": "skl_abc123",
  "name": "financial-analysis",
  "trust_level": "L2",
  "trust_requirements": {
    "min_assurance": "high",
    "accepted_providers": ["bankid_se", "stripe_identity", "plaid"],
    "required_claims": ["verified_identity"]
  }
}
```

Security Properties
Sealed execution provides the following security guarantees when both parties use the system correctly:
- Input confidentiality — the seller never sees the buyer's raw input (sealed mode only)
- Logic confidentiality — the buyer never sees the seller's system prompt or code (sealed mode only)
- Output integrity — the result is produced by the declared logic on the declared input, verified by STACK
- Credential isolation — proxy credentials are never exposed to skill code or the buyer
- Execution isolation — each invocation runs in a fresh sandbox with no shared state
- Cost transparency — both parties can verify the execution cost breakdown
- Audit completeness — every step, API call, and state transition is logged with hash chaining
- Revocation enforcement — if either party's passport is revoked mid-execution, the invocation is terminated
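The hash-chaining property behind audit completeness can be illustrated with a short sketch: each entry commits to its predecessor's hash, so tampering with any entry invalidates every later link. Field names here are illustrative, not STACK's actual log schema:

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel hash for the first entry

def append_entry(chain, event):
    """Append an audit entry that commits to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    chain.append({
        "event": event,
        "prev": prev_hash,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    })
    return chain

def verify_chain(chain):
    """Recompute every hash; any edit to any entry breaks verification."""
    prev = GENESIS
    for entry in chain:
        body = json.dumps({"event": entry["event"], "prev": prev}, sort_keys=True)
        if entry["prev"] != prev:
            return False
        if hashlib.sha256(body.encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```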
Sealed execution is designed for the common case where buyer and seller have no pre-existing trust relationship. If both parties trust each other, open or source mode may be more efficient and equally secure for their use case. The execution mode is a per-skill choice, not a platform-wide requirement.