Guide: multi-agent handoff via drop-offs

Transfer a structured payload between two of your agents. The producer deposits; the consumer collects. Payloads are schema-validated at deposit, encrypted at rest, and destroyed on first collection.

1. Create the drop-off

One call declares the producer, the consumer, the JSON Schema the payload must satisfy, and the TTL. Default TTL is 30 minutes; max 24 hours.

bash
curl -X POST https://api.getstack.run/v1/dropoffs \
  -H "Authorization: Bearer $STACK_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "from_agent_id": "agt_researcher",
    "to_agent_id":   "agt_writer",
    "schema": {
      "type": "object",
      "required": ["topic", "findings"],
      "properties": {
        "topic":    { "type": "string" },
        "findings": { "type": "array", "items": { "type": "string" } },
        "sources":  { "type": "array", "items": { "type": "string", "format": "uri" } }
      }
    },
    "ttl_seconds": 1800,
    "on_expire": "notify"
  }'

json
{
  "id": "dof_abc",
  "status": "created",
  "expires_at": "2026-04-23T15:02:00Z"
}
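The TTL rules above (30-minute default, 24-hour maximum) can be sketched client-side. `resolve_ttl` and `expires_at` are hypothetical helpers, not part of the STACK API, and whether STACK clamps or rejects an over-limit `ttl_seconds` is not specified here — the clamp below is an assumption:

```python
from datetime import datetime, timedelta, timezone

DEFAULT_TTL = 1800          # 30 minutes, per the docs
MAX_TTL = 24 * 60 * 60      # 24 hours, per the docs

def resolve_ttl(ttl_seconds=None):
    """Apply the documented default and cap to a requested TTL.
    Assumption: the API clamps rather than rejects over-max values."""
    if ttl_seconds is None:
        return DEFAULT_TTL
    return min(ttl_seconds, MAX_TTL)

def expires_at(ttl_seconds=None, now=None):
    """Compute the expires_at timestamp the create call would return."""
    now = now or datetime.now(timezone.utc)
    return now + timedelta(seconds=resolve_ttl(ttl_seconds))
```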

2. Deposit (producer)

The producer submits the payload. STACK validates it against the schema, KMS-encrypts it, and sets the status to deposited. A validation failure returns SCHEMA_VALIDATION_FAILED.

bash
curl -X POST https://api.getstack.run/v1/dropoffs/dof_abc/deposit \
  -H "Authorization: Bearer $STACK_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "payload": {
      "topic": "Q2 supplier audit",
      "findings": [
        "vendor A invoices skew 12% above market",
        "vendor B on-time delivery 94%"
      ],
      "sources": ["https://internal/audit/2026-q2"]
    }
  }'
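Deposit-time validation can be reproduced locally before calling the API. Below is a minimal hand-rolled check for this guide's example schema (required keys plus basic types) — a real client would use a full JSON Schema library; `validate_payload` is illustrative, not an SDK function:

```python
def validate_payload(payload):
    """Check the example schema: required topic/findings, typed fields.
    Returns a list of error strings; an empty list means valid."""
    errors = []
    for key in ("topic", "findings"):
        if key not in payload:
            errors.append(f"missing required field: {key}")
    if "topic" in payload and not isinstance(payload["topic"], str):
        errors.append("topic must be a string")
    for key in ("findings", "sources"):
        value = payload.get(key)
        if value is not None:
            if not isinstance(value, list) or not all(
                isinstance(v, str) for v in value
            ):
                errors.append(f"{key} must be an array of strings")
    return errors
```

A non-empty error list corresponds to the SCHEMA_VALIDATION_FAILED response from the deposit endpoint.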

3. Collect (consumer)

The consumer reads the payload once. The act of collecting decrypts, returns, and deletes the payload in the same call. A second collect on the same drop-off returns DROPOFF_ALREADY_COLLECTED.

bash
curl -X POST https://api.getstack.run/v1/dropoffs/dof_abc/collect \
  -H "Authorization: Bearer $STACK_API_KEY"

json
{
  "id": "dof_abc",
  "status": "collected",
  "payload": {
    "topic": "Q2 supplier audit",
    "findings": ["vendor A invoices skew 12% above market", "vendor B on-time delivery 94%"],
    "sources": ["https://internal/audit/2026-q2"]
  },
  "collected_at": "2026-04-23T14:37:18Z"
}

After collect, the payload_encrypted column is nulled. The drop-off row persists as an audit artifact (status, schema, hashes, timestamps) but the data is gone.
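If you want to verify a collected payload against the audit record, hash it at deposit time. The hash algorithm and canonicalization STACK uses are not documented in this guide; SHA-256 over sorted-key compact JSON is one reasonable convention, assumed here:

```python
import hashlib
import json

def payload_hash(payload):
    """SHA-256 over canonical (sorted-key, compact) JSON.
    Assumption: the exact canonical form STACK hashes is not documented."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Key ordering does not affect the digest, so producer and consumer can compute it independently and compare.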

Status transitions

text
created → deposited → collected   (happy path)
created → expired                 (TTL elapsed before deposit)
deposited → expired               (TTL elapsed before collect)
created / deposited → failed      (on_expire: "fail" triggered an error)
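The transition table above can be encoded as a small guard. `TRANSITIONS` is illustrative, not an SDK export:

```python
TRANSITIONS = {
    "created":   {"deposited", "expired", "failed"},
    "deposited": {"collected", "expired", "failed"},
    "collected": set(),   # terminal: payload already destroyed
    "expired":   set(),   # terminal
    "failed":    set(),   # terminal
}

def can_transition(current, target):
    """True if the drop-off may move from current to target status."""
    return target in TRANSITIONS.get(current, set())
```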

on_expire actions

  • notify (default) - post a security event when TTL elapses; drop-off goes to expired
  • retry - re-issue the drop-off with a fresh TTL; the producer is notified to redeposit
  • fail - mark the drop-off failed and page the producer operator
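Client-side, the three actions reduce to a dispatch on the expiry notification. The function and return shape below are assumptions for illustration — the guide specifies only the action names and their effects — and the post-retry status of "created" (awaiting redeposit) is inferred from the retry description:

```python
def handle_expiry(on_expire):
    """Map an expired drop-off to the documented on_expire behaviour.
    Returns the resulting status plus the follow-up the docs describe."""
    if on_expire == "notify":
        return {"status": "expired", "followup": "security_event_posted"}
    if on_expire == "retry":
        # A fresh TTL is re-issued server-side; the producer redeposits.
        return {"status": "created", "followup": "producer_redeposit"}
    if on_expire == "fail":
        return {"status": "failed", "followup": "page_producer_operator"}
    raise ValueError(f"unknown on_expire action: {on_expire}")
```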

Multi-hop chains

Repeat the pattern for deeper pipelines - producer → consumer_A (who also becomes the next producer) → consumer_B. Each hop is its own drop-off with its own schema. The chain is not a first-class primitive; it is just two drop-offs in series.
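A chain, then, is just sequential drop-off creation. A sketch of the bookkeeping — the create/deposit/collect calls themselves are elided, and `chain_hops` is a hypothetical helper, not an API object:

```python
def chain_hops(agents):
    """Turn an ordered agent pipeline into per-hop (producer, consumer)
    pairs: one independent drop-off per hop, each with its own schema."""
    return list(zip(agents, agents[1:]))
```

For the three-agent pipeline in the text, this yields two hops, i.e. two drop-offs in series.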

Related

  • /docs/concepts/dropoffs - full schema, lifecycle, encryption model
  • /docs/api/dropoffs - complete endpoint reference
  • /docs/reference/errors - SCHEMA_VALIDATION_FAILED, DROPOFF_EXPIRED, DROPOFF_ALREADY_COLLECTED