// compliance

Article 14 compliance.

By default.

// why this matters now

The EU AI Act's high-risk-system provisions enter full effect on August 2, 2026. Article 14 requires effective human oversight, the ability to monitor operation and detect anomalies, oversight commensurate with risk, and the ability to override or halt the system. STACK ships the runtime primitives those obligations rest on, out of the box, with no additional build required.

Article 14 applies if your agent operates in an Annex III high-risk category - recruitment automation, credit scoring, access to essential public or private services, biometric categorisation, law enforcement, migration, administration of justice, or similar. Plenty of agents in production fall outside Annex III; if yours is one of them, you don't need this page. If yours is not, fines for non-compliance with Article 14 obligations reach €15M or 3% of global turnover (Art. 99(4)), whichever is higher.

STACK provides the technical substrate: detectors, audit trail, kill switch, checkpoint-based review, three accountability modes scaled to risk. Using STACK removes the need to build the oversight mechanism yourself; that's roughly half of a typical Article 14 implementation, shipped. The legal, organisational, and documentation half stays with you (see "what STACK does not cover" below).

// the mapping

The runtime-monitoring clauses of Article 14, matched to STACK primitives that already ship.

Article 14 also has clauses STACK does not cover (14(2) on documenting foreseeable misuse, 14(5) on biometric two-person verification). Those are listed in "what STACK does not cover" below.

  • Art. 14(1)

    Effective human oversight

    High-risk AI systems shall be designed and developed in such a way that they can be effectively overseen by natural persons during the period in which they are in use.

    STACK detectors

    • Credential Burst Detector
    • Scope Drift Detector
    • Checkpoint Silence Detector

    STACK primitives

    • Real-time security-event stream
    • Checkpoint-based behavioural monitoring
    • Per-operator dashboards
  • Art. 14(3)

    Oversight commensurate with risk

    The oversight measures shall be commensurate with the risks, level of autonomy and context of use of the high-risk AI system.

    STACK primitives

    • Three accountability modes per agent: enforced (auto-revoke on critical signals), logged (record only), standard (review queued)
    • Operator picks the rigor of oversight per agent, per service, per skill
    • Defaults adjustable by tier — Free agents default to enforced, Studio operators choose
  • Art. 14(4)(a)

    Duly monitor operation; detect anomalies

    Natural persons shall be enabled to properly understand the relevant capacities and limitations of the high-risk AI system and duly monitor its operation, including in view of detecting and addressing anomalies, dysfunctions and unexpected performance.

    STACK detectors

    • Scope Escalation Pattern Detector
    • Undeclared Access Detector
    • Behavioral Anomaly Detector

    STACK primitives

    • Hash-chained audit log with full action lineage
    • Per-agent statistical baseline (last-20-checkout rolling window)
    • Post-hoc review engine surfacing flag reasons
  • Art. 14(4)(b)

    Remain aware of automation bias

    Natural persons shall be enabled to remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system.

    STACK detectors

    • Action Volume Detector
    • Duration Overrun Detector
    • Behavioral Anomaly Detector

    STACK primitives

    • Human-review queue for flagged checkouts
    • Forced review of anomalies before agent continuation
    • Per-agent statistical baseline (last-20-checkout rolling window)
  • Art. 14(4)(c)

    Correctly interpret the system's output

    Natural persons shall be enabled to correctly interpret the high-risk AI system's output, taking into account the characteristics of the system and the interpretation tools and methods available.

    STACK detectors

    • Intent Deviation Detector

    STACK primitives

    • Declared intent vs. observed action comparator (LLM-graded)
    • Natural-language summaries per checkpoint
  • Art. 14(4)(d)

    Decide not to use the system or override its output

    Natural persons shall be enabled to decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override or reverse the output of the high-risk AI system.

    STACK detectors

    • Post-Checkout Access Detector

    STACK primitives

    • Sub-60-second global passport revocation
    • One-API-call kill switch
    • Proxy mode: revoking in STACK is sufficient, no agent-side chase-down
  • Art. 14(4)(e)

    Intervene or interrupt via a stop button

    Natural persons shall be enabled to intervene in the operation of the high-risk AI system or interrupt the system through a stop button or a similar procedure that allows the system to come to a halt in a safe state.

    STACK detectors

    • Delegation Downgrade Detector
    • Undeclared Delegation Detector

    STACK primitives

    • Revoke-by-agent, revoke-by-session, revoke-all endpoints
    • Automatic block on critical-severity signals (configurable per operator)
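The per-agent statistical baseline referenced in the mapping above can be pictured as a rolling window over recent checkouts. A minimal sketch, assuming a simple z-score rule: the 20-checkout window comes from this page, but the threshold, the flagged metric, and all names here are illustrative assumptions, not STACK's actual detector internals.

```python
from collections import deque
from statistics import mean, stdev

class CheckoutBaseline:
    """Rolling per-agent baseline over the last 20 checkouts (sketch)."""

    WINDOW = 20
    THRESHOLD = 3.0  # assumed z-score cutoff, not a documented STACK value

    def __init__(self):
        self.action_counts = deque(maxlen=self.WINDOW)

    def observe(self, action_count: int) -> bool:
        """Record one checkout; return True if it deviates from baseline."""
        flagged = False
        if len(self.action_counts) >= 2:
            mu = mean(self.action_counts)
            sigma = stdev(self.action_counts)
            if sigma > 0 and (action_count - mu) / sigma > self.THRESHOLD:
                flagged = True
        self.action_counts.append(action_count)
        return flagged
```

An agent that normally performs 10–12 actions per checkout and suddenly performs 100 would be flagged; the window then ages the anomaly out rather than poisoning the baseline forever.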

// related obligations

Two more clauses where STACK answers most of the technical question.

Art. 12 — Record-keeping

“High-risk AI systems shall technically allow for the automatic recording of events (logs) over the lifetime of the system... ensuring a level of traceability of the system's functioning appropriate to the intended purpose.”

STACK's hash-chained audit log is the canonical answer here. Every passport issuance, every credential injection, every checkpoint, every revocation lands in an append-only log where any later modification breaks the chain visibly. The log exports to JSON / NDJSON / CSV and the chain is verifiable externally without trusting STACK.
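The tamper-evidence property is the standard hash-chain construction: each entry's hash covers its own content plus the previous entry's hash, so editing any record invalidates every hash after it. A minimal sketch of external verification, assuming a hypothetical export schema with a `hash` field per entry (STACK's real export schema may differ):

```python
import hashlib
import json

def chain_hash(prev_hash: str, entry: dict) -> str:
    """Hash an entry together with its predecessor's hash."""
    payload = prev_hash + json.dumps(entry, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_chain(entries, genesis="0" * 64) -> bool:
    """Return True iff every stored hash matches recomputation in order."""
    prev = genesis
    for entry in entries:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["hash"] != chain_hash(prev, body):
            return False
        prev = entry["hash"]
    return True
```

Because verification only recomputes hashes, an auditor can run it over an exported log without any access to, or trust in, the system that produced it.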

Art. 15 — Accuracy, robustness, cybersecurity

“High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of... cybersecurity, and that they perform consistently in those respects throughout their lifecycle.”

Two STACK primitives answer the cybersecurity half: the proxy keeps raw credentials out of the agent process entirely (nothing for prompt injection to exfiltrate), and KMS-envelope encryption keeps stored credentials encrypted at rest. Cybersecurity has many other dimensions Art. 15 covers; STACK addresses the agent-credential boundary specifically.
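The proxy pattern can be pictured in a few lines: the agent addresses a service by name, and the real token is attached only at the proxy boundary, so the agent process never holds anything a prompt-injection payload could exfiltrate. All names below are hypothetical illustrations; STACK's proxy internals are not described on this page.

```python
# Decrypted server-side only, inside the proxy; the agent never sees it.
SECRET_STORE = {"github": "ghp_real_token"}

def proxy_forward(agent_request: dict) -> dict:
    """Inject the real credential at the proxy boundary (sketch).

    The agent's request names a service but carries no secret; the
    proxy copies the headers and adds the Authorization header before
    forwarding, leaving the agent-side request untouched.
    """
    service = agent_request["service"]
    headers = dict(agent_request.get("headers", {}))
    headers["Authorization"] = f"Bearer {SECRET_STORE[service]}"
    return {**agent_request, "headers": headers}
```

The design consequence is that revocation is also proxy-side: delete the stored credential and every in-flight agent loses access, with no agent-side cleanup.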

// out of scope

What STACK does not cover.

Honest scope is a stronger procurement story than claiming full coverage. STACK is the runtime substrate. The rest of an Article 14 (and broader high-risk-system) compliance programme stays with you and your counsel:

• Art. 14(2) — documenting foreseeable misuses. Provider obligation, requires written analysis of how the system can be misused. Not a runtime mechanism.
• Art. 14(5) — biometric two-person verification. For Annex III point 1(a) systems only. Procedural; STACK does not enforce it.
• Art. 9 — risk management system. Organisational programme, not a technical product.
• Art. 13 — instructions for use. Pre-deployment documentation: model cards, intended use statements, performance metrics.
• Art. 17 — quality management system. ISO-9001-shape obligations on the provider.
• Art. 27 — Fundamental Rights Impact Assessment. Required for certain Annex III deployers; legal/policy work, not infrastructure.
• Art. 72 — post-market monitoring. Provider obligation to track real-world performance and report serious incidents.
• Conformity assessment + CE marking. Notified-body engagement for high-risk systems.

Deploying agents in the EU? Let's talk about making your deployment Article 14 ready.

Get in touch

This page is a technical interpretation of Article 14 of Regulation (EU) 2024/1689 (the EU AI Act) as it applies to agent runtime infrastructure. Last reviewed 2026-04-25. The AI Act will continue to evolve through delegated acts and AI Office guidance; this page will be updated as that happens. It is not legal advice. For conformity assessment and legal review, engage counsel familiar with the EU AI Act.