

The Executive's Guide to Enforcing the AI Governance Stack via AIR-ES

AI Reliability Enforcement Standards (AIR-ES): The Missing Bridge Between AI Governance and the Engineering Layer

Kossi Messan
Director, AI Reliability Institute
Revised: January 2026 (Version 2.0)

For executives leading AI deployment, the regulatory landscape has shifted from a theoretical debate into a concrete compliance requirement. The EU AI Act is now law. ISO/IEC 42001 is the new baseline for governance. And emerging standards like ISO/IEC 24970 are defining exactly what you must record when your AI fails.

These frameworks establish essential governance requirements — the "what" of responsible AI. What remains to be addressed is the "how": the engineering specification that translates policy objectives into auditable, implementable controls.

This guide demystifies the "Full Stack" of AI regulation and introduces the AI Reliability Enforcement Standards (AIR-ES) as the bridge that connects governance intent to engineering implementation.


Part 1: The AI Governance Stack

To navigate this landscape, you must stop viewing regulations as a flat list of requirements. They are a hierarchical stack. Each layer addresses a specific governance need, and AIR-ES provides the enforcement layer that makes them operational.

The AI Governance Stack Defined

Layer 1: Legislative

The EU AI Act — "The Building Code"

Function: Defines the consequences. It tells you what is prohibited and what the penalties are for non-compliance.

Role: Sets the risk appetite and legal boundaries.

Designed to address: Legal accountability and prohibited practices. AIR-ES extends this by providing the technical specification for how to implement "Human Oversight" (Article 14) at the engineering layer — for instance, how to build a system that allows a human to interrupt a recursive agent loop running at 10,000 cycles per second.
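The oversight requirement above can be sketched as a stop flag that the agent loop checks on every cycle, so an operator can interrupt it mid-run. This is a minimal illustration only; the names (`stop_flag`, `agent_loop`) are ours, not anything mandated by Article 14 or the AIR-ES specification.

```python
import threading
import time

# Illustrative sketch (not the AIR-ES specification): a human-oversight
# stop flag that a fast agent loop checks on every cycle, so an operator
# can interrupt it while it is running.
stop_flag = threading.Event()
result = {}

def agent_loop() -> None:
    """Spin until a human sets the stop flag; record how many cycles ran."""
    cycles = 0
    while not stop_flag.is_set():  # the oversight check, every iteration
        cycles += 1                # stands in for one unit of agent work
    result["cycles"] = cycles

worker = threading.Thread(target=agent_loop)
worker.start()
time.sleep(0.01)       # the agent runs thousands of cycles in this window
stop_flag.set()        # the human "presses the button"
worker.join(timeout=1)
print(f"interrupted after {result['cycles']} cycles")
```

The key design point is that the check happens inside the loop itself: an interrupt mechanism bolted on outside a loop running at 10,000 cycles per second arrives too late.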

Layer 2: Taxonomy

NIST AI RMF — "The Dictionary"

Function: Defines the vocabulary. It provides the language to describe AI risk (Map, Measure, Manage, Govern) and ensures consistent terminology across stakeholders.

Role: Provides the framework for analysis.

Designed to address: Shared understanding and risk communication. AIR-ES extends this by translating the NIST functions into mandatory engineering controls. You can "map" your risks perfectly using NIST and still deploy an unsafe system. AIR-ES ensures that mapped risks trigger implemented controls.

Layer 3: Management

ISO/IEC 42001 — "The Office Manager"

Function: Defines the process. It specifies how to organise your team, document your procedures, and schedule your reviews to ensure safety is considered.

Role: Ensures organisational governance.

Designed to address: Management system certification. AIR-ES extends this by ensuring that a certified management system produces safe products — not just documented processes. You can have a perfectly ISO 42001-certified management system that produces a hallucinating, dangerous AI model, as long as you documented the fact that you reviewed it.

Layer 4: Observability

ISO/IEC 24970 — "The Black Box"

Function: Defines the evidence. It mandates what data must be recorded (Inputs, Outputs, Errors) so that after an incident, you can prove what happened.

Role: Ensures traceability and diagnosis.

Designed to address: Post-incident forensics. AIR-ES extends this by mandating logging of intermediate state transitions — the internal "Chain of Thought" where agentic failures actually occur. ISO 24970 watches the car crash and records the speed; AIR-ES applies the brakes.
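The idea of intermediate state logging can be sketched as recording every internal step an agent takes, not just its final inputs and outputs. The record fields and state names below are our own assumptions for illustration, not drawn from ISO 24970 or AIR-ES.

```python
import json
import time

# Illustrative sketch: a structured log of internal agent state
# transitions, captured at each step rather than only at the boundary.
transition_log: list[dict] = []

def log_transition(step: int, state: str, detail: str) -> None:
    """Append one structured record of an internal agent state change."""
    transition_log.append({
        "ts": time.time(),
        "step": step,
        "state": state,     # e.g. "plan", "tool_call", "respond"
        "detail": detail,
    })

# A toy three-step "chain of thought" for a refund-handling agent.
log_transition(1, "plan", "customer asks about bereavement fare refund")
log_transition(2, "tool_call", "lookup_policy('bereavement')")
log_transition(3, "respond", "draft answer citing retrieved policy text")

print(json.dumps(transition_log, indent=2))
```

Because each transition is captured before its effect, a monitoring layer can inspect step 2 and intervene before step 3 ever reaches the customer.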

Layer 5: Protocol

ISO/IEC TS 8200 — "The Switch"

Function: Defines the mechanics. It standardises the specific states of "Engaged," "Disengaged," and "Transfer" so that a human and a machine can pass control without dropping it.

Role: Ensures control transition mechanics.

Designed to address: Handoff procedures. AIR-ES extends this by defining the trigger logic — when and why the system must switch control, not just how. ISO 8200 provides the button; AIR-ES provides the finger to press it.
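The trigger logic described above can be sketched as a function that maps runtime signals to control states. The thresholds, signal names, and state labels here are our assumptions for illustration; they are not taken from ISO/IEC TS 8200 or the AIR-ES specification.

```python
# Illustrative trigger logic: decide *when* control must transfer to a
# human, given signals the observability layer already records.

CONFIDENCE_FLOOR = 0.80   # below this, the machine must hand off
MAX_STEPS = 100           # iteration budget before forced human review

def next_control_state(confidence: float, steps_taken: int,
                       boundary_hit: bool) -> str:
    """Map runtime signals to ISO 8200-style control states."""
    if boundary_hit:                  # e.g. a prohibited action requested
        return "Disengaged"           # hard stop, no autonomous recovery
    if confidence < CONFIDENCE_FLOOR or steps_taken >= MAX_STEPS:
        return "Transfer"             # hand control to the human operator
    return "Engaged"                  # machine keeps control

print(next_control_state(0.95, 10, False))   # → Engaged
print(next_control_state(0.60, 10, False))   # → Transfer
print(next_control_state(0.95, 10, True))    # → Disengaged
```

The protocol layer supplies the states; this kind of deterministic decision function is the "finger" that selects between them.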


Part 2: The Bridge — AI Reliability Enforcement Standards (AIR-ES)

If you implement all the standards above, you have a legally compliant, well-managed, fully logged system that knows how to switch control... but doesn't know when to do it.

This is the Reliability Gap.

The AI Reliability Enforcement Standards (AIR-ES) is the framework developed by the AI Reliability Institute to bridge this gap. It is the enforcement layer — the "Automatic Braking System" for your AI.

The AIR-ES Definition

Function: Defines the intervention. It is the layer that actively does something to stop a failure. It uses Observability (ISO 24970) to trigger the Protocol (ISO 8200) based on the Risk Map (NIST), executing safety controls to satisfy the Law (EU AI Act).

Role: Ensures active engineering defence.

Scope: AIR-ES does not define social values — it relies on the Policy Layer (EU/NIST) to define what is acceptable, and then it enforces those boundaries through engineering.
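The observe-decide-act pipeline described above can be wired together in a few lines. All names, risk tags, and thresholds below are our own illustrative assumptions, not taken from AIR-ES or any of the cited standards.

```python
# Illustrative end-to-end wiring: an observability record feeds trigger
# logic, which selects a control action. Observe, decide, act.

def enforce(record: dict) -> str:
    """Turn one logged observation into a control action."""
    # 1. Observability (ISO 24970-style): the record is the evidence.
    risk = record.get("mapped_risk")          # 2. Risk map (NIST-style tag)
    # 3. Trigger logic: the AIR-ES decision point.
    if risk in {"prohibited_action", "runaway_loop"}:
        return "halt"                         # 4. Protocol: disengage
    if record.get("confidence", 1.0) < 0.8:
        return "handoff"                      # 4. Protocol: transfer
    return "continue"

print(enforce({"mapped_risk": "runaway_loop"}))            # → halt
print(enforce({"mapped_risk": None, "confidence": 0.5}))   # → handoff
print(enforce({"mapped_risk": None, "confidence": 0.99}))  # → continue
```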

How AIR-ES Completes the Stack

AIR-ES does not replace existing standards; it operationalises them.

Governance Layer | Operational Risk (AIRI V2.0) | AIR-ES Control
ISO 24970 (Observability) | 4.3 Lack of Non-Repudiation | Trusted Execution Environment (TEE) with Intermediate State Logging
ISO 8200 (Control Transfer) | 1.2 Infinite Loops, 2.1 Denial of Wallet | Semantic Kill-Switch (SKS) — wires the control point to automatic trigger logic
NIST AI RMF (Risk Mapping) | 4.4 Unauthorized Autonomy | Deterministic Step-Limiter (DSL) — mandatory iteration limits with human override
ISO 42001 (Management) | 4.6 Control Handoff Failure | HE-AIR™ governance framework — professional accountability structure
EU AI Act (Legal) | 4.5 Regulatory Non-Compliance | AIR-ES compliance mapping — technical implementation of legal requirements
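The Deterministic Step-Limiter row above can be sketched as a hard iteration cap that stops a runaway agent and escalates to a human. The limit value and exception name are our assumptions for illustration, not taken from the AIR-ES specification text.

```python
# Illustrative sketch of a Deterministic Step-Limiter (DSL): a hard,
# deterministic iteration budget with escalation on exhaustion.

class StepLimitExceeded(Exception):
    """Raised when an agent exhausts its iteration budget."""

def run_with_step_limit(agent_step, max_steps: int = 25) -> int:
    """Drive an agent step function under a hard iteration cap."""
    for step in range(1, max_steps + 1):
        if agent_step(step):          # agent reports it has finished
            return step               # finished within budget
    # Budget exhausted: stop and escalate rather than loop forever.
    raise StepLimitExceeded(f"agent exceeded {max_steps} steps; "
                            "human override required to continue")

# A toy agent that never terminates on its own (risk 1.2, Infinite Loops).
try:
    run_with_step_limit(lambda step: False, max_steps=5)
except StepLimitExceeded as exc:
    print(f"halted: {exc}")
```

The cap is deterministic by design: it fires at the same step count every time, so auditors can verify the bound rather than trust a heuristic.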

Part 3: Evidence from the Field

The AI Reliability Observer, AIRI's global incident database, documents real-world failures that illustrate why the enforcement layer is essential. Each layer of the governance stack has documented failure cases:

EU AI Act Gap: Air Canada Chatbot (PUB-001)

What happened: AI chatbot provided incorrect bereavement fare policy, leading to legal liability.

The lesson: Legal requirements existed but no technical control prevented hallucination. The law defined accountability; AIR-ES would have prevented the harm.

NIST AI RMF Gap: Zillow Algorithm Failure (PUB-004)

What happened: Home-buying algorithm systematically overpaid for properties, leading to $569M write-down.

The lesson: Risks were "mapped" in documentation but not prevented in architecture. The framework for analysis existed; the engineering controls did not.

ISO 42001 Gap: Amazon Recruitment AI (PUB-005)

What happened: AI recruitment tool systematically discriminated against female candidates.

The lesson: The process was documented and reviewed; the product was discriminatory. Management certification does not guarantee safe outputs.

ISO 24970 Gap: Samsung Data Leak (PUB-020)

What happened: Employees pasted proprietary source code into ChatGPT, exposing trade secrets.

The lesson: Insufficient logging of what data was being sent to external models. Observability requirements did not extend to the AI interaction boundary.

ISO 8200 Gap: Knight Capital (PUB-003)

What happened: Trading algorithm lost $440 million in 45 minutes due to runaway execution.

The lesson: Control transfer mechanisms existed but trigger logic was absent. The switch was there; no one pressed it in time because no system knew when to press it.

These cases demonstrate that compliance with individual frameworks does not guarantee operational reliability. The enforcement layer is essential.


Part 4: The Professional Pathway

Executives need to know who implements AIR-ES. The AI Reliability Institute has defined six foundational professional roles:

Role | Function | Reports To
AIR™-Specialist | First responder — applies safeguards within autonomous agents (Flow Engineering, Kill Switches) | AIR™-Engineer
AIR™-Engineer | System owner — operationalises reliability in production (MLOps, Observability, Root Cause Analysis) | AIR™-Architect
AIR™-Architect | Risk designer — system-level design of controls and architecture (Threat Modelling, Compliance Mapping) | AIR™-Manager
AIR™-Auditor (Lead) | Independent verifier — assurance and forensic audit (Log Forensics, Compliance Verification) | AIR™-Officer (dotted line)
AIR™-Manager | Head of execution — managerial oversight, integrated risk reporting, directing technical teams | AIR™-Officer
AIR™-Officer | Strategic risk lead — fiduciary accountability, board-level strategy, risk appetite setting | Board / CEO

This framework ensures clear accountability from code-level implementation to board-level governance. Your lawyers handle the EU AI Act. Your consultants handle ISO 42001. Your AIR™ professionals build the active safety architecture that makes the rest of the stack work.


Part 5: Cross-Sector Applicability

AIR-ES applies across your portfolio. The enforcement principles remain constant; the specific controls adapt to sector requirements:

Sector | Key Reliability Concerns | AIR-ES Application
Financial Services | AML/KYC compliance, fiduciary duty, market manipulation prevention | SKS for regulatory boundaries (hard constraints), TEE for comprehensive audit trails
Healthcare | Diagnostic accuracy, duty of care, patient safety, informed consent | HITL (human-in-the-loop) for high-stakes clinical decisions, DSL for treatment recommendation limits
Autonomous Vehicles | Pedestrian safety, traffic law compliance, edge case handling | HOTL (human-on-the-loop) with rapid override capability, SKS for prohibited manoeuvres
Critical Infrastructure | Grid stability, public safety, service continuity, national security | Hard constraints via SKS, comprehensive TEE logging, multi-layer human oversight
Content Moderation | Child safety, free expression balance, brand safety, legal compliance | HITL for edge cases, appeal mechanisms, full audit capability

Part 6: The Strategic Path Forward

For the modern executive, "Compliance" is no longer a checklist; it is an engineering challenge.

The Risk of Inaction

Relying solely on "Process Standards" (ISO 42001) leaves you with a well-documented disaster. You will have a perfect paper trail explaining why your agent hallucinated a lawsuit.

The AIR-ES Advantage

By adopting the AI Reliability Enforcement Standards alongside your ISO compliance, you move from Passive Observation to Active Defence.


Conclusion

Don't just record the risk. Engineer the reliability.

AIR-ES is the bridge between governance and engineering — the enforcement layer that transforms policy requirements into operational assurance.

Related Resources:
  • For foundational definitions of human-AI control relationships, see: "The 5 Pillars of Human-Machine Collaboration"
  • For the full technical specification, see: AI Reliability Enforcement Standards (AIR-ES) Specification
  • For real-world incident data, see: AI Reliability Observer
  • For the complete operational risk taxonomy, see: AIRI Risk Register V2.0

Audit Your AI Governance Stack

Don't just read about enforcement — execute it. Use our free 30-point reliability enforcement tool to test, benchmark and monitor your autonomous agents against AIR-ES requirements.


Bibliography

  • European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union.
  • National Institute of Standards and Technology (NIST). (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0) (NIST AI 100-1). U.S. Department of Commerce.
  • International Organization for Standardization (ISO). (2023). ISO/IEC 42001:2023 Information technology — Artificial intelligence — Management system.
  • International Organization for Standardization (ISO). (2025). ISO/IEC 24970 Information technology — Artificial intelligence — Post-Incident Observability and Forensics Requirements.
  • International Organization for Standardization (ISO). (2025). ISO/IEC TS 8200 Information technology — Human-Machine Control Handoff Protocols.
  • AI Reliability Institute (AIRI). (2026). AI Reliability Enforcement Standards (AIR-ES) Specification, Version 2.0. Brussels: AIRI Press.
  • AI Reliability Institute (AIRI). (2026). AIRI Risk Register V2.0: Taxonomy of Agentic Failures.
  • Messan, K. (2025). The 5 Pillars of Human-Machine Collaboration. AI Reliability Institute Research Series.

© 2026 AI Reliability Institute (AIRI). All rights reserved.
This document may be cited with attribution.