Built for the regulation.
Enforced by architecture.

Every component of AIGF maps directly to a specific obligation under Regulation (EU) 2024/1689. This is not post-hoc alignment. The framework was designed from the ground up around the requirements that matter most: risk management, human oversight, transparency, and immutable audit. The table below shows precisely which article each component addresses and how.

Regulation (EU) 2024/1689 — Official Journal, 13 June 2024
Articles Mapped: 14 Articles + 4 Supplementary Standards
Last Reviewed: March 2026
EU AI Act — Regulation (EU) 2024/1689

Article by article.
Obligation by obligation.

The mapping below covers the primary articles of the EU AI Act that apply to deployers and providers of AI systems in regulated environments. Each article is cited to the Official Journal version of 13 June 2024. The AIGF implementation column shows the specific architectural component that addresses each obligation and how. Where AIGF exceeds the article requirement, that is noted explicitly.

EXCEEDS — AIGF goes beyond the article requirement
ENFORCED — Requirement addressed at the architecture level
Article | EU AI Act Requirement | AIGF Implementation | Alignment
Chapter III, Section 2 — Requirements for High-Risk AI Systems
Art. 9 — Risk Management System

A continuous, iterative risk management system must be established, implemented, documented, and maintained throughout the AI system lifecycle. It must identify, analyse, estimate, evaluate, and manage risks to health, safety, and fundamental rights.

Gate 3 — Quality Firewall
OIS — Operational Integrity Standard
DCG — Decision Continuity Gate

Gate 3 applies a deterministic composite risk score across reputational, operational, and contextual dimensions before any output can proceed. The hard stop at Gate 3 cannot be overridden programmatically under any circumstance. OIS monitors framework-level integrity continuously across five dimensions. DCG extends risk monitoring through the full execution window, detecting and responding to condition changes after the decision point.
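As an illustration, the hard-stop logic described above can be sketched in a few lines. This is a hypothetical sketch only: the dimension weights and the 0.70 hard-stop threshold are assumptions for the example, not AIGF's certified configuration.

```python
# Illustrative sketch: weights and the 0.70 threshold are assumed values,
# not the certified AIGF configuration.

HARD_STOP_THRESHOLD = 0.70  # assumed for illustration

# Fixed weights keep the composite score deterministic: identical inputs
# always produce an identical score and an identical gate result.
WEIGHTS = {"reputational": 0.40, "operational": 0.35, "contextual": 0.25}

def gate3_composite(scores: dict[str, float]) -> tuple[float, str]:
    """Weighted composite over the three risk dimensions (each 0.0-1.0)."""
    composite = sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)
    # The hard stop is evaluated inline and the function exposes no
    # override parameter, so no caller can bypass it programmatically.
    if composite >= HARD_STOP_THRESHOLD:
        return composite, "HARD_STOP"
    return composite, "PROCEED"
```

Because the weights and threshold are fixed configuration rather than runtime arguments, rerunning the same inputs can never flip a HARD_STOP to a PROCEED.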

EXCEEDS
Art. 10 — Data and Data Governance

Training, validation, and testing datasets must meet quality criteria. Data governance practices must ensure datasets are relevant, representative, accurate, and appropriately managed. Known limitations and biases must be identified and addressed.

Gate 1 — Signal Extraction (MSE)
Gate 2 — Contextual Alignment (CIE)

Gate 1 assesses input quality across Signal Strength, Relevance, and Timeliness with defined minimum thresholds. Inputs failing any dimension are rejected before entering the decision protocol with a SIGNAL_INSUFFICIENT reason code. Gate 2 assesses contextual alignment, ensuring inputs are appropriate for the deployment context. Both gates enforce data quality standards at the point of use, not only at model training level.
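The rejection-before-entry behaviour can be sketched as follows. The minimum thresholds and the scoring of each dimension are assumptions for the example; only the three dimension names and the SIGNAL_INSUFFICIENT reason code come from the framework description itself.

```python
# Hypothetical sketch: threshold values are illustrative assumptions.
MIN_THRESHOLDS = {"signal_strength": 0.6, "relevance": 0.5, "timeliness": 0.5}

def gate1_extract(signal: dict[str, float]) -> dict:
    """Reject an input that fails any dimension before it enters the protocol."""
    failed = [d for d, floor in MIN_THRESHOLDS.items()
              if signal.get(d, 0.0) < floor]
    if failed:
        # A failing input never reaches the decision protocol; the reason
        # code and the failing dimensions travel with the rejection.
        return {"accepted": False,
                "reason_code": "SIGNAL_INSUFFICIENT",
                "failed_dimensions": failed}
    return {"accepted": True, "reason_code": None, "failed_dimensions": []}
```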

ENFORCED
Art. 11 — Technical Documentation

Technical documentation must be drawn up before the high-risk AI system is placed on the market or put into service. It must contain information demonstrating system compliance and must be kept up to date throughout the system lifecycle.

OIS — Version Control Integrity (VCI)
Gate 5 — Framework Version in Record

The VCI dimension of OIS monitors that all models, prompts, contracts, schemas, and calibration baselines are current, versioned, and consistent with the certified configuration. Every change to any governed component is logged. The active framework version is captured in every Gate 5 Decision Record, enabling complete technical reconstruction of the governance state at any point in time on request from a competent authority.

ENFORCED
Art. 12 — Record-Keeping

High-risk AI systems must have the capability to automatically record events throughout their operational lifetime. Logging must enable verification of compliance and must capture sufficient information to reconstruct the circumstances of any decision or output.

Gate 5 — Immutable Decision Record
DCG — Execution Continuity Log
PostgreSQL — Append-Only, PITR Enabled

Gate 5 produces a cryptographically signed, append-only Decision Record for every governed decision. The record captures all gate scoring, threshold values applied, reviewer identity, confirmation rationale, timestamp, and a SHA-256 hash of the complete record. PostgreSQL enforces immutability at the database role level: UPDATE and DELETE permissions are revoked on the application role. Point-in-Time Recovery is enabled. The Execution Continuity Log extends the record through execution completion, providing end-to-end traceability from decision to action.
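The integrity property of the SHA-256 record hash can be sketched as below. This is not the production signing scheme: the field names are illustrative, and canonical JSON with sorted keys stands in for whatever serialisation the certified implementation uses.

```python
import hashlib
import json

def hash_record(record: dict) -> str:
    """SHA-256 over a canonical serialisation of a Decision Record."""
    # sort_keys + fixed separators give one byte-stable representation,
    # so the hash depends only on the record's content.
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify_record(record: dict, stored_hash: str) -> bool:
    """Any post-hoc mutation of the record changes the recomputed hash."""
    return hash_record(record) == stored_hash
```

Combined with database-level revocation of UPDATE and DELETE on the application role, a mismatch between stored hash and recomputed hash is detectable evidence of tampering.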

EXCEEDS
Art. 13 — Transparency and Provision of Information to Deployers

High-risk AI systems must be designed and developed to ensure sufficient transparency to enable deployers to interpret outputs and use them appropriately. Instructions must include system characteristics, capabilities, limitations, and human oversight measures.

Gate 5 — Scored Output with Full Rationale
MCP — aigf_get_decision Tool

Every AIGF output includes the complete scoring rationale across all gate dimensions, the specific threshold values applied, reason codes for any gate failures, and the full evidence basis for the ACT, WAIT, or REFRESH determination. Outputs are designed for interpretation by a qualified human reviewer. No output is presented as a black-box result. The aigf_get_decision MCP tool provides the complete Decision Record on demand, including for regulatory examination.

ENFORCED
Art. 14 — Human Oversight

High-risk AI systems must be designed to allow effective human oversight during operation. Humans must be able to monitor, interpret, intervene, and override the system. The design must address automation bias. No consequential decision may proceed without appropriate human confirmation where required.

Gate 4 — Human Intelligence Layer
HOIS — Human Oversight Integrity Standard
ISVP — Intermittent State Verification

Gate 4 is mandatory and non-skippable. No output proceeds to Gate 5 without explicit human confirmation. Confirmation cannot be automated: the aigf_confirm_review tool enforces human_attestation:true and detects automation patterns including CONFIRM_TOO_FAST and AUTOMATION_PATTERN signals. HOIS addresses automation bias directly through its three-state reviewer classification model, detecting when reviewers drift from genuine critical engagement to habituated confirmation. ISVP provides empirical verification of reviewer governance capability through calibration scenarios with predetermined correct outputs.
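The automation-detection checks can be sketched as follows. The 3-second floor, the interval-spread heuristic, and the ATTESTATION_MISSING label are assumptions for the example; human_attestation, CONFIRM_TOO_FAST, and AUTOMATION_PATTERN are the framework's own terms, but this logic is illustrative, not the shipped implementation.

```python
# Hypothetical sketch: thresholds and heuristics are illustrative assumptions.
MIN_REVIEW_SECONDS = 3.0  # assumed minimum plausible human review time

def confirm_review(human_attestation: bool, review_seconds: float,
                   recent_intervals: list[float]) -> dict:
    """Accept a confirmation only if it looks like genuine human review."""
    if not human_attestation:
        # Confirmation without attestation is rejected outright.
        return {"confirmed": False, "signals": ["ATTESTATION_MISSING"]}
    signals = []
    if review_seconds < MIN_REVIEW_SECONDS:
        signals.append("CONFIRM_TOO_FAST")
    # Near-identical intervals between recent confirmations suggest a
    # scripted reviewer rather than case-by-case engagement.
    if len(recent_intervals) >= 3 and \
            max(recent_intervals) - min(recent_intervals) < 0.25:
        signals.append("AUTOMATION_PATTERN")
    return {"confirmed": not signals, "signals": signals}
```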

EXCEEDS
Art. 15 — Accuracy, Robustness and Cybersecurity

High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity. They must be resilient against adversarial inputs. They must perform consistently and predictably throughout their operational lifetime.

Gate 3 — Deterministic Scoring Engine
OIS — Decision Quality Integrity (DQI)
Auth — SHA-256 API Key Enforcement

AIGF produces fully deterministic outputs: identical inputs against identical threshold configurations always produce identical gate results. Probabilistic interpretation is eliminated at the architecture level. The DQI dimension of OIS monitors decision quality consistency across volume and time, detecting drift against the established calibration baseline. Authentication is enforced through SHA-256 hashed API keys on all requests. The Simulation Engine provides ongoing accuracy verification through calibration scenario comparison.

ENFORCED
Chapter III, Section 3 — Obligations of Providers and Deployers
Art. 17 — Quality Management System

Providers must put in place a quality management system ensuring compliance throughout the AI system lifecycle. The system must cover risk management, testing, post-market monitoring, corrective actions, and version management.

OIS — Five Integrity Dimensions
OIS — SUSTAINED / DEGRADED / COMPROMISED / SUSPENDED

OIS operationalises the quality management system requirement as a continuously monitored architecture across five dimensions: Decision Quality Integrity, Human Oversight Integrity, Execution Environment Integrity, Regulatory Alignment Integrity, and Version Control Integrity. The lowest dimension defines the overall framework posture. SUSPENDED posture halts all active decision cycles until integrity is restored. Monthly regulatory alignment reviews and triggered reviews on significant regulatory publications keep the RAI dimension current. This constitutes a live quality management system rather than a periodic review process.
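The "lowest dimension defines the posture" rule can be sketched as a worst-of aggregation. The numeric bands mapping a dimension score to a posture are assumptions for the example; the dimension abbreviations and the four posture labels are the framework's own.

```python
# Worst posture first, best last; overall posture is the worst of the five.
POSTURES = ["SUSPENDED", "COMPROMISED", "DEGRADED", "SUSTAINED"]

def dimension_posture(score: float) -> str:
    """Map a 0.0-1.0 integrity score to a posture (bands are assumed)."""
    if score >= 0.9:
        return "SUSTAINED"
    if score >= 0.7:
        return "DEGRADED"
    if score >= 0.5:
        return "COMPROMISED"
    return "SUSPENDED"

def framework_posture(dimensions: dict[str, float]) -> str:
    """A single weak dimension drags the whole framework posture down."""
    postures = [dimension_posture(s) for s in dimensions.values()]
    return min(postures, key=POSTURES.index)
```

Under this rule a framework with four strong dimensions and one degraded one reports DEGRADED overall, which is what forces remediation rather than averaging the problem away.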

EXCEEDS
Art. 19 — Automatically Generated Logs

Providers must ensure that high-risk AI systems automatically generate logs of events relevant to identifying risks, ensuring compliance, and enabling post-market monitoring. Logs must be kept for the required periods and available to competent authorities on request.

Gate 5 — Automatic Decision Record Generation
MCP Server — Call Logging on Every Request
PostgreSQL — Append-Only, PITR Enabled

Every MCP request is logged automatically on receipt with no manual intervention required. Gate 5 generates an immutable Decision Record automatically for every governed decision. PostgreSQL enforces append-only logging at the database role level. Point-in-Time Recovery is enabled for the full retention period. The complete audit trail is available to competent authorities on request with no reconstruction required: every event is logged as it occurred, in sequence, with cryptographic integrity protection.

ENFORCED
Art. 26 — Obligations of Deployers of High-Risk AI Systems

Deployers must use the AI system in accordance with instructions, assign human oversight to competent individuals, monitor operation, suspend use where necessary, retain logs for a minimum of six months, inform affected persons of AI use, and cooperate with competent authorities.

Gate 4 — Assigned Human Reviewer (Structural)
DCG — SUSPEND and VOID Outputs
Gate 5 — Minimum Six-Month Retention
HOIS — Reviewer Competence Monitoring

AIGF enforces all primary deployer obligations architecturally. Human oversight is assigned structurally at Gate 4 and cannot be bypassed. The DCG SUSPEND and VOID outputs provide the mechanism to halt use when conditions change materially, without requiring manual intervention. Decision Records are retained with full provenance for the required minimum period. HOIS monitors reviewer competence continuously across three states and intervenes when oversight quality degrades, ensuring the assigned individual remains a genuine oversight resource and not a procedural formality.

EXCEEDS
Art. 27 — Fundamental Rights Impact Assessment

Deployers that are bodies governed by public law or private entities providing public services must conduct a fundamental rights impact assessment before deploying a high-risk AI system. The assessment must be registered in the EU AI database where applicable.

Gate 2 — Value Alignment Dimension
Gate 3 — Operational Risk Dimension

AIGF's Gate 2 Value Alignment dimension assesses contextual fit against defined organisational values and risk tolerance on a per-decision basis, providing a continuous operational version of the rights impact assessment at decision level. Gate 3's operational risk dimension captures exposure to individuals affected by each decision. The AIGF Readiness Assessment provides the structured documentation baseline that a formal FRIA requires, establishing the risk classification and governance architecture evidence needed for registration.

ENFORCED
Chapter IV — Transparency Obligations for Certain AI Systems
Art. 50 — Transparency Obligations

Deployers of AI systems interacting with natural persons must inform those persons they are interacting with an AI system, unless this is evident from context. AI-generated content must be marked as such.

Gate 5 — AI Origin Flag (Mandatory Field)
Decision Record — Governance Output Disclosure

Every AIGF Decision Record carries the AI origin of the assessed output as a mandatory, non-removable field. The ACT, WAIT, or REFRESH output explicitly represents a governed AI-assisted determination, ensuring transparency is built into the decision artefact itself and cannot be stripped from it. The disclosure is structural rather than optional, satisfying the obligation at the architecture level.

ENFORCED
Chapter VII — Post-Market Monitoring, Information Sharing and Market Surveillance
Art. 72 — Post-Market Monitoring

Providers must establish and document a post-market monitoring system that actively and systematically collects, documents, and analyses data on high-risk AI systems after deployment. The system must identify corrective actions where necessary.

OIS — Decision Quality Integrity (DQI)
ISVP — Calibration Baseline Monitoring
DCG — Real-Time Execution Monitoring

AIGF's OIS DQI dimension provides continuous post-deployment monitoring of decision quality against the established calibration baseline. ISVP system-level calibration runs detect model drift through fixed reference points that the system cannot learn from, triggering OIS alerts when outputs deviate from baseline. DCG monitors each individual decision through the execution window in real time, flagging contextual drift and risk threshold breaches as they occur. Together these constitute an active, systematic post-market monitoring system rather than a periodic review process.

EXCEEDS
Chapter XII — Penalties
Art. 99 — Penalties

Infringements of prohibited practices: up to €35,000,000 or 7% of global annual turnover. Non-compliance with high-risk AI obligations: up to €15,000,000 or 3% of global annual turnover. Incorrect information to authorities: up to €7,500,000 or 1% of global annual turnover. Whichever is higher applies.

Complete Audit Trail — Available on Request
Cryptographic Integrity — SHA-256 Record Hash
Framework Version Record — Full Reconstruction

An organisation with AIGF deployed can demonstrate compliance to a competent authority on request through the complete, cryptographically signed Decision Record for any governed decision. The immutable audit trail makes reconstruction of any governance event straightforward and immediate. No manual compilation is required. The OIS framework posture at the time of each decision is captured in the record, providing the complete governance context that Article 99 examination will require. Organisations that can demonstrate this level of governance evidence are substantially better positioned in any enforcement interaction.

ENFORCED

All article references are to Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 (Official Journal of the European Union, L series). This mapping reflects the AIGF architecture as of March 2026. It is provided for informational purposes only and does not constitute legal advice. The Digital Omnibus on AI proposals (COM(2025) 836) are in trilogue as of March 2026 and do not alter the article obligations mapped above. Organisations should seek qualified legal advice specific to their circumstances before making compliance decisions.

Supplementary Standards

Beyond the EU AI Act.

AIGF's architecture also addresses the primary obligations of the following international and national frameworks. Organisations operating across multiple jurisdictions can satisfy the common core requirements of these frameworks through a single governance architecture deployment.

International Standard

ISO/IEC 42001:2023

AI Management Systems. The first international standard for establishing, implementing, maintaining, and continually improving an AI management system within an organisation.

Clause 6.1 Actions to address risks → Gate 3, OIS
Clause 8.1 Operational planning → Five Gate Protocol
Clause 9.1 Monitoring and measurement → HOIS, OIS, ISVP
Clause 10.1 Continual improvement → DCG, OIS RAI dimension

US Federal Framework

NIST AI RMF 1.0 (2023)

AI Risk Management Framework. The primary US voluntary governance framework, widely referenced in procurement requirements and increasingly required for AI-related insurance underwriting.

GOVERN (policies and accountability) → Gate 4, HOIS
MAP (risk identification) → Gates 1, 2, 3
MEASURE (risk analysis) → Gate 3, ISVP
MANAGE (risk response) → DCG, OIS

UK Regulatory Guidance

PRA SS1/23 Model Risk

Model Risk Management Principles for Banks, issued by the Prudential Regulation Authority. Applies to AI used in credit, risk, and investment decisions in UK financial services. Active from May 2024.

Model identification and inventory → Gate 5 Records
Model risk assessment → Gate 3 Scoring
Model approval → Gate 4 Confirmation
Ongoing monitoring → OIS, HOIS, ISVP

EU Data Protection

GDPR Article 22

Right not to be subject to purely automated decisions producing legal or similarly significant effects. Requires meaningful human involvement and the ability to contest decisions.

Meaningful human involvement → Gate 4 (structural, non-skippable)
Right to contest → REVIEWER_OVERRIDE mechanism
Explanation of decision → Full rationale in Gate 5 Record
Human confirmation → human_attestation:true enforced

Governance is not what you publish.
It is what your system can refuse to do.
AIGF does not merely document compliance with these articles. It enforces it.

Understand your current exposure.

The AIGF Readiness Assessment establishes your current governance posture across all five pillars in 2 to 3 days. It produces a documented baseline, a critical risk register, and a prioritised 90-day remediation roadmap. It is the first step toward the architecture this mapping describes.

Request a Readiness Assessment →