AI Governance · April 2026

AI Governance Engineering: The Anatomy of a New Engineering Discipline

AI governance is fragmented. This article maps AI Governance Engineering as a new discipline and proposes a four-layer competency model.

When Google Named SRE, System Administrators Already Existed

In 2003, Google took a step that reshaped the software industry. It did not invent a new technology; it named a discipline.

System administrators had been keeping servers running for decades. Operations teams had their own practices, tools, and tribal knowledge. But nobody had organized these practices into an engineering discipline with defined principles, career paths, and measurable outcomes.

Google called it Site Reliability Engineering (SRE). The core insight was simple but consequential: "Reliability is not an operations problem; it is an engineering problem." And engineering problems require engineering solutions — automation, error budgets, service level objectives, and code that replaces manual toil.¹

Today, AI governance is moving toward a similar point.

Organizations worldwide are building governance structures for their AI systems. Legal teams interpret regulations. Compliance officers write policies. Risk managers fill assessment forms. Data scientists monitor model performance. But these efforts remain fragmented across departmental silos, connected by spreadsheets and email chains rather than by engineering infrastructure.

The IAPP AI Governance Profession Report 2025 — based on survey data from over 670 professionals across 45 countries — identifies a critical shortage: while technical talent is abundant, there is a pronounced lack of professionals who can integrate governance, ethics, and compliance with the technical realities of AI deployment.² Thirty-three percent of respondents cited a lack of professional training and certification, and 31% pointed to a complete absence of qualified professionals as key barriers.²

This article argues that the missing piece is not more policies or more compliance officers. What is taking shape is the birth of a new engineering discipline: AI Governance Engineering.

A Fragmented Landscape

The components of this discipline already exist. They are scattered across different communities, each with its own terminology and priorities.

Policy and Management Layer. The most mature segment. Organizations like IAPP have created certification programs (AIGP), consulting firms offer compliance assessments, and law firms advise on regulatory interpretation. The EU AI Act, ISO/IEC 42001, and NIST AI RMF provide the regulatory scaffolding.³ By 2025, over 60% of Fortune 500 companies had established dedicated AI governance structures.² This layer answers the question: "What rules must we follow?"

Systems Engineering Layer. A smaller but growing community. EWSolutions, led by David Marco, PhD, uses the term "AI Governance Systems Engineering" to describe the practice of transforming governance principles into working architectures with clear inputs, outputs, controls, and feedback loops.⁴ This moves beyond policy documents toward process design, but remains primarily at the organizational and architectural level. This layer answers the question: "How should governance processes be designed?"

Technical Infrastructure Layer. The newest and fastest-growing segment. This is where governance meets code. In April 2026, Microsoft released the Agent Governance Toolkit — a seven-package open-source system providing sub-millisecond policy enforcement for AI agents, addressing all OWASP Agentic AI risks.⁵ In February 2026, Kyndryl announced a policy-as-code capability that translates organizational rules and regulatory requirements into machine-readable policies.⁶ Open Policy Agent (OPA) and its Rego policy language, already an industry standard for infrastructure governance, are being adapted for AI-specific enforcement.⁷ Freeman Jackson's November 2025 paper proposes a unified policy-as-code governance architecture for autonomous AI agents with multi-layer risk, compliance, and zero-trust control.⁸

This layer answers the question: "How do we embed governance directly into the systems we build?"

The critical observation is this: these three layers are not competing approaches — they are maturity levels of the same underlying need. Organizations start with policies (Layer 1), design processes around them (Layer 2), and eventually embed them into their technical infrastructure (Layer 3). But today, no defined discipline spans all three.

What Is AI Governance Engineering?

AI Governance Engineering: the engineering practice of translating regulatory requirements, ethical principles, and organizational policies into executable, testable, and auditable technical controls embedded directly into AI system lifecycles.

This definition is distinct from AI governance management (focused on policy creation and organizational oversight) and from AI governance systems engineering (focused on process architecture and organizational design). The difference between the three can best be understood through a concrete example.

Requirement: EU AI Act Article 9 mandates a risk management system for high-risk AI systems.

Governance Management approach: A compliance team writes a risk management policy document. They create an Excel template for risk assessments. Every quarter, someone manually checks completion. If incomplete, they send a reminder email.

Systems Engineering approach: A governance architect designs a risk management workflow. It defines roles (who performs the assessment), timing (when it must occur), and escalation paths (what happens when deadlines are missed). This is documented in a process map.

Governance Engineering approach: A governance engineer writes a policy-as-code rule — for example, in OPA's Rego language. This rule automatically evaluates whether a risk assessment exists and is current before a model can be deployed. If the assessment is missing, deployment is blocked at the pipeline level. Every decision is logged as an immutable governance event. Changes to the AI system automatically trigger re-evaluation of affected governance steps.
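The shape of such a rule can be sketched in a few lines. This is a minimal Python illustration of the check a deployment gate would encode; in practice the rule would typically be written in Rego and evaluated by OPA, and the `RiskAssessment` structure and 90-day currency window here are illustrative assumptions, not values taken from the regulation.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative assumption: an assessment is "current" if completed within
# the last 90 days. A real deployment would take this from organizational
# policy and enforce it via a Rego rule evaluated by OPA.
CURRENCY_WINDOW = timedelta(days=90)

@dataclass
class RiskAssessment:
    system_id: str
    completed_on: date

def deployment_allowed(system_id: str,
                       assessments: list[RiskAssessment],
                       today: date) -> tuple[bool, str]:
    """Gate a deployment on a current risk assessment.

    Returns (allowed, reason); the reason string is what gets written
    to the immutable governance event log.
    """
    matching = [a for a in assessments if a.system_id == system_id]
    if not matching:
        return False, "blocked: no risk assessment on record"
    latest = max(matching, key=lambda a: a.completed_on)
    if today - latest.completed_on > CURRENCY_WINDOW:
        return False, f"blocked: assessment stale ({latest.completed_on})"
    return True, f"allowed: assessment current ({latest.completed_on})"
```

The design point is that the gate returns a reason alongside the verdict: every decision, allow or block, produces an auditable event rather than a silent pass.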

A point worth emphasizing: the third approach does not eliminate the need for the first two. Policy documents still guide human decision-making. Process designs still coordinate teams. But the engineering layer ensures that critical controls are enforced consistently, automatically, and with full traceability — regardless of whether someone remembered to check the spreadsheet.

The Human Oversight Question: Where Does Automation End?

A legitimate question arises: does automating governance mean removing humans from governance decisions?

Both regulation and industry practice give a clear answer: no.

EU AI Act Article 14 explicitly requires that high-risk AI systems be designed so that "natural persons can effectively oversee their functioning."³ This is not optional — it is a legal requirement. Moody's research confirms that only approximately 5% of surveyed professionals are comfortable with fully autonomous AI systems.¹²

But "human oversight" does not mean "humans do everything manually." As the IAPP has noted, a human review process that is too shallow, too slow, or too disconnected from the underlying AI decision chain does not constitute meaningful oversight — and regulators are beginning to say so explicitly.¹³

AI Governance Engineering sits precisely at the design of this balance. Deciding what should be automated, where human judgment is indispensable, and how to design the transition points between the two is a core responsibility of the discipline:

Automate the deterministic: Risk classification checks, bias test result validation, deployment blocking for non-compliant systems, audit log generation. These are rule-based, repetitive, and error-prone when manual.

Inform and empower the human: When a model drift alert fires, the system should automatically assemble relevant context — what changed, what the potential impact is, what the regulatory implications are — and present it to a human decision-maker. The system informs; the human decides.

Design the transition points: Which decisions are fully automated, which require human approval, which are fully human? Defining these boundaries is one of the AI Governance Engineer's core tasks.

In short, AI Governance Engineering is not about handing governance over to automation. It is about strengthening the governance process with engineering principles. The rapid pace of technological advancement makes it imperative that we approach the AI governance process with greater technical rigor.

A Four-Layer Competency Model

What competencies does this discipline require? Based on an analysis of the current landscape, regulatory frameworks, and technical infrastructure, a four-layer model can be proposed.

Layer 1: Regulatory Engineering

The ability to translate legal and regulatory text into technical requirements.

This goes beyond reading the law. It requires decomposing regulatory obligations into implementable technical specifications. EU AI Act Article 9 does not merely say "do risk management." It specifies continuous iterative processes, identification and analysis of known and reasonably foreseeable risks, estimation of risks from intended use and reasonably foreseeable misuse, and evaluation of risks from post-market monitoring data.³

The competency lies in converting these legal concepts into engineering specifications: what data to collect, what thresholds to set, what monitoring signals to track. This requires fluency in both regulatory language and technical systems — the very combination that the IAPP report identifies as scarce.²
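What such a decomposition might look like can be sketched as structured data. The obligations below paraphrase Article 9; the signals and cadences are illustrative assumptions an organization would set for itself, not values the regulation prescribes.

```python
from dataclasses import dataclass

@dataclass
class ControlSpec:
    """One regulatory obligation decomposed into an engineering spec."""
    obligation: str        # the legal concept, paraphrased
    data_to_collect: str   # what evidence the control needs
    monitoring_signal: str # what alerts when the control degrades
    check_frequency: str   # illustrative cadence, an org policy choice

# Illustrative decomposition of EU AI Act Article 9 obligations.
ARTICLE_9_SPECS = [
    ControlSpec("continuous iterative risk management process",
                "risk register entries per system version",
                "open risks past their review date", "weekly"),
    ControlSpec("estimation of reasonably foreseeable misuse",
                "misuse scenarios with likelihood/impact scores",
                "scenarios without an assigned mitigation owner",
                "per release"),
    ControlSpec("evaluation of post-market monitoring data",
                "production incident and drift reports",
                "incidents unreviewed for more than 7 days", "daily"),
]
```

Once obligations live in a structure like this, each row can be wired to a concrete check in the pipeline rather than to a paragraph in a policy PDF.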

Layer 2: Technical Infrastructure

The ability to implement governance controls using policy-as-code, monitoring, and automation tools.

This is the hands-on engineering layer. An AI Governance Engineer at this level can write OPA/Rego policies for runtime governance rule enforcement, build ML observability pipelines using tools like Evidently AI, Langfuse, or AIF360, construct CI/CD governance gates that block non-compliant deployments, and implement immutable audit trails using event-driven architectures.⁷ ¹⁰ ¹¹

This does not mean the AI Governance Engineer must be a full-stack software engineer. But they must be technically proficient enough to configure policies, read rule sets, and integrate governance checkpoints into existing development pipelines.
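A CI/CD governance gate at this layer can be sketched generically. The check names below are hypothetical stand-ins for checks that would be backed by real tools (an OPA query, an Evidently report, a bias-test result file); the aggregation logic is the point.

```python
from typing import Callable

# Each check returns (passed, message). In a real pipeline these would
# wrap tool invocations; here they are injected as plain callables.
Check = Callable[[], tuple[bool, str]]

def governance_gate(checks: dict[str, Check]) -> tuple[bool, list[str]]:
    """Run every governance check; deployment proceeds only if all pass.

    All results are collected, not just the first failure, so the audit
    trail records the complete picture of the gate decision.
    """
    results = [(name, *check()) for name, check in checks.items()]
    log = [f"{name}: {'PASS' if ok else 'FAIL'} - {msg}"
           for name, ok, msg in results]
    return all(ok for _, ok, _ in results), log
```

A usage sketch: `governance_gate({"risk_assessment": lambda: (True, "current"), "bias_test": lambda: (False, "disparate impact above threshold")})` would return a blocking verdict together with a two-line log covering both checks.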

Layer 3: Operational Design

The ability to design governance workflows, incident response plans, and cross-functional coordination mechanisms.

An AI system that fails a bias test at 2 AM on a Saturday needs more than an automated alert. Someone must know what to do, who to notify, and when to escalate. The AI Governance Engineer designs these operational workflows — not as abstract process maps, but as runbooks integrated with the technical infrastructure.

An important distinction: the AI Governance Engineer does not replace the lawyer, the data scientist, or the security specialist. These experts continue their work within their domains. The AI Governance Engineer's role is orchestration — ensuring that a legal requirement flows seamlessly into a technical control, a technical control into a monitoring alert, and an alert into an incident response procedure. This is the person who connects the pieces and designs the transitions between them.
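A runbook integrated with infrastructure can start as simply as an escalation table that both the paging system and humans read from. The roles and timers below are illustrative assumptions an organization would define for itself.

```python
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# Illustrative escalation table: who is notified at each severity, and
# how long before an unacknowledged incident escalates. Keeping this as
# data means the paging system and the human-readable runbook cannot
# drift apart.
ESCALATION = {
    Severity.LOW:    {"notify": ["ml-oncall"],
                      "escalate_after_min": 240},
    Severity.MEDIUM: {"notify": ["ml-oncall", "governance-lead"],
                      "escalate_after_min": 60},
    Severity.HIGH:   {"notify": ["ml-oncall", "governance-lead", "legal"],
                      "escalate_after_min": 15},
}

def plan_for(severity: Severity) -> dict:
    """Return the runbook entry for an incident severity."""
    return ESCALATION[severity]
```

The 2 AM bias-test failure then has a defined path: the alert carries a severity, the severity resolves to a notification list and an escalation clock, and no one has to remember who to call.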

Layer 4: Strategic Foresight

The ability to anticipate the evolution of AI governance requirements and prepare today's systems for tomorrow.

AI governance is not a static field, and its evolution continues at pace. Today, most organizations operate with manual processes, consultant-driven assessments, and template-based documentation. But developments across the industry point toward governance being progressively embedded into software development pipelines (CI/CD), compliance-as-code approaches becoming widespread, and automated re-assessment on system changes becoming the norm.

Looking further ahead, as AI agents become more autonomous, governance mechanisms may need to incorporate more autonomous elements as well. Microsoft's Agent Governance Toolkit and Kyndryl's policy-governed workflows represent the first concrete steps in this direction.⁵ ⁶

The essence of this competency layer is ensuring that every governance control built today remains adaptable to tomorrow's changing requirements. Strategic foresight is not about making precise predictions; it is about embedding flexibility into the design.
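One concrete way to embed that flexibility is to keep governance thresholds in versioned configuration, separate from enforcement code, so a regulatory change updates data rather than logic. The version keys and threshold values below are invented for illustration.

```python
# Illustrative assumption: policy parameters are versioned by effective
# date, and old versions stay queryable so past gate decisions can be
# audited against the policy that was in force at the time.
POLICY_VERSIONS = {
    "2026-01": {"assessment_currency_days": 90, "bias_metric_max": 0.10},
    "2026-07": {"assessment_currency_days": 60, "bias_metric_max": 0.08},
}

def active_policy(as_of: str) -> dict:
    """Return the newest policy version not later than the given
    date key (ISO-ordered strings compare correctly)."""
    eligible = [v for v in sorted(POLICY_VERSIONS) if v <= as_of]
    if not eligible:
        raise ValueError("no policy version in force")
    return POLICY_VERSIONS[eligible[-1]]
```

When the requirement tightens, a new version is appended; enforcement code is untouched, and an auditor can still ask which thresholds applied to any historical deployment.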

What This Means in Practice

What we have described so far is a framework. What does it mean in practice today?

The honest answer: a professional profile that fully spans all four layers is not yet common. This is a natural consequence of the discipline being in its early formation. SRE was the same in its early years — Google's first SRE teams were composed of existing software engineers, and the discipline crystallized over time.

What is concrete today is this: the building blocks exist. OPA, Evidently, Langfuse, and Microsoft's Agent Governance Toolkit are production-grade, open-source tools.⁵ ⁷ ¹⁰ ¹¹ Regulatory frameworks (EU AI Act, ISO 42001) are defining technical requirements with increasing specificity.³ And most importantly, the talent gap that the IAPP report identifies continues to grow — the demand for professionals who can bridge governance and engineering is rising.²

Against this backdrop, this has direct implications for three audiences:

For organizations deploying AI systems: The question is not whether you need governance — regulation already mandates it. The question is whether your governance will be a manual burden that scales linearly with the number of AI systems, or an engineering capability that scales efficiently.

For governance professionals: Technical literacy — not deep coding skills, but the ability to understand policy-as-code tools, monitoring platforms, and CI/CD concepts — is becoming increasingly important for career progression in AI governance.

For educators and certification bodies: The four-layer competency model suggests a natural curriculum structure. Regulatory Engineering builds on existing legal education. Technical Infrastructure requires hands-on lab work. Operational Design draws on incident management traditions. Strategic Foresight integrates scenario planning and trend analysis. No such integrated curriculum exists today.

Conclusion: Naming the Discipline

The components of AI Governance Engineering are not new. Regulatory frameworks exist. Policy-as-code tools exist. ML monitoring platforms exist. Process design methodologies exist. What has been missing is the recognition that these components, taken together, constitute a coherent engineering practice.

Google did not invent server operations when they named SRE. They recognized that reliability was an engineering problem and organized a discipline around that insight.

A similar recognition is knocking at the door of the AI governance process: for AI systems operating at scale, under complex regulatory requirements, in high-stakes environments, governance is an engineering problem. And this problem calls for a holistic engineering approach rather than fragmented, piecemeal solutions.

---

References

1. Beyer, B., Jones, C., Petoff, J., & Murphy, N.R. Site Reliability Engineering: How Google Runs Production Systems. O'Reilly Media, 2016. sre.google

2. IAPP & Credo AI. AI Governance Profession Report 2025. International Association of Privacy Professionals. iapp.org

3. European Parliament and Council. Regulation (EU) 2024/1689 — Artificial Intelligence Act. EUR-Lex, 12 July 2024. eur-lex.europa.eu

4. EWSolutions. AI Governance Systems Engineering: The Executive Playbook for Responsible AI. ewsolutions.com

5. Microsoft. Introducing the Agent Governance Toolkit: Open-source runtime security for AI agents. Microsoft Open Source Blog, 2 April 2026. opensource.microsoft.com

6. Kyndryl. Kyndryl Introduces Policy-Governed Agentic AI. 11 February 2026. kyndryl.com

7. Open Policy Agent. Policy Language — Rego. openpolicyagent.org

8. Jackson, F. Governing Autonomous AI Agents with Policy-as-Code: A Multi-Layer Architecture for Risk, Compliance, and Zero-Trust Control. SSRN, November 2025. ssrn.com

9. NexaStack. Agent Governance at Scale: Policy-as-Code Approaches in Action. nexastack.ai

10. Evidently AI. Open-source ML and LLM observability. docs.evidentlyai.com

11. Langfuse. Open-source LLM engineering platform. langfuse.com

12. Moody's. Human in the Loop: Why Human Oversight Still Matters in AI-Driven Risk and Compliance. moodys.com

13. IAPP. 'Human in the Loop' in AI Risk Management — Not a Cure-All Approach. iapp.org

14. Axial Search. Market Analysis of 146 AI Governance Job Postings. axialsearch.com

---

Ozden Coskun is the founder of Hexis. He holds ISO/IEC 42001:2023 certification and is a member of the IAPP. Hexis develops the ORIENT methodology for AI governance assessment and builds governance tools at hexis.center.
