Practitioner Track · Module 5

AI Governance in Practice & Escalation

Learn to navigate AI governance structures, document use cases for approval, and know when and how to escalate issues.

25 min
175 XP
Jan 2026
Learning Objectives
  • Understand your organization's AI governance structure and key roles
  • Learn how to document an AI use case for approval (intake forms, required information)
  • Identify situations that require escalation to governance bodies
  • Know the procedure for reporting AI incidents or concerns

Why Governance Matters

By 2026, AI governance isn't a “responsible AI” side-quest—it's a prerequisite for scaling. Most enterprises now report regular AI use, and many are already experimenting with AI agents, but the hard part is still moving from pilots to enterprise impact.

That's why governance has moved up the org chart. McKinsey's global survey work has found CEO oversight of AI governance is among the factors most correlated with bottom-line impact, and organizations are increasingly “elevating governance” while trying to mitigate a growing set of gen-AI risks like inaccuracy, cybersecurity, and IP infringement. At the same time, readiness remains uneven—Deloitte reported that only 25% of leaders felt “highly” or “very highly” prepared to address GenAI governance and risk issues.

Regulation also turned governance from “best practice” into “table stakes.” The EU AI Act entered into force in August 2024 and includes phased obligations—such as AI literacy requirements (February 2025), GPAI governance obligations (August 2025), and broad applicability in August 2026 (with some extended timelines).

NIST's AI Risk Management Framework puts it plainly: governance means making sure “roles and responsibilities and lines of communication” for managing AI risk are documented and clear across the organization. Effective governance isn't bureaucracy—it's how you scale AI responsibly, protect customers and the business, and keep momentum when something goes wrong.

What Governance Provides

| Without Governance | With Governance |
| --- | --- |
| No visibility into AI deployed across the org | Central registry of all AI use cases |
| Inconsistent practices and risk exposure | Standardized processes and controls |
| Reactive compliance (after problems occur) | Proactive risk management |
| Unclear accountability when issues arise | Defined owners and escalation paths |

Workplace Scenario: Customer Email AI

You are planning to deploy a new AI tool that analyzes incoming customer emails and recommends actions—categorizing complaints, routing urgent issues, and suggesting response templates.

Part 1: Approval

Before deployment, you need to prepare an AI Use Case Intake Form for governance review. This documents what the AI does, what data it uses, and what risks exist.

Part 2: Incident

After deployment, the AI miscategorizes several complaint emails as "routine inquiries," causing delayed responses to frustrated customers. One customer escalates publicly on social media. You need to handle this incident appropriately.


AI Governance Structure

Organizations commonly choose one of three governance models:

| Model | Description | Best For |
| --- | --- | --- |
| Centralized | Dedicated AI governance office reviews all initiatives | Regulated industries, early AI adoption |
| Federated | Governance embedded in business units with central coordination | Large, diverse organizations with mature AI |
| Hybrid | Central team sets standards; business units execute within guardrails | Most medium-to-large enterprises |

Key Governance Roles

| Role | Responsibility |
| --- | --- |
| AI Ethics Officer | Sets principles, reviews high-risk cases, handles escalations |
| AI Risk Manager | Assesses and monitors risk across AI portfolio |
| AI Product Owner | Defines business requirements and owns value delivery |
| Technical Lead | Ensures architecture meets standards and security requirements |
| AI Champion | Drives adoption, surfaces concerns, bridges business and governance |

The Champion's role: You're the connector between your team and governance. You help ensure use cases get properly documented and reviewed, and you raise concerns before they become incidents.

Knowledge Check

Test your understanding with a quick quiz


Form Practice: AI Use Case Intake Form

Complete this intake form for the Customer Email AI scenario:

AI Use Case Intake Form

Fill out each section of this intake form as if you were submitting the Customer Email AI for governance review. Use the hints and examples for guidance.

Scenario: You are deploying an AI tool that analyzes incoming customer emails to categorize complaints, route urgent issues, and suggest response templates. Your customer service team of 15 agents will use this tool daily.
| Harm Type | Specific Risk |
| --- | --- |

| Risk | Mitigation |
| --- | --- |

Reflection Exercise

Apply what you've learned with a written response


Decision Tree: When to Escalate

Use this flowchart to determine when escalation is required:

Step 1: Assess the Risk Level

Does the AI involve any of the following?

  • Personal identifiable information (PII)
  • Decisions affecting health, safety, or legal rights
  • Customer-facing interactions
  • Financial transactions or recommendations

If YES to any: Proceed to Step 2. If NO to all: Standard approval process may suffice.

Step 2: Check for Novel Concerns

Is there something unusual about this use case?

  • First time using this AI capability
  • No precedent or playbook exists
  • Cross-functional conflict about approach
  • Uncertainty about appropriate controls

If YES: Escalate to your governance contact for guidance.

Step 3: Evaluate Incident Severity

If an issue has occurred, how severe is it?

| Severity | Criteria | Escalation Level |
| --- | --- | --- |
| Low | Minor error, quickly corrected, no external impact | Team lead |
| Medium | Customer impact, policy question, repeated issue | Department manager + governance contact |
| High | Public exposure, regulatory concern, significant harm | Senior leadership + legal + communications |
| Critical | Safety risk, major compliance breach, media involvement | Executive team + board notification |

Step 4: Document and Communicate

When escalating, include:

  1. Clear statement of the issue and urgency level
  2. Relevant context (what happened, when, who's affected)
  3. Options you've considered
  4. Your recommended path forward
  5. Decision needed and timeline
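
The four steps above can be condensed into a small triage sketch. The function names, flags, and severity labels here are illustrative assumptions for practice, not an official procedure:

```python
# Illustrative sketch of the escalation decision tree above.
# Names, flags, and severity labels are assumptions, not an official API.

SEVERITY_ESCALATION = {
    "low": "Team lead",
    "medium": "Department manager + governance contact",
    "high": "Senior leadership + legal + communications",
    "critical": "Executive team + board notification",
}

def needs_escalation(uses_pii: bool, affects_rights: bool,
                     customer_facing: bool, financial: bool,
                     novel_concern: bool) -> bool:
    """Steps 1-2: a risk trigger combined with a novel concern
    means escalating to your governance contact for guidance."""
    risk_trigger = any([uses_pii, affects_rights, customer_facing, financial])
    return risk_trigger and novel_concern

def escalation_level(severity: str) -> str:
    """Step 3: map incident severity to who gets notified."""
    return SEVERITY_ESCALATION[severity.lower()]
```

For the Customer Email AI scenario (customer-facing, first use of this capability), `needs_escalation` returns True, and an incident with public exposure maps via `escalation_level("high")` to senior leadership, legal, and communications.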

Knowledge Check

Test your understanding with a quick quiz


Role-Play: Reporting the Incident

The Customer Email AI has miscategorized complaint emails, and a customer has posted publicly about delayed responses. Practice escalating this issue.

Scenario Details

  • What happened: AI categorized 12 urgent complaints as "routine inquiries" over 3 days
  • Impact: Average response time for these customers was 72 hours vs. target of 24 hours
  • Visibility: One customer posted on LinkedIn mentioning the company by name
  • Root cause: Unknown—could be model drift, edge case, or data issue

Your Escalation Communication

Draft a brief escalation message (3-5 sentences) that includes:

  • What happened
  • Who is affected
  • Severity level
  • Your recommended immediate action
  • What decision you need

Example:

"Our customer email AI has been miscategorizing urgent complaints as routine inquiries—12 cases over the past 3 days, with one customer posting publicly about delayed response. I recommend we immediately pause AI categorization for the 'urgent' threshold and revert to manual triage while we investigate. I've notified the customer service manager. Need decision: Should we issue proactive outreach to the affected 12 customers? Please advise by EOD."
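
One lightweight way to make sure every required element appears in your message is a fill-in template. This is a sketch; the field names are hypothetical:

```python
# Sketch of an escalation-message template covering the five required
# elements listed above. Field names are illustrative, not a mandated format.
ESCALATION_TEMPLATE = (
    "What happened: {what}\n"
    "Who is affected: {who}\n"
    "Severity: {severity}\n"
    "Recommended immediate action: {action}\n"
    "Decision needed by {deadline}: {decision}"
)

message = ESCALATION_TEMPLATE.format(
    what="AI miscategorized 12 urgent complaints as routine over 3 days",
    who="12 customers; one posted publicly on LinkedIn",
    severity="High",
    action="Pause AI categorization and revert to manual triage",
    deadline="EOD",
    decision="Approve proactive outreach to affected customers",
)
```

Filling the template and then rewriting it as flowing prose, as in the example above, keeps the message brief without dropping an element.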

Reflection Exercise

Apply what you've learned with a written response


Governance Documentation Requirements

Essential Documents for Each AI Use Case

  1. Intake Form – Describes the use case, data, risks, and mitigations
  2. Approval Record – Documents who approved, when, and with what conditions
  3. Monitoring Plan – Defines metrics to track and review frequency
  4. Incident Log – Records any issues and how they were resolved

Maintaining Your AI Registry

UK ICO guidance on explaining AI-assisted decisions highlights the need for clear internal policies, responsibilities, and reporting lines for explainability and oversight. An AI registry provides this visibility:

| Field | Purpose |
| --- | --- |
| Use Case Name | Identify the AI system |
| Owner | Who is accountable |
| Risk Level | Enables prioritized oversight |
| Status | Active, paused, retired |
| Last Review Date | Ensures ongoing attention |
| Incidents | Tracks issues for patterns |
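
A registry entry for the Customer Email AI might look like the sketch below. The field names mirror the table above but are assumptions for this example, not a mandated schema:

```python
# Illustrative sketch of one AI registry entry matching the fields above.
# Field names and values are assumptions, not a mandated schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RegistryEntry:
    use_case_name: str        # identify the AI system
    owner: str                # who is accountable
    risk_level: str           # enables prioritized oversight
    status: str               # "active", "paused", or "retired"
    last_review_date: date    # ensures ongoing attention
    incidents: list[str] = field(default_factory=list)  # tracks issues for patterns

entry = RegistryEntry(
    use_case_name="Customer Email AI",
    owner="Customer Service Lead",
    risk_level="Medium",
    status="active",
    last_review_date=date(2026, 1, 15),
    incidents=["Miscategorized 12 urgent complaints as routine"],
)
```

Whether the registry lives in a spreadsheet, a database, or a GRC tool matters less than keeping every field current: an entry with a stale review date or an empty owner field is exactly the visibility gap a registry exists to close.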

Completion: Governance Artifacts

To complete this module, submit two artifacts:

Artifact 1: AI Use Case Intake Form

Complete the intake form for a real or hypothetical AI use case in your context. Include:

  • Description and data used
  • At least 3 potential harms
  • Risk level with justification
  • Mitigations for each harm
  • Human oversight approach

Artifact 2: Escalation Memo

Write an escalation memo for a hypothetical AI incident. Include:

  • Clear description of the issue
  • Impact and affected parties
  • Severity level
  • Recommended immediate action
  • Decision requested

Assessment Criteria:

| Criterion | Intake Form | Escalation Memo |
| --- | --- | --- |
| Completeness | All sections addressed | All required elements included |
| Specificity | Concrete details, not generic | Clear, actionable information |
| Risk Awareness | Thoughtful harm identification | Appropriate severity assessment |
| Quality | Mitigations are realistic | Communication is professional |

Practical Exercise

Complete an artifact to demonstrate your skills


Key Takeaways

  • Governance enables responsible AI at scale—it's not bureaucracy, it's protection
  • Document use cases properly: intake forms create accountability and enable oversight
  • Know when to escalate: risk level, novel concerns, and incident severity all matter
  • Champion role: You're the connector between your team and governance structures
  • Inaccuracy remains companies' most-cited gen-AI risk


Next Steps

In the next module, we'll explore AI Operating Models—understanding how to organize AI capabilities (centralized, federated, or hybrid) and how Champions work within each structure.