Practitioner Track · Module 5
AI Governance in Practice & Escalation
Learn to navigate AI governance structures, document use cases for approval, and know when and how to escalate issues.
- Understand your organization's AI governance structure and key roles
- Learn how to document an AI use case for approval (intake forms, required information)
- Identify situations that require escalation to governance bodies
- Know the procedure for reporting AI incidents or concerns
Why Governance Matters
By 2026, AI governance isn't a “responsible AI” side-quest—it's a prerequisite for scaling. Most enterprises now report regular AI use, and many are already experimenting with AI agents, but the hard part is still moving from pilots to enterprise impact.
That's why governance has moved up the org chart. McKinsey's global survey work has found CEO oversight of AI governance is among the factors most correlated with bottom-line impact, and organizations are increasingly “elevating governance” while trying to mitigate a growing set of gen-AI risks like inaccuracy, cybersecurity, and IP infringement. At the same time, readiness remains uneven—Deloitte reported that only 25% of leaders felt “highly” or “very highly” prepared to address GenAI governance and risk issues.
Regulation also turned governance from “best practice” into “table stakes.” The EU AI Act entered into force in August 2024 and includes phased obligations—such as AI literacy requirements (February 2025), GPAI governance obligations (August 2025), and broad applicability in August 2026 (with some extended timelines).
NIST's AI Risk Management Framework puts it plainly: governance means making sure “roles and responsibilities and lines of communication” for managing AI risk are documented and clear across the organization. Effective governance isn't bureaucracy—it's how you scale AI responsibly, protect customers and the business, and keep momentum when something goes wrong.
What Governance Provides
| Without Governance | With Governance |
|---|---|
| No visibility into AI deployed across the org | Central registry of all AI use cases |
| Inconsistent practices and risk exposure | Standardized processes and controls |
| Reactive compliance (after problems occur) | Proactive risk management |
| Unclear accountability when issues arise | Defined owners and escalation paths |
Workplace Scenario: Customer Email AI
You are planning to deploy a new AI tool that analyzes incoming customer emails and recommends actions—categorizing complaints, routing urgent issues, and suggesting response templates.
Part 1: Approval
Before deployment, you need to prepare an AI Use Case Intake Form for governance review. This documents what the AI does, what data it uses, and what risks exist.
Part 2: Incident
After deployment, the AI miscategorizes several complaint emails as "routine inquiries," causing delayed responses to frustrated customers. One customer escalates publicly on social media. You need to handle this incident appropriately.
AI Governance Structure
Organizations commonly choose one of three governance models:
| Model | Description | Best For |
|---|---|---|
| Centralized | Dedicated AI governance office reviews all initiatives | Regulated industries, early AI adoption |
| Federated | Governance embedded in business units with central coordination | Large, diverse organizations with mature AI |
| Hybrid | Central team sets standards; business units execute within guardrails | Most medium-to-large enterprises |
Key Governance Roles
| Role | Responsibility |
|---|---|
| AI Ethics Officer | Sets principles, reviews high-risk cases, handles escalations |
| AI Risk Manager | Assesses and monitors risk across AI portfolio |
| AI Product Owner | Defines business requirements and owns value delivery |
| Technical Lead | Ensures architecture meets standards and security requirements |
| AI Champion | Drives adoption, surfaces concerns, bridges business and governance |
The Champion's role: You're the connector between your team and governance. You help ensure use cases get properly documented and reviewed, and you raise concerns before they become incidents.
Knowledge Check
Test your understanding with a quick quiz
Form Practice: AI Use Case Intake Form
Complete this intake form for the Customer Email AI scenario:
Fill out each section of this intake form as if you were submitting the Customer Email AI for governance review. Use the hints and examples for guidance.
| Harm Type | Specific Risk |
|---|---|
| | |

| Risk | Mitigation |
|---|---|
| | |
Reflection Exercise
Apply what you've learned with a written response
Decision Tree: When to Escalate
Use this flowchart to determine when escalation is required:
Step 1: Assess the Risk Level
Does the AI involve any of the following?
- Personal identifiable information (PII)
- Decisions affecting health, safety, or legal rights
- Customer-facing interactions
- Financial transactions or recommendations
If YES to any: Proceed to Step 2. If NO to all: Standard approval process may suffice.
Step 2: Check for Novel Concerns
Is there something unusual about this use case?
- First time using this AI capability
- No precedent or playbook exists
- Cross-functional conflict about approach
- Uncertainty about appropriate controls
If YES: Escalate to your governance contact for guidance.
Step 3: Evaluate Incident Severity
If an issue has occurred, how severe is it?
| Severity | Criteria | Escalation Level |
|---|---|---|
| Low | Minor error, quickly corrected, no external impact | Team lead |
| Medium | Customer impact, policy question, repeated issue | Department manager + governance contact |
| High | Public exposure, regulatory concern, significant harm | Senior leadership + legal + communications |
| Critical | Safety risk, major compliance breach, media involvement | Executive team + board notification |
Step 4: Document and Communicate
When escalating, include:
- Clear statement of the issue and urgency level
- Relevant context (what happened, when, who's affected)
- Options you've considered
- Your recommended path forward
- Decision needed and timeline
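The four steps above can be sketched as a small triage function. This is only an illustration: the factor names, severity labels, and escalation routes mirror the steps and table in this module, while the fallback for elevated risk without novel concerns is an assumption, since any real process would be defined by your own governance team.

```python
from dataclasses import dataclass, field

# Escalation routes from the Step 3 severity table
ESCALATION_BY_SEVERITY = {
    "low": "Team lead",
    "medium": "Department manager + governance contact",
    "high": "Senior leadership + legal + communications",
    "critical": "Executive team + board notification",
}

@dataclass
class UseCase:
    # Step 1: risk factors
    handles_pii: bool = False
    affects_rights_or_safety: bool = False
    customer_facing: bool = False
    financial: bool = False
    # Step 2: novel-concern flags (free-text descriptions)
    novel_concerns: list = field(default_factory=list)

def escalation_path(case: UseCase, incident_severity: str = None) -> str:
    """Walk the decision tree and return a recommended escalation route."""
    # Step 3: if an incident has already occurred, severity drives escalation
    if incident_severity is not None:
        return ESCALATION_BY_SEVERITY[incident_severity.lower()]
    # Step 1: no elevated risk factors -> standard approval may suffice
    elevated = any([case.handles_pii, case.affects_rights_or_safety,
                    case.customer_facing, case.financial])
    if not elevated:
        return "Standard approval process"
    # Step 2: elevated risk plus something novel -> ask governance for guidance
    if case.novel_concerns:
        return "Escalate to governance contact for guidance"
    # Assumption: elevated but familiar risk goes through documented review
    return "Standard approval with documented governance review"

# The Customer Email AI: customer-facing, first use of this capability
email_ai = UseCase(customer_facing=True,
                   novel_concerns=["first use of email triage AI"])
print(escalation_path(email_ai))
print(escalation_path(email_ai, incident_severity="high"))
```

Running this for the module's scenario recommends escalating to the governance contact pre-deployment, and routing the post-deployment incident (public exposure, so "high" severity) to senior leadership, legal, and communications.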
Knowledge Check
Test your understanding with a quick quiz
Role-Play: Reporting the Incident
The Customer Email AI has miscategorized complaint emails, and a customer has posted publicly about delayed responses. Practice escalating this issue.
Scenario Details
- What happened: AI categorized 12 urgent complaints as "routine inquiries" over 3 days
- Impact: Average response time for these customers was 72 hours vs. target of 24 hours
- Visibility: One customer posted on LinkedIn mentioning the company by name
- Root cause: Unknown—could be model drift, edge case, or data issue
Your Escalation Communication
Draft a brief escalation message (3-5 sentences) that includes:
- What happened
- Who is affected
- Severity level
- Your recommended immediate action
- What decision you need
Example:
"Our customer email AI has been miscategorizing urgent complaints as routine inquiries—12 cases over the past 3 days, with one customer posting publicly about delayed response. I recommend we immediately pause AI categorization for the 'urgent' threshold and revert to manual triage while we investigate. I've notified the customer service manager. Need decision: Should we issue proactive outreach to the affected 12 customers? Please advise by EOD."
Reflection Exercise
Apply what you've learned with a written response
Governance Documentation Requirements
Essential Documents for Each AI Use Case
- Intake Form – Describes the use case, data, risks, and mitigations
- Approval Record – Documents who approved, when, and with what conditions
- Monitoring Plan – Defines metrics to track and review frequency
- Incident Log – Records any issues and how they were resolved
Maintaining Your AI Registry
UK ICO guidance on explaining AI-assisted decisions highlights the need for clear internal policies, responsibilities, and reporting lines for explainability and oversight. An AI registry provides this visibility:
| Field | Purpose |
|---|---|
| Use Case Name | Identify the AI system |
| Owner | Who is accountable |
| Risk Level | Enables prioritized oversight |
| Status | Active, paused, retired |
| Last Review Date | Ensures ongoing attention |
| Incidents | Tracks issues for patterns |
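As a sketch of how a registry row maps to a record, the fields above can be modeled as a simple structure. The field names and the sample values are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RegistryEntry:
    """One row in the AI registry; fields mirror the table above."""
    use_case_name: str      # identify the AI system
    owner: str              # who is accountable
    risk_level: str         # e.g. "low" / "medium" / "high", for prioritized oversight
    status: str             # "active", "paused", or "retired"
    last_review_date: date  # ensures ongoing attention
    incidents: list = field(default_factory=list)  # tracks issues for patterns

# Illustrative entry for this module's scenario (owner and dates are made up)
entry = RegistryEntry(
    use_case_name="Customer Email AI",
    owner="Customer Service Operations",
    risk_level="medium",
    status="active",
    last_review_date=date(2026, 1, 15),
)
entry.incidents.append("12 urgent complaints miscategorized as routine")
print(entry.use_case_name, "|", entry.status, "|", len(entry.incidents), "incident(s)")
```

Keeping incidents attached to each entry is what makes the registry useful for spotting patterns: a use case that accumulates incidents is a signal to raise its risk level or trigger an earlier review.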
Completion: Governance Artifacts
To complete this module, submit two artifacts:
Artifact 1: AI Use Case Intake Form
Complete the intake form for a real or hypothetical AI use case in your context. Include:
- Description and data used
- At least 3 potential harms
- Risk level with justification
- Mitigations for each harm
- Human oversight approach
Artifact 2: Escalation Memo
Write an escalation memo for a hypothetical AI incident. Include:
- Clear description of the issue
- Impact and affected parties
- Severity level
- Recommended immediate action
- Decision requested
Assessment Criteria:
| Criterion | Intake Form | Escalation Memo |
|---|---|---|
| Completeness | All sections addressed | All required elements included |
| Specificity | Concrete details, not generic | Clear, actionable information |
| Risk Awareness | Thoughtful harm identification | Appropriate severity assessment |
| Quality | Mitigations are realistic | Communication is professional |
Practical Exercise
Complete an artifact to demonstrate your skills
Key Takeaways
- Governance enables responsible AI at scale—it's not bureaucracy, it's protection
- Document use cases properly: intake forms create accountability and enable oversight
- Know when to escalate: risk level, novel concerns, and incident severity all matter
- Champion role: You're the connector between your team and governance structures
- Inaccuracy remains the most-cited gen-AI risk among companies—watch for it in your own use cases
Sources
- The State of Generative AI in the Enterprise, Jan 2024 (Deloitte)
- AI RMF Core, Jan 2023 (NIST)
- NIST AI Risk Management Framework Launch Event, Jan 2023 (NIST)
- Explaining decisions made with AI, Dec 2019 (ICO)
Next Steps
In the next module, we'll explore AI Operating Models—understanding how to organize AI capabilities (centralized, federated, or hybrid) and how Champions work within each structure.