Direct Answer
Most AI sections in board audit reports are structured like innovation updates: tools deployed, use cases in progress, teams engaged. That is not an audit function. An audit committee’s job is to assess whether material risks are understood, controlled, and disclosed — and AI creates material risks in operations, data, compliance, and financial reporting. The audit committee that accepts a roadmap slide is not doing AI oversight. It is giving management a pass on accountability.
Deeper Answer
The audit committee should be asking four questions that most AI reports never address. First: where is AI making or influencing decisions that could be material to financial results or regulatory standing? Second: do we have audit trails sufficient to reconstruct those decisions if a regulator, litigator, or external auditor asks? Third: what controls exist to detect when an AI system’s output quality has degraded to the point of creating risk? Fourth: what is our exposure if a key AI vendor fails, changes its terms, or is acquired?
Controls testing is the gap between current practice and where audit committees need to be. For any AI system that touches financial reporting, customer contracts, regulatory submissions, or employee decisions, the audit committee should require evidence that controls have been tested, not just documented. A data processing agreement in a vendor file is not a control. A tested, logged, regularly reviewed access control that prevents unauthorized data submission is a control. The distinction matters when the SEC, a GDPR supervisory authority, or plaintiff’s counsel starts asking questions.
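As a rough illustration of the difference, a control in the tested-and-logged sense might look like the sketch below. The role names, classification labels, and logger setup are illustrative assumptions, not a reference implementation; the point is that the check is enforced before data leaves the organization, every decision is recorded, and the resulting log can be reviewed.

```python
import logging
from datetime import datetime, timezone

# Illustrative audit log; in practice this would feed a retained, reviewable log store.
audit_log = logging.getLogger("ai_data_submission_audit")
audit_log.setLevel(logging.INFO)

# Hypothetical allow-list: which roles may send which data classifications to an AI vendor.
APPROVED_SUBMISSIONS = {
    ("finance_analyst", "public"),
    ("finance_analyst", "internal"),
    ("legal_counsel", "internal"),
}

def submit_to_ai_vendor(user_role: str, data_classification: str, payload: str, send_fn) -> bool:
    """Enforce and log the access decision before any data reaches the vendor.

    `send_fn` stands in for whatever vendor client the organization actually uses.
    Returns True only if the submission was authorized and sent.
    """
    timestamp = datetime.now(timezone.utc).isoformat()
    allowed = (user_role, data_classification) in APPROVED_SUBMISSIONS

    # Every decision is logged, allowed or denied, so the trail can be reconstructed later.
    audit_log.info(
        "ts=%s role=%s classification=%s decision=%s",
        timestamp, user_role, data_classification, "allow" if allowed else "deny",
    )

    if not allowed:
        return False  # Blocked: unauthorized data never reaches the vendor.

    send_fn(payload)
    return True
```

Testing the control then means exercising both the allow and deny paths on a schedule and reviewing the resulting log entries, not just confirming the code or the policy document exists.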
Model risk is a concept well-established in financial services — every model that influences material decisions requires documentation, validation, and change management controls. Audit committees outside financial services are only beginning to apply the same discipline to AI systems. The relevant questions: Who approved this model for production use? What validation was performed before go-live? What change management process governs updates to the model or its prompts? Who is notified when performance degrades below the defined threshold? If those questions do not have documented answers, the model is not under appropriate audit control regardless of how technically sophisticated it is.
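To make "documented answers" concrete, the sketch below shows one way the ownership and degradation questions could be captured alongside a threshold check. The field names, metric, and notification hook are assumptions for illustration; the substance is that approval, validation, change management, and the degradation trigger are recorded somewhere an auditor can inspect.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelRiskRecord:
    """Minimal record answering the audit questions for one production model."""
    model_name: str
    approved_by: str             # Who approved this model for production use
    validation_summary: str      # What validation was performed before go-live
    change_process: str          # Change management process governing model or prompt updates
    performance_metric: str      # The metric monitored for degradation
    degradation_threshold: float
    owner_to_notify: str         # Who is notified when the threshold is breached

def check_degradation(record: ModelRiskRecord, current_value: float,
                      notify: Callable[[str, str], None]) -> bool:
    """Compare the monitored metric against the documented threshold and notify the owner."""
    degraded = current_value < record.degradation_threshold
    if degraded:
        notify(
            record.owner_to_notify,
            f"{record.model_name}: {record.performance_metric} at {current_value:.3f}, "
            f"below documented threshold {record.degradation_threshold:.3f}",
        )
    return degraded
```

Whether this lives in code, a GRC tool, or a spreadsheet matters less than whether each field has a named answer and a review date.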
Vendor dependency is an underweighted risk in most AI governance programs. Organizations have concentrated significant operational capability in a small number of AI vendors. The audit committee should understand: what happens to critical workflows if OpenAI, Anthropic, or Google raises its API pricing by 300%? What happens if a key vendor has a major outage? What is the documented business continuity plan for AI-dependent processes? These are the same questions audit committees ask about any other critical vendor dependency. AI vendors are not exempt.
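One piece of a documented continuity plan can be as simple as an enforced fallback path, sketched below in hedged form. The vendor names and client calls are placeholders (a real integration would use each vendor's actual SDK); what the audit committee cares about is that the fallback exists, is exercised, and its failures are visible.

```python
import logging
from typing import Callable, Sequence, Tuple

logger = logging.getLogger("ai_vendor_continuity")

def call_with_fallback(prompt: str,
                       vendors: Sequence[Tuple[str, Callable[[str], str]]]) -> str:
    """Try each configured vendor in order; log failovers so dependence is measurable.

    `vendors` is an ordered list of (name, client_fn) pairs, where client_fn stands in
    for whatever SDK call the organization actually uses for that provider.
    """
    last_error = None
    for name, client_fn in vendors:
        try:
            return client_fn(prompt)
        except Exception as exc:  # outage, quota change, revoked terms, etc.
            logger.warning("vendor %s failed: %s; failing over", name, exc)
            last_error = exc
    raise RuntimeError("All configured AI vendors failed") from last_error
```

A failover path that has never been invoked in a test is in the same category as the untested access control above: documented, not controlled.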
The practical ask for management: before the next audit committee presentation, replace the AI roadmap slide with a one-page AI risk register covering the four questions above. If management cannot produce that register, the committee has found its first finding.
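A minimal version of that one-page register, covering the four questions from the deeper answer, might be structured like the sketch below. The process names, owners, and exposure descriptions are invented placeholders; the structure is what matters: each AI-dependent process answers materiality, audit trail, degradation control, and vendor exposure, with a named owner.

```python
# Hypothetical one-page AI risk register: one entry per AI-dependent process.
AI_RISK_REGISTER = [
    {
        "process": "Customer contract drafting assistant",        # where AI influences decisions
        "material_impact": "Revenue recognition terms in customer contracts",
        "audit_trail": "Prompts and outputs retained; reviewer sign-off logged per contract",
        "degradation_control": "Weekly sampled review; threshold breach notifies process owner",
        "vendor_exposure": "Single vendor; fallback and exit plan documented and tested",
        "owner": "VP Commercial Operations",
    },
    # ...one entry per AI system touching financial reporting, regulatory submissions,
    # customer contracts, or employee decisions.
]
```

If a row cannot be filled in, that empty cell is the finding.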
Related Reading
- AI Board Governance Scorecard — 18-question assessment across six governance dimensions including audit and controls
- How should boards oversee AI? — the five metrics that constitute real board oversight
- How do we prove AI governance creates ROI?