Governance guidance
1. Establish AI visibility before debating AI ambition
Many institutions underestimate AI exposure because they count only formal data-science models. A readiness review should also cover AI-enabled vendor platforms, embedded productivity tools, rules-based automation with AI components, staff use of generative AI, customer-facing analytics, and decision-support models.
- Inventory use cases by business owner, system, vendor, data type, and customer or regulatory impact.
- Separate low-risk productivity use from higher-risk decisions, regulated processes, or sensitive-data contexts.
- Record where AI is already active, where pilots are planned, and where shadow use is likely.
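The inventory and tiering steps above can be sketched as a simple record structure. This is an illustrative assumption only: the field names, status labels, and tier rules are hypothetical examples, not a prescribed taxonomy.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative sketch: field names and tier rules are assumptions,
# not a mandated classification scheme.
@dataclass
class AIUseCase:
    name: str
    business_owner: str
    system: str
    vendor: Optional[str]            # None for internally built tools
    data_types: list = field(default_factory=list)
    customer_impact: bool = False
    regulated_process: bool = False
    status: str = "active"           # "active", "pilot", or "suspected-shadow-use"

def risk_tier(uc: AIUseCase) -> str:
    """Separate low-risk productivity use from higher-risk contexts."""
    sensitive = any(t in ("customer-pii", "payments") for t in uc.data_types)
    if uc.regulated_process or uc.customer_impact or sensitive:
        return "higher-risk"
    return "low-risk"

chatbot = AIUseCase(
    name="GenAI drafting assistant",
    business_owner="Head of Operations",
    system="vendor SaaS",
    vendor="ExampleVendor",          # hypothetical vendor name
    data_types=["internal-docs"],
)
print(risk_tier(chatbot))  # low-risk
```

A record like this also captures the status field needed to distinguish active use, planned pilots, and likely shadow use in one inventory.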
2. Connect governance to accountable owners
A governance framework is not ready if ownership is unclear. Each material AI use case needs an accountable business owner, control-function involvement, technology and data responsibilities, and a route for escalation when risk, performance, or vendor conditions change.
- Define decision rights for intake, approval, monitoring, exceptions, and retirement.
- Clarify board and senior-management reporting for material AI exposures.
- Map risk, compliance, legal, cyber, procurement, audit, data, and technology roles.
3. Evidence controls, not principles alone
Responsible AI principles are useful, but readiness depends on operating evidence. Institutions should be able to show how requirements are applied through approval records, testing, controls, monitoring, vendor due diligence, training, and issue management.
- Review AI policy alignment with data, outsourcing, model-risk, cyber, privacy, and conduct controls.
- Confirm what records would be available for internal challenge or external review.
- Define minimum evidence packs for material AI use cases.
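One way to make "minimum evidence packs" concrete is a checklist keyed by evidence type. The categories below are drawn from the section above; the checklist structure and identifiers are illustrative assumptions, not a mandated format.

```python
# Evidence categories taken from the guidance above; the structure and
# example identifiers are illustrative assumptions.
MINIMUM_EVIDENCE = [
    "approval-record",
    "testing-results",
    "control-mapping",
    "monitoring-plan",
    "vendor-due-diligence",
    "training-log",
    "issue-management-log",
]

def missing_evidence(pack: dict) -> list:
    """Return evidence items absent or empty for a material AI use case."""
    return [item for item in MINIMUM_EVIDENCE if not pack.get(item)]

pack = {
    "approval-record": "APP-2024-017",        # hypothetical reference
    "testing-results": "bias and performance test report v2",
    "monitoring-plan": "quarterly drift review",
}
print(missing_evidence(pack))
# ['control-mapping', 'vendor-due-diligence', 'training-log', 'issue-management-log']
```

A gap list like this is the kind of record an institution could produce on demand for internal challenge or external review.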
4. Include GenAI and vendor AI in the same readiness view
Generative AI and vendor-embedded AI can create exposure even when internal teams have not launched a formal AI programme. Readiness requires acceptable-use boundaries, sensitive-data rules, human review expectations, and third-party oversight.
- Document employee acceptable-use rules and prohibited data entry.
- Review vendor AI capabilities, contracts, data flows, and monitoring rights.
- Train business and control teams on escalation triggers.
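Escalation triggers of the kind business and control teams are trained on can be expressed as simple rules. The trigger names and threshold below are assumptions chosen to show the pattern, not thresholds taken from the guidance itself.

```python
# Illustrative sketch: trigger names and the 0.9 threshold are assumptions,
# not values specified by the guidance.
def escalation_triggers(event: dict) -> list:
    """Return reasons an AI-related event should be escalated for review."""
    reasons = []
    if event.get("sensitive_data_entered"):
        reasons.append("prohibited data entered into GenAI tool")
    if event.get("vendor_model_changed"):
        reasons.append("vendor changed underlying model without notice")
    if event.get("accuracy", 1.0) < 0.9:
        reasons.append("performance below monitoring threshold")
    return reasons

print(escalation_triggers({"sensitive_data_entered": True, "accuracy": 0.85}))
# ['prohibited data entered into GenAI tool', 'performance below monitoring threshold']
```

Encoding triggers explicitly, even informally, makes training and vendor monitoring expectations easier to test and audit.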
