Liability and Governance · Updated 2026-04-26
AI Risk Management Guidelines for Banks
Core Point
Formal supervisory expectations for AI model risk management in financial services — among the first dedicated banking-AI regulations globally.
Detailed Note
The Monetary Authority of Singapore (MAS) has codified the experience accumulated through FEAT (2018) → Veritas (2021) → MindForge (2024) into a formal set of supervisory expectations. Coverage spans model governance, third-party AI risk, model monitoring, human-in-the-loop oversight, and incident response and accountability. The companion BuildFin.ai platform lets regulated institutions test and report on a continuous basis. This is among the first dedicated banking-AI regulations in the world, arriving earlier than the financial services provisions of the EU AI Act.
Position in the Legal Framework
A gradual path from principles, to tools, to enforcement: FEAT → Veritas → MindForge → AI Risk Management Guidelines.
Related Legal Cards
Guide on Use of Generative AI Tools by Court Users
Lawyers and litigants bear ultimate responsibility for legal documents prepared with AI assistance and must disclose any AI use.
Guidelines and Companion Guide on Securing AI Systems
Best practices for AI system security across the full lifecycle — filling a gap in AI security governance.
Personal Data Protection Act (PDPA) — AI Application
Sets the legal perimeter for personal data in AI — the Business Improvement Exception leaves room for AI training.