Liability and Governance · Updated 2026-04-26
Guidelines and Companion Guide on Securing AI Systems
Core Point
Best practices for AI system security across the full lifecycle — filling a gap in AI security governance.
Detailed Note
Issued by Singapore's Cyber Security Agency (CSA) in October 2024, the guidelines cover the full AI system lifecycle: threat modelling at the planning and design stage, data and model security during development, security testing at deployment, and monitoring and incident response in operations. Particular focus is given to AI-specific risks such as adversarial attacks, data poisoning, model theft and supply-chain compromise. A 2025 companion paper, "Securing Agentic AI", extends the framework to agentic AI use cases.
Position in the Legal Framework
A gradual path from principles to tools to enforcement — FEAT → Veritas → MindForge → AI Risk Management Guidelines.
Related Legal Cards
Guide on Use of Generative AI Tools by Court Users
Lawyers and litigants bear ultimate responsibility for legal documents prepared with AI assistance and must disclose any AI use.
AI Risk Management Guidelines for Banks
Formal supervisory expectations for AI model risk management in financial services — among the first dedicated banking-AI regulations globally.
Personal Data Protection Act (PDPA) — AI Application
Sets the legal perimeter for personal data in AI — the Business Improvement Exception leaves room for AI training.