📋 AI Policy Library
Singapore’s core AI policy documents, grouped by category, newest first within each.
🏛️ National Strategy (7 items)
Public AI Research Investment 2026-2030
S$1 billion (US$779 million) in public AI research investment, focused on responsible and resource-efficient AI.
On 24 January 2026, the Ministry of Digital Development and Information announced over S$1 billion (about US$779 million) in public AI research funding for 2026-2030. Three priorities: research on "responsible and resource-efficient AI," extending the trusted-AI track that includes AI Verify; full-pipeline AI talent development, from pre-tertiary (junior college) programmes to university faculty training; and industry applications, shortening the path from research to commercialisation. Coming after 2024's S$500 million investment in high-performance computing, this marks Singapore's shift from pilot exploration to scaled build-out. Per-capita AI investment reaches US$139 — far above the US (US$33) and China (US$7).
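The per-capita comparison can be sanity-checked in a few lines. A minimal sketch, assuming approximate population figures (the populations are this sketch's assumptions, not figures from the source):

```python
# Rough sanity check of the per-capita AI investment figures cited above.
# Population figures are approximate estimates (assumptions, not from the source).
investments_usd = {
    "Singapore": 779e6,   # US$779M public AI research funding (2026-2030)
    "US": 33 * 335e6,     # total implied by the US$33 per-capita figure
    "China": 7 * 1.41e9,  # total implied by the US$7 per-capita figure
}
populations = {"Singapore": 5.6e6, "US": 335e6, "China": 1.41e9}

for country, total in investments_usd.items():
    per_capita = total / populations[country]
    print(f"{country}: ~US${per_capita:.0f} per capita")
```

With a population of roughly 5.6 million, US$779 million works out to about US$139 per person, matching the figure quoted.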
National AI Strategy 2.0 (NAIS 2.0)
Upgraded national AI strategy with twin tracks — AI for Public Good and AI for Growth — and nine priority sectors.
Released in December 2023, NAIS 2.0 shifts Singapore's AI strategy from targeted applications to systemic enablement. Twin objectives: AI for Public Good and AI for Growth. The strategy spans fifteen action lines, lifts the funding envelope above S$1 billion, and builds out national AI compute infrastructure. Nine priority sectors are designated: transport and logistics, manufacturing, finance, safety and security, cybersecurity, smart cities, healthcare, education, and government services — with healthcare and fintech receiving the largest investment weighting.
Smart Nation 2.0
Digital infrastructure upgrade blueprint built on three pillars: Digital Government, Digital Economy, Digital Society.
Smart Nation 2.0, released in October 2023, is a full upgrade of the 2014 Smart Nation Initiative. Three pillars: Digital Government — driving end-to-end digitalisation and AI adoption across public services; Digital Economy — supporting enterprise digital transformation and AI adoption; Digital Society — ensuring universal digital literacy and closing the digital divide. In October 2024, an implementation plan was launched, including a S$120 million AI application fund supporting five national AI projects: intelligent freight planning, municipal services, chronic disease prediction and management, personalised education, and border clearance. At the infrastructure layer, the plan covers a national AI compute platform, data-sharing infrastructure, and a secure digital identity system.
National AI Strategy (NAIS 1.0)
Singapore's first national AI strategy, identifying five focus sectors and three enablers.
Launched in November 2019, NAIS 1.0 marked the elevation of AI from a technology topic to a national strategy. Five focus sectors: intelligent transport and logistics, smart cities, healthcare, education, and safety and security. Three enablers: triple-helix collaboration, an AI talent pipeline, and data architecture plus trusted AI. The strategy spawned AI Singapore and the 100 Experiments programme.
Smart Nation Initiative
Singapore's overarching digital transformation framework, laying the institutional foundation for subsequent AI strategies.
In 2014, Prime Minister Lee Hsien Loong announced the Smart Nation Initiative as a whole-of-nation strategic framework for digital transformation. Core goals: use digital technology to improve citizens' lives, create more economic opportunities, and build more tightly connected communities. Although not an AI-specific policy, it provided the institutional and policy foundation for subsequent AI strategies.
SAF Digital and Intelligence Service — Fourth Service
Establishment of the SAF's fourth Service — embedding AI and digital intelligence into the force structure itself.
In October 2022, Singapore's Ministry of Defence formally established the SAF Digital and Intelligence Service (DIS) as the fourth Service alongside the Army, Navy, and Air Force, with sole responsibility for digital and intelligence operations, cyber defence, and AI decision support. In 2025, DIS was further reorganised into two commands: DCCOM (Digital Cyber Command) and SAFC4DC (C4 and Defence Computing Command). This is the deepest structural move in Singapore's national AI-native strategy — writing AI into the Service structure itself rather than running it as a departmental project. Supporting elements include the DIS × AI Singapore MoU, the DIS Sentinel Programme with AI curriculum, and upgrades to the SAF Digital Range / CyTEC.
Singapore AI Safety Institute
National research institute for frontier AI safety, hosting the Singapore Consensus coordination function.
The Singapore AI Safety Institute (AISI) was established in 2024 with an annual budget of S$10 million, jointly operated by IMDA and the Digital Trust Centre and hosted at NTU. It covers three core research areas on frontier AI models: red-team evaluation, alignment research, and traceability testing. AISI also serves as the coordination centre for the Singapore Consensus on Global AI Safety Research Priorities (signed by 11 countries, including the US and China) and hosts the International Scientific Exchange on AI Safety (ISESEA) I + II. AISI is the most critical institution in Singapore's strategy of "leveraging 0.07% of the world's population into G7-level AI governance influence."
⚖️ AI Governance Frameworks (8 items)
ISO/IEC 42119-8 Generative AI Testing Standard (Proposal)
Singapore's draft of the world's first international standard for testing generative AI systems, tabled at the 17th ISO/IEC JTC 1/SC 42 plenary.
On 20 April 2026, the 17th ISO/IEC JTC 1/SC 42 plenary opened in Singapore — the first time in ASEAN — co-organised by IMDA and Enterprise Singapore, with 35+ national bodies and 250+ AI experts participating. Singapore formally tabled **ISO/IEC 42119-8**, which, if adopted, will be the world's first international standard for testing generative AI systems.

**Two core pillars:**
- **Benchmarking** — using shared datasets to measure AI performance, solving the comparability problem of "what to test and how to score"
- **Red Teaming** — simulating attacks to surface hidden risks, standardising "how to find what's hidden"

The proposal builds on IMDA's domestic testing infrastructure: the AI Verify Toolkit, the Starter Kit for Testing of LLM-Based Applications, and the Global AI Assurance Sandbox. Changi Airport's February 2025 ISO/IEC 42001 AI Management System certification — the world's first for an airport — supplied a working precedent that AI governance can be externally audited.

IMDA CEO **Ng Cher Pong** (in post since November 2025) said in his opening address that "standards setting cannot move at a glacial pace" — or it risks being outpaced by AI itself. He also stressed that standards must be representative across sectors, cultures and languages, and that Southeast Asia — one of the world's most diverse regions — must be plugged into standards-making.

ISO standards typically take years from proposal to publication. But once a proposal is on the table, the framing for global discussion is set — which is precisely how Singapore translates 0.07% of the world's population into G7-tier AI governance influence.
Model AI Governance Framework for Agentic AI
Dedicated governance framework for autonomous AI agents, addressing the new challenges posed by AI making independent decisions.
As Agentic AI (autonomous AI agents) takes off, IMDA released a dedicated governance framework in January 2026. It focuses on the core issues for AI agents: how far they can decide on their own, human oversight, accountability, and safety safeguards.
Proposed Model AI Governance Framework for Generative AI
Dedicated governance framework proposal for generative AI, addressing the new challenges posed by large models.
Proposed by IMDA and the AI Verify Foundation in January 2024, this is one of the world's earliest dedicated governance frameworks for generative AI. Nine dimensions: accountability, data governance, trusted development and deployment, incident reporting, testing and assurance, security, content provenance, user literacy, and supporting measures. The framework takes a multi-stakeholder approach and leans on sandbox-style governance.
AI Verify
The world's first AI governance testing framework and toolkit, enabling enterprises to self-assess AI system compliance.
Launched by IMDA in May 2022, this is the world's first AI governance testing framework and toolkit. Eleven testable indicators, an open-source toolkit, and alignment with international standards. The AI Verify Foundation was established in 2023 to drive global collaboration. The framework moves AI governance from "principles" to "operational practice."
Model AI Governance Framework
Asia's first AI governance framework, articulating principles of explainability, transparency, and human-centric AI governance.
Released at Davos in 2019, this is Asia's first AI governance framework. Four core principles: internal governance structures, human involvement in AI-augmented decisions, operations management, and stakeholder communication. The OECD has cited it as a best practice.
Personal Data Protection Act (PDPA)
Singapore's core data protection law, with AI-relevant provisions added in the 2020 amendments.
Singapore's core data protection law, enacted in 2012 and significantly amended in 2020. The amendments added a Business Improvement Exception, data portability rights, and stronger enforcement powers — setting the legal perimeter for AI data use.
Project MindForge — GenAI Risk Framework for Financial Sector
GenAI risk framework for the financial sector, co-developed by 24 institutions and four major cloud/AI providers (Microsoft / AWS / Google / NVIDIA).
Project MindForge is a MAS-led GenAI risk framework for the financial sector, launched in 2024. Consortium members include 24 financial institutions (DBS, UOB, OCBC, HSBC, JPMorgan, and others), the four major cloud and AI providers (Microsoft, AWS, Google, NVIDIA), and the regulator. The framework covers seven risk areas: model hallucination, data leakage, bias and fairness, supply-chain dependency, explainability, adversarial attacks, and accountability. What's unusual is putting regulators, regulated banks, and tech providers around one table — the financial-sector version of Singapore's "permissive training, strict output" stance, and the third layer in the FEAT → Veritas → MindForge → BuildFin.ai stack.
AI Risk Management Guidelines for Banks
Supervisory expectations document for AI model risk management in the financial sector, formally constraining how banks use AI.
MAS released the AI Risk Management Guidelines in December 2024, codifying years of practical experience from FEAT, Veritas, and MindForge into formal supervisory expectations. Coverage includes: model governance (data, training, validation, deployment), third-party AI risk (cloud providers, model vendors, APIs), model monitoring (drift, bias, performance), human-in-the-loop, and incident response and accountability. The accompanying BuildFin.ai platform enables regulated institutions to test and report on a continuous basis. These are among the world's first dedicated supervisory documents for AI in banking, landing ahead of the EU AI Act's financial provisions.
🏢 Sector Regulation (9 items)
Artificial Intelligence in Healthcare Guidelines (AIHGle)
Joint guidelines on the safe use and good practice of AI in healthcare for hospitals, clinicians, and AI developers.
The Artificial Intelligence in Healthcare Guidelines (AIHGle) were jointly released in October 2021 by the Ministry of Health (MOH), the Health Sciences Authority (HSA), and the then Integrated Health Information Systems (IHiS, reorganised as Synapxe in 2023). They are Singapore's core non-binding guidance on healthcare AI. Two objectives: support the safe and effective deployment of AI in healthcare, and complement HSA's binding regulation of AI-Medical Devices (AI-MD). The guidelines cover the full lifecycle on both the developer and healthcare-institution sides: evidence of clinical validity at the development stage, integration into clinical workflows and human-in-the-loop at deployment, post-market monitoring and adverse event reporting, and patient communication and informed consent. Together with HSA's medical-device registration requirements under the Health Products Act, AIHGle creates a "soft guidance + hard law" two-layer structure — the compliance baseline beneath national healthcare-AI initiatives like ACE-AI and Synapxe's AI platforms.
Health Products Act — AI-Medical Device (AI-MD) Regulation
AI-containing medical devices must be registered with HSA — the hard-law gate for healthcare AI market access.
The Health Products Act 2007 is Singapore's core law for medical devices, administered by the Health Sciences Authority (HSA). Medical devices containing AI components (AI-Medical Devices, AI-MD) — whether standalone Software as a Medical Device (SaMD) or algorithms embedded in hardware — must be registered with HSA according to risk class before they can be marketed or used clinically in Singapore. Supporting regulation includes the Regulatory Guidelines for Software Medical Devices (revised 2022), which contains a dedicated section on AI-MD covering training data quality, change-control plans for model updates, the special requirements for continuous-learning systems, levels of clinical evidence, cybersecurity, and data protection. AI-MDs are also expected to follow Good Machine Learning Practice (GMLP) principles, aligned with the multilateral framework agreed with the US FDA and Health Canada. This is one of the two pieces of pre-existing legislation that the W&C tracker singles out as AI-relevant — a concrete illustration of what "Singapore has no dedicated AI law" actually means: AI is brought into hard-law regulation through modernised sector statutes, not through a horizontal AI act.
Road Traffic Act — Autonomous Vehicle Trials and Use
A 2017 amendment introduced Section 6C, empowering LTA to regulate autonomous vehicle trials and use.
The Road Traffic Act 1961 was amended in 2017 (Road Traffic (Amendment) Act 2017) to insert Section 6C — "Trials and use of autonomous motor vehicles" — bringing AVs into hard law. Core provisions: the Land Transport Authority (LTA) is empowered to make subsidiary regulations, issue trial and operational permits for AVs, set insurance and safety requirements, and grant exemptions within designated areas. The accompanying Road Traffic (Autonomous Motor Vehicles) Rules 2017 cover trial applications and approvals, safety driver requirements, data logging and incident reporting (black-box), ongoing reporting obligations to LTA, and minimum insurance thresholds. In parallel, Singapore established CETRAN (Centre of Excellence for Testing and Research of Autonomous Vehicles) and the one-north AV trial zone, anchoring the legal authorisation in physical infrastructure. Together with the Health Products Act, this is one of the two core examples that the W&C tracker singles out for "regulating AI through existing sector statutes" — and the legal foundation for "intelligent transport and logistics," one of the five priority sectors of NAIS 1.0.
Guidelines on Securing AI Systems
Best-practice guidelines for end-to-end security across the AI system lifecycle.
In October 2024, CSA released the Guidelines on Securing AI Systems together with a companion practice handbook, filling a governance gap in the AI security space. The guidelines cover the full AI system lifecycle: threat modelling at the planning and design stage, data and model security during development, security testing at deployment, and monitoring and incident response in operations. They focus on AI-specific risks including adversarial attack defence, data poisoning prevention, model theft protection, and supply chain security.
Guide on Use of Generative AI Tools by Court Users
Principles and guidance on the use of generative AI tools in legal proceedings.
In 2024, the Supreme Court of Singapore issued the Guide on the Use of Generative AI Tools by Court Users (Registrar's Circular No. 1 of 2024), applicable across the entire court system. Core principles: lawyers and parties bear ultimate responsibility for all materials submitted to court, regardless of whether AI was used to generate them; legal documents prepared with GenAI assistance must disclose the AI use; cited cases and statutory provisions must be verified by a human. The guide reflects the judiciary's pragmatic stance on AI tools — not banning their use, but making clear that human responsibility cannot be transferred.
Veritas Initiative
Translates the FEAT principles into an operational assessment toolkit, with an open-source methodology.
The Veritas initiative is the practical extension of the FEAT principles, jointly developed by MAS and partner financial institutions. The project's goal is to build an open-source, operational assessment methodology and toolkit that helps financial institutions translate FEAT principles into concrete AI applications. Use cases covered include fairness assessments for customer marketing and transparency assessments for credit risk scoring. Veritas is iterated continuously, embodying Singapore's incremental "principles → tools → practice" AI governance path.
Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems
PDPC clarifies how PDPA applies to AI recommendation and decision systems — giving organisations certainty when using personal data to train and run AI.
In March 2024, PDPC issued the Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems, spelling out how the PDPA applies in concrete AI scenarios. The guidelines cover three common situations: (1) using personal data to train, test, and monitor AI models — which can rely on the Business Improvement Exception or Research Exception, subject to reasonableness, data minimisation, and de-identification thresholds; (2) using AI for recommendations or decision-making — which triggers notification and consent obligations, and where decision-making applications must inform data subjects; (3) best-practice templates for Data Protection Impact Assessments (DPIA). This is the key document by which PDPC translates the 2020 PDPA amendments (legitimate interests, Business Improvement Exception) into an operational handbook for AI deployment, forming — together with Section 244 of the Copyright Act — Singapore's dual legal foundation on the training side.
Fairness, Ethics, Accountability, Transparency (FEAT) Principles
Fairness, Ethics, Accountability, and Transparency principles for AI use in the financial sector.
MAS issued the FEAT principles in 2018 to provide governance guidance for financial institutions using AI and data analytics. Four principles: Fairness — ensuring AI decisions do not produce discrimination; Ethics — AI use aligns with ethical standards; Accountability — clear assignment of responsibility for AI decisions; Transparency — AI decision processes are understandable and explainable. The 2022 update incorporated additional practical guidance.
Copyright Act 2021 — Section 244 (Computational Data Analysis Exception)
AI-training safe harbour — alongside Japan, the most permissive copyright stance on AI training in the world.
Section 244 of the Copyright Act 2021, "Computational Data Analysis," provides an explicit safe harbour for AI training data use: lawfully accessed content (whether or not copyrighted) may be used for AI model training, text and data mining, and other "computational analysis" purposes without constituting copyright infringement. Together with Article 30-4 of Japan's Copyright Act, this is the world's most permissive copyright stance on AI training — the US is still navigating fair use case law, while the EU relies on the opt-out mechanism in its Text and Data Mining Exception. Combined with IPOS's "When Code Creates" report (2024) and the "permissive training + strict output" philosophy (the OCHA + Elections Bill + Criminal Law Bill + Online Safety Bill quartet), Singapore offers AI companies **one of the clearest legal perimeters in the world** — a key part of the backdrop that lets EDB attract institutions like OpenAI, Anthropic, and DeepMind.
💰 Budget & Funding (4 items)
Budget 2026 — National AI Acceleration
Establishment of the National AI Council, AI tax incentives, the one-north AI district, and the AI Mission programme.
Budget 2026 elevates AI to an unprecedented level of priority. Core measures: a National AI Council chaired by the Prime Minister; the Enterprise Innovation Scheme's 400% tax deduction extended to AI spending; construction of the one-north AI district; the AI Mission programme focused on critical-sector applications; and a National AI Literacy Programme. This is the budget that takes Singapore's AI policy from strategy to full-scale execution.
MOH Committee of Supply 2026 — Healthcare AI & MediSave Reform
Deployment of the ACE-AI prediction tool, BRCA1/2 genetic testing subsidies, MediShield Life coverage for preventive surgery, and increased MediSave limits.
In the March 2026 Committee of Supply debate, Minister for Health Ong Ye Kung announced that Singapore has formally become a super-aged society (population aged 65+ exceeds 21%). Core measures: (1) ACE-AI, a prediction tool developed by national health tech agency Synapxe, forecasts 3-year risk of diabetes and hyperlipidaemia. Patients with >75% risk move from triennial to annual screening, with rollout in early 2027 to all roughly 1,100 Healthier SG clinics, following the principle of "AI augmentation, not AI decision-making" with clinicians remaining in the loop. (2) BRCA1/2 genetic testing receives subsidies of up to 70% from December 2026, with 2,000+ eligible people each year. (3) MediShield Life coverage expands to include preventive mastectomy (Q3 2026) and risk-reducing salpingo-oophorectomy (Q4 2026). (4) MediSave chronic and preventive care limits rise from S$500/S$700 to S$700/S$1,000 (effective January 2027), benefiting 910,000+ patients.
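The screening-cadence rule in point (1) is a simple threshold policy, which can be sketched as follows. This is an illustrative sketch only — the function and field names are hypothetical, not Synapxe's actual ACE-AI interface:

```python
# Hypothetical sketch of the ACE-AI screening-cadence rule described above:
# patients whose predicted 3-year risk exceeds 75% move from triennial to
# annual screening. Names and structure are illustrative, not the real system.
RISK_THRESHOLD = 0.75  # >75% predicted 3-year risk triggers annual screening

def screening_interval_years(predicted_3yr_risk: float) -> int:
    """Return the recommended screening interval in years.

    The model only adjusts cadence ("AI augmentation, not AI
    decision-making"); diagnosis remains with the clinician.
    """
    return 1 if predicted_3yr_risk > RISK_THRESHOLD else 3

print(screening_interval_years(0.82))  # high-risk patient -> annual (1)
print(screening_interval_years(0.40))  # lower-risk patient -> triennial (3)
```

The point of the sketch is that the AI output feeds a transparent, auditable rule rather than making the clinical decision itself — consistent with the human-in-the-loop principle stated above.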
Budget 2025 — AI-related Measures
Lawrence Wong's first budget as Prime Minister, signalling large-scale AI investment.
Budget 2025 is Lawrence Wong's first budget as Prime Minister and the first to designate AI as a fiscal priority. Key measures include: increased grants to accelerate enterprise digital transformation, expanded coverage of AI skills training programmes, and additional AI R&D funding. The budget provides the fiscal foundation for executing NAIS 2.0 and marks the formal transition of AI from strategic planning into the appropriations phase.
Research, Innovation and Enterprise 2025 Plan
S$25 billion five-year R&D plan, with AI designated as a priority investment area.
The RIE2025 plan covers 2021-2025 with a total commitment of S$25 billion — the largest R&D investment in Singapore's history. Four strategic domains: Manufacturing, Trade and Connectivity; Human Health and Potential; Urban Solutions and Sustainability; and Smart Nation and Digital Economy. AI runs across all domains as a core enabling technology. The plan supports national AI research programmes such as AI Singapore and funds AI talent development, fundamental research, and industrial applications.
🌏 International Collaboration (7 items)
Seoul AI Safety Commitment
Participation in the Seoul AI Safety Summit, advancing further AI safety governance commitments.
In May 2024, Singapore joined the second AI Safety Summit in Seoul and signed the Seoul AI Safety Commitment. Building on the Bletchley Declaration, the commitment goes further: safety evaluation standards for frontier AI, cooperation among AI Safety Institutes, and shared AI safety testing methodologies. Two summits in a row — Singapore continues to lock in its role as an active player in global AI governance.
Bletchley Declaration on AI Safety
Signed the Bletchley Declaration, committing to international cooperation on AI safety.
In November 2023, Singapore was among the 28 signatories of the Bletchley Declaration at the first global AI Safety Summit at Bletchley Park, UK. Signatories committed to: identifying shared risks posed by frontier AI, taking on national responsibilities for AI safety, and strengthening international collaboration on AI safety research. The declaration places particular emphasis on potential risks from frontier AI models, including cybersecurity threats, biotechnology risks, and disinformation.
Global Partnership on AI (GPAI)
Singapore became a founding member of GPAI, participating in international governance for responsible AI.
Singapore became a founding member of GPAI in 2020. GPAI is a multi-government initiative that promotes the responsible development and use of AI through multi-stakeholder collaboration. Singapore takes part in working groups on Responsible AI, Data Governance, Future of Work, and Innovation and Commercialisation. Membership reflects Singapore's commitment to international AI governance and pulls outside perspectives into domestic policymaking.
Singapore Consensus on Global AI Safety Research Priorities
Singapore-initiated consensus on global AI safety research priorities, signed by 11 countries — including the US and China.
The Singapore Consensus emerged from the International Scientific Exchange on AI Safety (ISESEA I), convened by Singapore alongside ICLR in April 2024, and was ultimately signed by 11 countries or jurisdictions — **rare in bringing both the US and China into the same AI safety document**. The consensus is organised around three research priorities: (1) standardisation of risk assessment methodologies; (2) cross-border red-teaming collaboration on frontier models; (3) safety thresholds for AI deployment in critical infrastructure. This is one of the highest-leverage outputs of Singapore's national AI-native strategy — using 0.07% of the world's population to establish a "neutral coordinate" in AI governance. Supporting mechanisms include ISESEA II (2026) for ongoing consensus updates, AISI as the coordination centre, and continuing output through the Bletchley / Seoul / Paris AI Summits.
ASEAN Guide on AI Governance and Ethics
AI governance guide adopted by all 10 ASEAN member states, drafted under Singapore's leadership, with IMDA serving as secretariat.
The ASEAN Guide on AI Governance and Ethics was drafted under Singapore's leadership and formally adopted by the ASEAN Digital Ministers Meeting in February 2024 across all 10 member states. The guide is built directly on Singapore's Model AI Governance Framework — effectively the "regionalised version" of Singapore's governance template. Coverage includes: organisational governance, data governance, AI system lifecycle management, human-in-the-loop, and risk tiering. Singapore continues to hold the secretariat function through the ASEAN Working Group on AI Governance (WG-AI). This is a key lever in Singapore's strategy — turning its domestic governance standard into the regional default, so that **foreign capital deploying AI in Southeast Asia naturally operates within boundaries defined by Singapore**. Extension: the 2026 ASEAN Hanoi Declaration further deepens digital ministerial cooperation.
Responsible AI in the Military Domain (REAIM) Seoul Summit
One of five co-hosts of the REAIM Seoul Summit, putting responsible military AI on the international agenda.
The Responsible AI in the Military Domain (REAIM) Seoul Summit 2024 is the second edition in the REAIM series. As one of five co-hosts (alongside South Korea, the Netherlands, the UK, and Kenya), Singapore helped place the responsible use of military AI on the international agenda. The Summit adopted the Blueprint for Action — the first multilateral document to translate military AI governance into operational steps, covering: the position of humans in the command chain, the boundaries of autonomous weapons, the application of international humanitarian law to AI decision-making, and cross-border trust-building mechanisms. Singapore also chairs the REAIM Asia Regional Consultations, extending the dialogue across Southeast Asia. This is a flagship move in Singapore's "governance neutral zone" positioning, intervening on the most sensitive AI topic (military AI) — not through hard power, but through rule-drafting authority.
International Scientific Exchange on AI Safety
Global AI safety scientific exchange convened alongside ICLR — now in its second edition — and the incubation platform for the Singapore Consensus.
The International Scientific Exchange on AI Safety (ISESEA) is co-hosted by IMDA and the Singapore AI Safety Institute, convened annually alongside ICLR (the International Conference on Learning Representations) in Singapore or at partner venues. ISESEA I in 2024 incubated the Singapore Consensus; ISESEA II in 2026 continues to refresh the consensus and extend research priorities. The conference is deliberately positioned as a three-way mix of "scientists + government + industry," avoiding the politicisation of purely diplomatic settings. This is the "soft entry point" of Singapore's international AI governance strategy — building depoliticised consensus through academic activity, then letting governments adopt it.