⚖️ Singapore AI Legal Framework

Last updated: 2026-04-26. "Permissive on training + strict on outputs" — a dual track that makes Singapore one of the world's most predictable jurisdictions for AI companies today.

Core Position

Singapore's AI legal framework reduces to a single line: permissive on training + strict on outputs.

  • Training side: lawfully acquired content, including copyrighted works, can be used to train AI, one of the world's most permissive regimes, on par with Japan's.
  • Output side: Deepfakes, AI-generated intimate images, AI-driven disinformation, and election manipulation — strictly governed by a four-piece legislative bundle.

This combination makes Singapore one of the most predictable legal jurisdictions for AI companies today: the boundaries of what you can and cannot do are clear. It is one of the key reasons EDB has been able to attract OpenAI, Anthropic, DeepMind, Mistral and others to set up here.

1. Training side — among the world's most permissive

The "Computational Data Analysis" exception immunises the use of copyrighted works as AI training data, on par with Article 30-4 of Japan's Copyright Act. The United States is still litigating fair use case by case, and the EU relies on an opt-out TDM exception; Singapore and Japan remain the clearest examples of jurisdictions that have written a broad, commercially usable carve-out explicitly into statute.

Copyright Act 2021 — Section 244 (Computational Data Analysis Exception)

Training · In force

AI training safe harbour: lawfully accessed content may be used for AI model training, text and data mining and similar purposes without constituting copyright infringement.

Section 244 of the Copyright Act 2021 ("Computational Data Analysis") provides an explicit safe harbour for the use of training data for AI. "Lawfully accessed" means content obtained through normal channels: subscriptions, purchases, legitimate APIs, publicly available web pages and so on. The result is one of the clearest statutory positions on AI training and copyright anywhere in the world.

🔗 Statute text

IPOS — When Code Creates: AI Authorship Position Paper

Training · Issued

Clarifies IPOS's position on authorship of AI-generated content: copyright can be asserted only where a human has made a substantive creative contribution.

"When Code Creates" is IPOS's 2024 official position paper on copyright authorship in the era of generative AI. Core position: fully AI-generated output with no substantive human creative input does not qualify as a "work" under copyright law; but where a human makes substantive creative choices (prompt design, output curation, iterative refinement), that human can claim authorship. This diverges from the computer-generated-works model in the UK's Copyright, Designs and Patents Act 1988 (s 9(3), which deems the person making the arrangements for the work's creation to be its author) and aligns more closely with the US Copyright Office position.

2. Output side — a tight four-part regime

Permissive on training does not mean permissive on output. Deepfakes, AI-generated intimate imagery, AI-generated disinformation and election manipulation are all governed by a tight four-part legislative package — Singapore's policy hedge against "AI freedom" being abused.

Online Criminal Harms Act (OCHA)

Output · In force

A unified toolkit for online criminal harms — covers AI-generated scams, extortion and intimidation.

Passed in 2023, OCHA gives police and prosecutors a unified toolkit for governing online criminal harms. It is particularly relevant in the AI era: AI-generated scam messages, deepfake extortion content and automated harassment can all be addressed through OCHA's directions and orders, which cover takedowns, access blocking, account restrictions and payment blocking. The Act is the foundational layer of Singapore's output-side AI governance: not AI-specific, but most AI-enabled criminal conduct falls within its scope.

🔗 Statute text

Elections (Integrity of Online Advertising) (Amendment) Bill

Output · In force

Bans deepfakes during elections: prohibits publishing "misleading, AI-generated content that purports to depict candidates' statements or conduct".

A 2024 amendment to the Elections Act targeting deepfakes specifically. Core clause: during the campaign period (from issuance of the writ of election to polling day) it is unlawful to publish "misleading, AI-generated, deepfake content purporting to represent statements or conduct of candidates". Anyone who publishes, shares or funds such content commits an offence. During the campaign window the authorities may issue corrective directions requiring platforms to take down content, block access or display correction statements. This is among the earliest targeted election-deepfake laws in the world, predating comparable deepfake-transparency obligations taking effect under the EU AI Act.

Criminal Law (Miscellaneous Amendments) Bill 2025

Output · Enacted

Criminalises AI-generated intimate imagery and child sexual exploitation material — production, possession and distribution are all prosecutable.

The 2025 Criminal Law amendments expressly bring AI-generated intimate imagery (nudity, sexual imagery) and child sexual exploitation material into the criminal code. Notable innovations: (1) even where the "person" in the image is fictitious (AI-generated rather than a real individual), the offence still applies if the depicted person appears to be a minor; (2) production, possession and distribution all constitute offences; (3) aggravated penalties apply to deepfake intimate imagery targeting identifiable individuals. The amendment closes the legal gap for the new category of "AI-generated non-existent persons".

Online Safety (Relief and Accountability) Bill 2025

Output · Enacted

Fast-track victim relief plus platform accountability — AI-abuse complaints must be acted on within 24 hours.

The 2025 Online Safety Act focuses on victim relief and platform accountability: (1) victims may file complaints with the Online Safety Commission (OSC), which platforms must act on within 24 hours; (2) platforms that fail to comply face significant fines; (3) AI-generated defamation, harassment and sexual imagery all fall within scope. This marks a key step in Singapore's output-side governance shifting from after-the-fact punishment to in-process accountability.

3. Liability and governance — from principles to enforcement

A gradual path from principles to tools to enforcement — FEAT → Veritas → MindForge → AI Risk Management Guidelines.

Guide on Use of Generative AI Tools by Court Users

Liability · Issued

Lawyers and litigants bear ultimate responsibility for legal documents prepared with AI assistance and must disclose any AI use.

The Supreme Court of Singapore's Registrar's Circular No. 1 of 2024 applies across the entire court system. Three principles: (1) lawyers and litigants bear ultimate responsibility for all content submitted to court, whether or not AI was used; (2) legal documents prepared with the assistance of generative AI must disclose that AI was used; (3) cited cases and legal provisions must be verified by a human (to prevent the risk of AI fabricating precedent). This represents a pragmatic judicial posture toward AI tools — not banning use, but holding human responsibility non-transferable.

🔗 Statute text

AI Risk Management Guidelines for Banks

Governance · In force

Formal supervisory expectations for AI model risk management in financial services — among the first dedicated banking-AI regulations globally.

MAS has codified the experience accumulated through FEAT (2018) → Veritas (2021) → MindForge (2024) into a formal set of supervisory expectations. Coverage spans model governance, third-party AI risk, model monitoring, human-in-the-loop, and incident response and accountability. The companion BuildFin.ai platform allows regulated institutions to continuously test and report. This is among the first dedicated banking-AI supervisory frameworks in the world, landing earlier than comparable obligations for financial institutions under the EU AI Act.

🔗 Statute text

Guidelines and Companion Guide on Securing AI Systems

Governance · Issued

Best practices for AI system security across the full lifecycle — filling a gap in AI security governance.

Issued by CSA in October 2024, the guidelines cover the full AI system lifecycle: threat modelling at the planning and design stage, data and model security during development, security testing at deployment, and monitoring and incident response in operations. Particular focus is given to AI-specific risks such as adversarial attacks, data poisoning, model theft and supply-chain security. A 2025 companion paper, "Securing Agentic AI", extends the framework to agentic AI use cases.

Personal Data Protection Act (PDPA) — AI Application

Governance · In force

Sets the legal perimeter for personal data in AI — the Business Improvement Exception leaves room for AI training.

Enacted in 2012 and substantially amended in 2020 to add AI-relevant provisions. The most important changes for the AI era: (1) the Business Improvement Exception — allowing personal data to be used to improve products and services, including AI training, without user consent, subject to a reasonableness test; (2) the right to data portability; (3) strengthened enforcement and penalties. Together with Copyright Act §244, the PDPA forms the dual legal foundation for the use of training data for AI in Singapore.

Why this combination is attractive to AI companies

Four structural reasons stand out:

  1. Permissive training, strict outputs — clear boundaries; you know what is and isn't allowed.
  2. One-stop governance — IMDA + MAS + CSA + MINLAW form a unified framework, no policy fragmentation.
  3. Soft law as the default — voluntary frameworks first, hard law only when the soft law is exceeded; gives industry room to adjust.
  4. English common-law system — judicial reasoning is highly transparent and reusable for cross-border legal opinions.

This is one of the structural reasons OpenAI, Anthropic, DeepMind, Mistral and others have set up regional headquarters in Singapore — beyond capital, talent, and infrastructure, the legal certainty is itself a moat.