AI Governance & Regulation · 2026-04-20 · 03:46

Over 250 AI experts gather in Singapore to set global testing standards

Speaker
CNA
CNA report
Type
Industry Leader
Source
CNA

In Brief

Singapore-proposed AI safety testing standards take centre stage at an ISO international meeting, drawing over 250 experts from the US, China, Japan, South Korea and beyond — the working group's first session in ASEAN. Nearly 100 AI standards are now published or in development, triple the count of a year ago.

Key Takeaways

  • Over 250 AI experts from the US, China, Japan, South Korea and beyond gathered in Singapore for the ISO working group's first ASEAN session.
  • Nearly 100 AI standards are published or in development, three times the count from 18 months ago.
  • Singapore leads two efforts: codifying AI red teaming, and standardising tests for text-based generative AI apps, due next year.
  • Singapore also released Project Moonshot, an open-source tool to help test AI models for bias, privacy and accuracy risks.

Summary

Singapore's proposed AI safety testing standards took centre stage at an ISO meeting drawing more than 250 experts from the US, China, Japan and South Korea. It was the working group's first session in ASEAN. Nearly 100 AI standards have been published or are in development — three times the figure from 18 months ago — as AI moved from generative to multimodal to agentic in just over three years.

Singapore is driving two work streams. The first codifies AI red teaming, where people deliberately try to break a model into producing inappropriate content or leaking information. The second, put forward last year, sets out how to test text-based generative AI apps such as ChatGPT or DeepSeek and is expected to be ready next year. Singapore also released Project Moonshot, an open-source tool to support testing.

Early standards give startups and enterprises the guardrails they need to innovate with confidence. Officials compared the work to building runways — AI innovation, like a high-performance aircraft, stays grounded without one. The standards also signal Singapore's positioning to the global business community and lift trust.

Full transcript

Caption language: en · Fetched: 2026-05-02

Welcome back. Singapore's proposed standard on testing AI safety is being discussed by over 250 experts from around the world. It's part of meetings by the International Standards Organization and includes representatives from the US, China, South Korea, and Japan. This is the first time the group has met in ASEAN. Nearly 100 AI standards have been published or are in the works, three times more than a year and a half ago. This is necessary given the breakneck pace of AI development and use. In slightly over 3 years, we've seen the use of AI grow from generative AI to multimodal and now to agentic AI. So, standards work must keep pace. Embedding standards early is like building new runways. AI innovation is like a high-performance aircraft. Without a safe, well-built runway, its potential remains grounded.

Early standards provide the guardrails that allow startups and enterprises to innovate with confidence. Nicholas Ng explains how the proposed standards could make AI systems more reliable and how Singapore stands to benefit. So, how can you make sure that ChatGPT isn't feeding you incorrect information? Because that is one of the risks that can come with using generative AI systems, along with giving answers that are biased against particular social groups, or infringing personal privacy, among others. Fixing these problems takes testing, and how to do it is a question countries and companies have been trying to answer, especially since AI models aren't like other kinds of software. We have there this millions and billions of parameters, and what emerges there from behavior is better understood with complex systems theory.

Think about the desert. If you look at the single grain of sand, it's very hard for you to deduct where the dunes are, how they are moving, and so on. It's emerging behavior from from smaller components. Singapore's tried to solve that problem and released an open-source tool to help with the process called Project Moonshot, among other methods to find concrete, well-defined, and effective ways to test AI models. There's also the International Organization for Standardization Subcommittee working on AI. They've already published some standards. For example, evaluating such systems in general and applying software testing practices to an AI context. In Singapore, they'll be working on two things. The first aims to codify AI red teaming. This means safety is tested by people trying to break the product.

For example, getting a model to produce something inappropriate or leak information. The other, put forward by Singapore last year, aims to standardize how to test text-based generative AI apps. Think chatbots using ChatGPT or DeepSeek. And defining these standards solves a problem that's been stopping AI developers from getting off the ground, not to mention other wider-reaching benefits. It is going to be a tough journey, a lot of work to be done, but definitely that is going to be well worth it because of the recognition that there are things that we can contribute to the international community as well. And we return, of course, uh enterprises and businesses will also know the positioning of Singapore to be able to uh handle that, and that can only increase the trust.

The standard includes guidance on setting up benchmarks to test how good a generative AI app is. It's expected to be ready next year.

Related Videos

Singapore's ONE Pass adds AI and tech tracks to attract top global talent

2026-04-19 · CNA · 02:54

ONE Pass adds AI and tech tracks; since launching in 2023 it has drawn over 8,000 professionals. The new version relaxes criteria and recognises equity compensation, making it easier for startups and high-growth firms to attract talent.

Josephine Teo on enterprise AI adoption and online safety regulation

2026-03-31 · Josephine Teo · 02:30

Josephine Teo says the government is prepared to intervene if enterprise AI adoption falls short of expected outcomes, while releasing IMDA's second Online Safety Assessment Report.

Josephine Teo urges nations to proactively address agentic AI governance risks

2026-02-20 · Josephine Teo · 05:12

At the World Economic Forum, Josephine Teo unveils the world's first agentic AI governance framework and calls on nations to proactively shape AI governance rules.

Singapore releases the world's first agentic AI governance framework

2026-01-22 · Josephine Teo · 08:19

IMDA launches the world's first agentic AI governance framework at the World Economic Forum, establishing deployment norms for autonomous AI systems.

President Tharman's ICCS 2025 keynote

2025-11-17 · Tharman Shanmugaratnam · 29:51

President Tharman Shanmugaratnam delivers a keynote at the International Cyber Conference Singapore (ICCS), addressing security challenges posed by agentic AI and quantum computing.

AI governance must balance ambition and humility: President Tharman

2024-05-29 · Tharman Shanmugaratnam · 01:42

At Asia Tech x Singapore, President Tharman Shanmugaratnam stresses that AI governance must strike a balance between ambition and humility.