AI Governance & Regulation · 2026-01-22 · 08:19
Singapore releases the world's first agentic AI governance framework
In Brief
IMDA launches the world's first agentic AI governance framework at the World Economic Forum, establishing deployment norms for autonomous AI systems.
Key Takeaways
- Minister Josephine Teo launched the framework at Davos, making Singapore the first government to publish guidance for governing agentic AI.
- Built by IMDA, the framework pools practices from agencies and leading firms, covering risk assessment, accountability, technical controls, and user education.
- It is guidance, not law — IMDA will not enforce it but is collecting case studies to iterate.
- The framework targets risks unique to autonomous agents: wrong-action deletions, customer data leaks, and human over-reliance.
Summary
Singapore launched the world's first agentic AI governance framework at Davos, with Minister Josephine Teo making the announcement and IMDA leading the work. Agentic AI systems make decisions and act on their own, often with access to customer databases or payment gateways. When they fail — by deleting live code bases, leaking personal data, or applying outdated policies — the impact is immediate, and limited human oversight means errors slip through. The framework's purpose is to spread best practices beyond frontier firms so SMEs can also implement agents safely.
Accountability sits at the center. Lee Wan See, IMDA's Cluster Director for AI Governance and Safety, says enterprises cannot offload responsibility to the agent. Humans must remain accountable across the lifecycle, with concrete measures like limiting agent access to only systems needed for the task and requiring human review at key checkpoints — for example, before an email is sent to a customer.
It is guidance, not legislation. IMDA will not enforce it. Lee uses customer refunds to illustrate: continuous human oversight defeats the purpose of deploying agents, so the framework recommends identifying meaningful checkpoints (a refund above a threshold), training supervisors on common failure modes (an agent following outdated policies), and using automated monitoring to catch patterns across thousands of cases that no single reviewer could spot. IMDA will collect feedback and case studies to refine the framework over time.
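The checkpoint recommendation above can be sketched in a few lines of code. This is only an illustrative sketch of the pattern the framework describes, not anything IMDA publishes: the threshold value, function names, and review flow are all assumptions.

```python
# Hypothetical human-in-the-loop checkpoint for an agent processing refunds.
# Routine cases are handled autonomously; refunds above a policy threshold
# are deferred to a human reviewer. All names and values are illustrative.

REFUND_APPROVAL_THRESHOLD = 200.0  # assumed policy limit, in dollars

def route_refund(amount: float, request_human_review) -> str:
    """Return the decision for a refund of `amount`.

    Below the threshold the agent acts alone; above it, the supplied
    `request_human_review` callback (the human checkpoint) decides.
    """
    if amount <= REFUND_APPROVAL_THRESHOLD:
        return "auto-approved"           # routine case: agent acts alone
    return request_human_review(amount)  # significant case: human decides

# Example usage with a stubbed reviewer that queues the case for a person:
decision = route_refund(500.0, lambda amt: "pending-human-review")
print(decision)  # pending-human-review
```

The automated-monitoring recommendation would sit alongside a checkpoint like this, aggregating decisions across many refunds to surface patterns no single reviewer would see.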
Full transcript
Caption language: en · Fetched: 2026-05-02
Singapore has rolled out a new guide to help enterprises deploy AI agents more safely and reliably. Now, these are systems that can make decisions and take actions independently. Launched in Davos by Digital Development and Information Minister Josephine Teo, Singapore becomes the first government to put out such a framework for this emerging technology. The whole purpose of then putting out these kinds of frameworks is to say, well, why should only the companies at the frontier have an opportunity to figure it out because they are better resourced, right? Why can't we aggregate this knowledge and make it more widely available to anyone that has an interest to implement this, including small and medium enterprises? The framework pulls together best practices from government agencies and leading companies on using AI agents.
It includes guidelines to assess risks and ensure accountability by making it clear who bears the responsibility in the case of failure. Among its measures are building in controls to stop and check, and educating users. IMDA is accepting feedback and case studies from those who have deployed agentic tools to continue refining the framework. AI agents are taking off, with use cases ranging from handling basic customer service requests to managing complex operations like keeping factory systems running more efficiently. And that's because they can take on routine and repetitive work, freeing up employees' time for higher-value tasks, or operate continuously and at scale to support complex business operations and boost productivity. But technology doesn't think, and the risks are real.
AI agents with access to data can take the wrong actions, from mistakenly deleting records to exposing sensitive information. Users may also grow complacent relying on agents that have performed well before, reducing oversight and allowing errors to slip through. With AI agents increasingly interacting with one another, a single point of failure could trigger cascading effects that cause widespread system disruption. AI agents could also generate conflicting responses, so you might end up acting on the wrong information. That's why industry players have been calling for clear standards on monitoring AI agents and ensuring humans remain accountable. For Singapore, its AI governance framework aims to strike a balance, managing risks while still leaving room for experimentation and innovation. Well, let's get more now. We speak with Ms.
Lee Wan See. She is Cluster Director for AI Governance and Safety at IMDA. First of all, Ms. Lee, welcome to the show. Now, what has changed in AI capabilities that makes agentic AI governance urgent today? Well, as you've highlighted earlier, agents are increasingly given a lot of autonomy. They have the ability to complete a lot of tasks on behalf of humans with very limited human oversight, right? And in doing all of this, they're also given access to sensitive information such as customer info. They can connect to external systems like payment gateways. This means that when agents take the wrong actions, there can be immediate impact. There have been real instances, as you've already highlighted, of coding agents deleting live code bases and data when they have not been instructed to do so.
And another example is agents leaking personal data from customer databases. So, as human involvement is limited, it also means that agent mistakes may not be caught in time. And as this space is changing so quickly, we want to be able to bring together emerging best practices from leading companies so that organizations can have a comprehensive resource to understand and manage the risks of agentic AI. That's why it's urgent for us to start looking into this now. Yeah, I can understand the urgency. Now, that's probably why the framework also emphasizes end-user responsibility. But what does this mean in practice for employees or customers who have to interact with AI agents? So, they have to understand that they still have to take some accountability. They cannot be overly reliant on the agents.
They cannot expect that responsibility shifts to the agents and that they are absolved of any problems when agents take wrong or harmful actions. So, we want to emphasize to enterprises that as they deploy agents in their businesses, they must put in place the measures to ensure that humans are responsible and accountable across the whole life cycle. Such as putting in place technical controls: limiting access, for example, to only the systems that are required for the agent's task. Or providing effective human oversight, such as ensuring human review is completed before an email is sent to a customer. So, it's a new model of governance. It's a new framework for agentic AI. But how then will it be enforced, and who will do the enforcing? Well, it's essentially a set of guidelines.
So, for a start, these are recommendations for organizations to think about what they have to put in place internally. We will not be looking at enforcement. For example, I'm not going to go after a company that has not implemented these measures right now. Really, these are recommendations for them to start figuring out what risks they worry about, and as they think about these risks during the deployment of agentic AI, these are measures they can implement internally. It's important that we recognize that we are all at the very beginning of agentic AI deployment. And in order to give that space, it's really about identifying best practices for organizations to follow while they try to implement and adopt AI effectively.
Yeah, so as you mentioned, it's the start of a journey, and it's still a balance, I guess, because how does the framework help organizations guard against over-reliance bias with agentic AI? Maybe I'll just explain a little of what over-reliance bias means. It means the tendency for humans to over-trust an automated system, like an agent in this case, especially when the system has performed reliably in the past. So, some examples of the recommendations that we have given in the framework: one, continuous human oversight over agents may be impractical. Really, if you expect humans to be involved at every stage, it defeats the purpose of even deploying agents in the first place. So, instead, we say identify significant checkpoints that require human approval.
So, if I use the example of processing a customer refund, maybe identify a point where the refund exceeds a certain amount, and hence that requires a human to come in rather than an automated response from an agent. Another recommendation: perhaps not all human oversight will be effective. We need to make sure that we train human supervisors on very common failure examples or modes. Again using the customer refund example, the human may not be able to catch a problem like an agent following outdated policies. So, the humans must also be trained on updated policies so they know what to look out for. And the third and final example is how we then complement this with automated monitoring. Again with the customer refund example, the humans may not be able to catch patterns over thousands of refunds.
So, we may need to find ways of identifying these patterns through the collection and analysis of data, and so on. So, these are some quite specific recommendations that we made in the framework. Ms. Lee, thank you so much for taking us through this very new framework for agentic AI. I've been speaking there with Ms. Lee Wan See from IMDA.
Related Videos
Over 250 AI experts gather in Singapore to set global testing standards
2026-04-20 · CNA · 03:46
Singapore-proposed AI safety testing standards take centre stage at an ISO international meeting, drawing over 250 experts from the US, China, Japan, South Korea and beyond — the working group's first session in ASEAN. Nearly 100 AI standards are now published or in development, triple the count of a year ago.
Josephine Teo on Singapore's AI priorities and online safety safeguards
2026-03-31 · Josephine Teo · 45:00
At a Lorong AI media Q&A, Josephine Teo details bilingual AI talent, agentic AI governance, the National AI Impact Plan, and AI's impact on the workplace.
Josephine Teo on enterprise AI adoption and online safety regulation
2026-03-31 · Josephine Teo · 02:30
Josephine Teo says the government is prepared to intervene if enterprise AI adoption falls short of expected outcomes, while releasing IMDA's second Online Safety Assessment Report.
Singapore's National AI Impact Plan: supporting 10,000 firms and 100,000 workers
2026-03-02 · Josephine Teo · 02:45
Minister Josephine Teo announces the National AI Impact Plan, targeting training of 100,000 AI professionals and support for 10,000 firms by 2029.
Josephine Teo urges nations to proactively address agentic AI governance risks
2026-02-20 · Josephine Teo · 05:12
At the World Economic Forum, Josephine Teo unveils the world's first agentic AI governance framework and calls on nations to proactively shape AI governance rules.
Singapore's view of AI has shifted: Josephine Teo interview
2026-02-11 · Josephine Teo · 22:15
Josephine Teo details Singapore's AI strategic shift — from cautious observation to full embrace — and how the government systematically drives AI adoption.