MDDI 演讲稿 · 2025-10-22
杨莉明部长在 AI 高级别小组会议上的开幕致辞
Opening Address by Minister Josephine Teo at HLP (AI) on 22 Oct 2025
要点
- 两项技术在「同一时刻」重塑世界——智能体 AI(agentic AI)+ 量子计算。两者都要求我们从「被动监管」转向「主动准备」。
- 智能体 AI 的治理目标 3 条:①以「保障」(assurance)建立信任,不是控制每一个部署;②框架与测试要在真实场景里相关稳健(要给安全的实验空间);③及时行动——不要重蹈数字鸿沟、虚假信息、网络诈骗的覆辙。
- 新加坡智能体 AI 工具栈:GovTech 的「Agentic Risk and Capability Framework」、IMDA 的 AI Verify + Project Moonshot(red-teaming + 基准测试)+ AI Assurance Sandbox + GovTech-Google Cloud 沙盒。原则——「自主越高,保障越强」。
- 量子安全:CSA 公开咨询两份新工具——《量子准备度指数》(Quantum Readiness Index,自评工具)+《量子安全手册》(Quantum-Safe Handbook);都视为 MVP 与活文档。
- 国际协作:AI 与量子都不尊重国界。新加坡—NIST 互通让公司「测试一次、全球合规」;AI Verify 与 ISO/IEC 42001 / G7 广岛 AI 进程对齐;CSA 与 Google、AWS、TRM Labs 签署合作备忘录——Google Play Protect 的增强反欺诈功能截至 2025 年 9 月在新加坡已拦截 622,000 台设备上的 278 万次恶意应用安装。
完整译文(中文)
MDDI 英文原文译文 · 翻译日期:2026-05-02
我的内阁同事 Goh Pei Ming 先生,
各位部长、各位阁下,
各位嘉宾,
各位同事与朋友:
欢迎来到「新加坡国际网络周」(SICW)第二天。今天看到这么多开发者、安全实务者与政策制定者齐聚一堂,我们感到很高兴。
我们正生活在一个非凡的科技时刻。两件事正在我们眼前重塑世界。
第一是「智能体 AI」(agentic AI)——它不只分析与建议,还会决策与行动。
它们已经能帮我们安排会议、写并部署代码,甚至自动化整段业务运作。
若实施得当——智能体 AI 很可能成为受欢迎的「队友」——放大人的能力、把我们从重复劳动中解放、对复杂问题做更快的回应。
但当系统出错、人失去控制时——也带来「问责」的问题。
第二是「量子计算」。
这项技术将从根本上改变我们对「信任」的思考——尤其在加密与安全通信领域。
它在药物发现、金融建模上的革命性能力让人期待——但它也可能击穿现有加密——危及国家安全与商业运作。
两项技术都带来巨大承诺——也都带来严肃风险。
更重要的是——两者都要求我们做出新的姿态:从「被动监管」转向「主动准备」——因为它们的影响无法被完全预测。
这种转变可以是我们的志向——但需要集体的意志、智慧与行动——好让我们在这些技术「治理我们」之前,「治理它们」。
国际扫视
好消息是——许多国家已经在寻找答案。
在智能体 AI 上——我们都在角力同一个基本问题:如何治理一个能自主行动的 AI?
欧盟与韩国已经建立全面的 AI 法规——但智能体 AI 的自主决策能力——给「透明、人类监督」等关键要求的落地带来了实操挑战。
美国国家标准与技术研究院(NIST)正在为 AI 智能体开发测试标准——而非规定式规则。
英国的 AI Security Institute 已开发出测试 AI 智能体的「沙盒工具包」——但「通过测试」是否能保证好行为还不确定——因为智能体在学习与演化。
在量子领域——也有越来越大的势能。
联合国已宣布 2025 年为「国际量子科学与技术年」——这是国际社会对量子变革潜能的非凡共识。
欧盟启动「Quantum Europe Strategy」——把科学领先转为产业实力。
韩国成立「量子战略委员会」并配以重大资金;日本宣布 2025 年为「量子产业化元年」。
希望与恐惧共存——人们也担心量子能力被滥用以击穿加密、威胁我们数字系统的根基。
我们想知道——如何在「后量子未来」中蓬勃——既驾驭机会、又管理风险。问题是——我们能等多久?
我们的治理目标
作为政策制定者——采取行动时必须对「治理目标」保持清晰。无论是智能体 AI 还是量子计算——我建议现阶段聚焦 3 个目标。
第一——我们的目标必须是「通过保障建立公民信任」——而不是非要控制 AI 智能体与量子技术被部署的每一个实例。
良好的治理始于——即便我们不去控制,也要理解风险——并构建工具系统化地管理风险。
我们需要在系统大规模部署之前,就为它们建立测试、验证与问责的实操框架——一旦部署铺开,再去补救风险可能就晚了。
第二——我们必须确保框架与测试在真实世界应用中相关而稳健。这就要求提供——配有合适护栏的——「安全实验空间」。
第三——我们要确保「及时行动」。在多个领域——我们已经知道「行动太晚」的代价是什么——数字鸿沟、虚假与误导信息、在线伤害、诈骗。我们尽量不要在智能体 AI 与量子上重蹈覆辙。
新加坡不会假装拥有所有答案——但我们想分享我们怎么思考这些议题、以及我们正在做的事。
我们对智能体 AI 治理的方法
对一个人手不足的国家而言——智能体 AI 提供了巨大潜能。
我们看到它们被用来——增强公共服务交付、预判公民需求并提供个性化支持。
我们的中小企业可以从更自动化的运营、资源优化中受益。
我们的国家网络安全也可以更强——以智能体在「机器速度」上侦测、防御、回应。GovTech 已经在试用。
但每一项新能力都带来新风险。智能体 AI 出错时谁担责?我们如何防止恶意使用——自动化的网络攻击或虚假信息行动?我们如何管理对就业的系统性影响、或潜在的「人类失去控制」?
首先——我们必须系统性地识别风险。今年 GovTech 推出了「智能体风险与能力框架」(Agentic Risk and Capability Framework)——它定义了智能体 AI 系统的组件与能力,用以映射风险——并规定保障措施。原则是:在我们能信任「自主性」之前,必须先理解风险出现在哪里、如何出现。
第二——让保障变得可操作、可测量。
通过 IMDA 的「AI Verify 框架」与「AI Assurance Sandbox」——我们给开发者开放工具,测试系统的鲁棒性、透明性与安全性。
IMDA 也通过「Project Moonshot」增强了 AI Verify,使其覆盖生成式 AI 的独有风险——把基准测试与内容红队结合起来——测试幻觉与有害内容生成等问题。
我们也在为智能体 AI 改造工具与安全框架——基于 CSA 的《保护 AI 系统指南与配套指南》(Guidelines and Companion Guide on Securing AI Systems)。
第三——通过真实部署「做中学」。
通过 GovTech-Google Cloud 沙盒倡议——MDDI 旗下机构有机会测试与评估 Google 最新的智能体能力、评估风险、开发缓释措施,并把所学分享给新加坡更广泛的 AI 实务社群。
通过观察这些系统如何运作——以及有时怎样失败——我们能学到「真正需要的护栏」是什么。
第四——我们一致地采用「基于风险」的治理。
我们对治理采取「分行业」的方法。
这种分行业方法旨在确保治理措施与风险成比例。
例如——影响生计的金融决策比娱乐推荐受到更多审视;医疗诊断的验证标准比物流优化更高。
在所有受监管的行业里——我们遵循一个原则:「自主越高,所需的保障越强」。
最重要的是——人始终是最终的责任承担者。
这种协调式做法旨在创建一个全面的治理生态——让测试框架、安全要求、落地指引能彼此协作。随着时间推移——我们希望搭出一座「能随 AI 能力与风险而扩展、但每一层都保留人类问责」的「治理栈」。
我们对量子安全的方法
在量子方面——我们也在采取具体行动。
去年我们公布了《国家量子战略》——5 年内承诺投入 3 亿新元支持量子研发。这些投资建立在 2000 年代初以来打下的基础上——给学界资源去推动科学边界,给业界能力去发展商业应用。
但我们也在管理风险。
尽管量子威胁的认知在上升——但很少有组织真正启动「量子安全迁移」。
原因可能是——量子发展不确定,且缺少具体指南。
CSA 今天将启动两份资源的公开征求意见,以填补这一空缺。
第一——「量子准备度指数」(Quantum Readiness Index)——一个自评工具——帮助组织了解自身应对「量子对加密之威胁」的当前准备度,并规划向「量子安全系统」的迁移路径。
第二——「量子安全手册」(Quantum-Safe Handbook)——为组织(尤其是关键信息基础设施持有者与政府机构)提供向「量子安全密码学」过渡的指引。这本手册由 CSA、GovTech、IMDA 联合开发——并与领先科技公司、网络安全咨询公司、专业协会合作完成。
我们把这些资源视为 MVP——「最小可行产品」——是会通过公开反馈持续改进的「活文档」。欢迎大家贡献——我们一起学习。
国际合作
现在我谈一谈国际合作这一重要议题。
对我们今天讨论的两项技术——有一个根本现实:
智能体 AI 与量子计算都不尊重国界。
量子计算在任何地方的突破,都会影响所有地方的加密。
一个国家系统中的漏洞,能在全球级联放大。
这意味着——国际合作必须从「原则」走向「实践」。
一种方式是确保「跨不同系统、跨不同国家可互操作的治理框架」。例如:
新加坡与 NIST 的「互通对照」(crosswalk)希望让公司「测试一次、全球合规」(test once, comply globally)。
AI Verify 的测试框架与国际标准对齐,包括 ISO/IEC 42001 与 G7「广岛 AI 进程」原则。
这降低了合规负担——同时维持严格的标准。这是一个我们必须时刻记住的实操考量。公司在每一项行动(包括测试)上都会评估成本与收益。
通过与澳大利亚、英国等国签订的《数字经济协定》(DEA)——我们也把治理原则嵌入贸易关系中。我们 2024 年发布了《ASEAN AI 治理与伦理指南》——以协调东南亚的做法——并在 2025 年扩展至覆盖生成式 AI。
在「智能体 AI 安全」这件事上——我们也在国际上主动出招。
CSA 正在就一份关于「保护智能体 AI」的文件公开征求意见。
该文件是其《保护 AI 系统指南与配套指南》的「附录」——专门覆盖智能体 AI 系统的独有风险。
它也是一封邀请函——邀请政府、研究者、产业伙伴——共同塑造「保护智能体 AI」的全球参考。
在量子计算上——NIST 的新「抗量子密码学标准」给我们一套共同的技术基础。
但仅靠标准还不够。
我们必须在区域与国际上协作——制定并协调迁移建议。
这是一个我的 ASEAN 同行希望进一步讨论的领域——我们将研究如何促成这样的对话。
除了政府间合作——我们也在与产业深化实操级伙伴关系。
CSA 将与多家主要科技公司——包括 Google、AWS、TRM Labs——签署合作备忘录——加强 AI 驱动的网络威胁情报共享,并启动针对恶意活动的联合行动。
我们与 Google 的伙伴关系展现了实在的好处——「Google Play Protect」中的「增强反欺诈保护」功能——截至 2025 年 9 月——已在新加坡 622,000 台设备上拦截了 278 万次恶意应用安装。
结尾
让我做个收尾。
智能体 AI 时代已至——量子安全准备的时刻就在当下。它们带来许多承诺——也带来许多未知。
「最大化上行、最小化下行」——这是我们的共同利益。带着紧迫感与目的感一起协作——我们才能学得更快、把握更大的成功概率。
再次感谢各位参加 SICW——祝大家有更多富有成效的讨论。
英文原文
MDDI 官网原始记录 · 抓取日期:2026-05-02
My Cabinet colleague, Mr Goh Pei Ming
Fellow Ministers, excellencies,
Distinguished guests,
Colleagues and friends
Welcome to Day 2 of the Singapore International Cyber Week. We are glad to see so many developers, security practitioners, and policymakers gathered here today.
We are living through an extraordinary moment in technology. Two developments are reshaping our world right before our eyes.
The first is agentic AI – systems that do not just analyse and recommend, but decide and take action.
They can already help us schedule meetings, write and deploy code, even automate entire business operations.
Implemented properly, agentic AI will likely be a welcomed teammate that amplifies human abilities, freeing us from repetitive work and enabling faster responses to complex problems.
But there are also questions of accountability when systems malfunction, and humans lose control.
The second is quantum computing.
This technology will fundamentally change how we think about trust, especially in cryptography and secure communications.
While it promises revolutionary capabilities in drug discovery and financial modelling, it could also break current encryption, potentially compromising both national security and business operations.
Both technologies offer tremendous promise. But they also pose serious risks.
More significantly, both demand something new from us: a shift from reactive regulation to proactive preparation when their implications cannot be fully predicted.
This shift can be our aspiration, but it will take collective will, wisdom and action to govern these technologies before they govern us.
INTERNATIONAL SCAN
Fortunately, many countries are already seeking answers.
On agentic AI, we wrestle with the same basic question: how to govern AI that can act autonomously?
The EU and South Korea have established comprehensive AI regulations, but agentic AI's autonomous decision-making capabilities create practical challenges in meeting key requirements like transparency and human oversight.
The US National Institute of Standards and Technology (NIST) is developing testing standards for AI agents rather than prescriptive rules.
The UK's AI Security Institute has developed sandboxing toolkits for testing AI agents, though it is not known if “passing” a test guarantees good behaviour as the agents learn and evolve.
In quantum, there is also growing momentum.
The UN has declared 2025 the International Year of Quantum Science and Technology – an extraordinary international consensus on quantum's transformative potential.
The EU launched its Quantum Europe Strategy to turn scientific leadership into industrial strength.
South Korea established a Quantum Strategy Committee backed by significant funding. Japan declared 2025 the first year of quantum industrialisation.
Along with hope, there is fear that quantum capabilities can be misused to break encryption and threaten the foundation of our digital systems.
We want to know how to thrive in a post-quantum future – both in terms of harnessing the opportunities and managing the risks. The question is: how long can we afford to wait for the answers?
OUR GOVERNANCE OBJECTIVES
As policymakers, we should always strive to be clear about our governance objectives when taking actions. Whether for agentic AI or quantum computing, I suggest that there are three objectives at this juncture.
First, our goal must be to build trust with citizens through assurance, and not necessarily control all the instances where AI agents and quantum technologies are deployed.
Good governance begins with understanding risks even when we do not exercise control, and building the tools to manage the risks systematically.
We need practical frameworks for testing, validation, and accountability before systems are deployed at scale, because it may be too late to address the risks by then.
Second, we must ensure that the frameworks and tests are relevant and robust in real-world applications. This calls for the provision of safe spaces for experimentation, with appropriate guardrails.
Third, we want to ensure timely action. In several areas, we know the costs of not having acted early enough – the digital divide, misinformation, disinformation, online harms, and scams, for example. Let us try not to make the same mistakes with agentic AI and quantum.
Singapore will not pretend to have all the answers. But we would like to share how we are thinking about these issues and what we are doing in response.
OUR APPROACH TO AGENTIC AI GOVERNANCE
For a country with insufficient manpower, agentic AI offers tremendous potential.
We can see them being used to enhance public service delivery, to anticipate citizens’ needs and provide personalised support.
Our SMEs can benefit from more automated operations and resource optimisation.
Our national cybersecurity can be stronger with the use of intelligent agents to detect, defend and respond at machine speed. GovTech is already experimenting.
But every new capability brings new risks. Who is accountable when agentic AI malfunctions? How do we prevent malicious use – automated cyberattacks or misinformation campaigns? How do we manage systemic impacts on jobs or potential loss of human control?
First, we must identify risks systematically. This year, GovTech launched the Agentic Risk and Capability Framework. It defines components and capabilities of agentic AI systems, to map risks, and prescribes safeguards. The principle is that we must understand where and how risks arise before we can trust autonomy.
Second, making assurance practical and measurable.
Through the IMDA’s AI Verify Framework and AI Assurance Sandbox, we give developers open tools to test their systems for robustness, transparency, and safety.
IMDA has also enhanced AI Verify to cover generative AI's unique risks through Project Moonshot, which combines benchmarking and content red-teaming to test for issues like hallucination and harmful content generation.
We are adapting our tools and security frameworks for agentic AI – building on the CSA’s Guidelines and Companion Guide on Securing AI Systems.
Third, learning by doing with real deployment.
Through the GovTech-Google Cloud sandbox initiative, MDDI agencies have a chance to test and evaluate Google’s latest agentic capabilities, assess the risks, develop mitigation measures, and share the lessons learned with the broader community of AI practitioners in Singapore.
By observing how these systems behave – and sometimes fail – we learn what guardrails are truly needed.
Fourth, we are applying risk-based governance consistently.
We take a sector-specific approach to governance.
This sector-specific approach is designed to ensure that governance measures are proportionate to the risks.
For example, financial decisions affecting livelihoods receive more scrutiny compared with entertainment recommendations, and medical diagnoses demand higher validation standards than logistics optimisation.
Across our regulated sectors, we follow the principle that the higher the autonomy, the stronger the assurance needed.
Most importantly, humans remain ultimately responsible.
This coordinated approach aims to create a comprehensive governance ecosystem where testing frameworks, security requirements, and practical implementation guidance work together. Over time, we hope to build a governance stack that scales with AI capability and risk, while maintaining human accountability at every level.
OUR APPROACH TO QUANTUM SAFE
In quantum, we are also taking concrete action.
Last year, we announced the National Quantum Strategy with S$300 million committed over five years to quantum research and development. These investments build on foundations dating back to the early 2000s to give academia resources to push scientific boundaries, and support industry with capabilities to develop commercial applications.
But we are also managing the risks.
While there is growing awareness of the quantum threat, few organisations have embarked on quantum safe migration.
This is likely because of uncertainty over quantum developments and the lack of specific guidance.
CSA will plug this gap by launching two resources for public consultation today.
First, the Quantum Readiness Index is a self-assessment tool that helps organisations understand their current preparedness for quantum threats to encryption, and chart their migration journey towards quantum-safe systems.
Second, the Quantum-Safe Handbook provides guidance for organisations, particularly Critical Information Infrastructure owners and government agencies, to ready themselves for the transition to quantum-safe cryptography. This handbook was jointly developed by CSA, GovTech, and IMDA, in collaboration with leading technology companies, cybersecurity consultancies, and professional associations.
We consider these resources to be MVP – minimum viable products – live documents that get improved through public feedback. And we welcome you to contribute so we can all learn together.
INTERNATIONAL COOPERATION
Let me now turn to the important topic of international cooperation.
There is a fundamental reality about both technologies that we have discussed today.
Neither agentic AI nor quantum computing respects borders.
A breakthrough in quantum computing anywhere affects encryption everywhere.
A vulnerability in one country's systems can cascade globally.
This means international cooperation must turn from principle to practice.
One way is to ensure interoperable governance frameworks that work across different systems and countries. For example:
Singapore’s crosswalk with NIST hopes to enable companies to "test once, comply globally".
AI Verify's testing framework aligns with international standards including ISO/IEC 42001 and the G7's Hiroshima AI Process principles.
This reduces compliance burden while maintaining rigorous standards. It is a practical consideration that we must keep in mind. Companies always evaluate the cost and benefit of any action, including testing.
Through Digital Economy Agreements with countries like Australia and the UK, we also embed governance principles into trade relationships. We published the ASEAN Guide on AI Governance and Ethics in 2024 to harmonise Southeast Asian approaches, with a further expansion in 2025 to cover generative AI.
On agentic AI security specifically, we are taking proactive steps to address the challenges internationally.
CSA is releasing for public consultation a document on securing agentic AI.
This document is an addendum to its Guidelines and Companion Guide on Securing AI Systems, to cover the unique risks of agentic AI systems.
It is also an invitation – to governments, researchers, and industry partners – to help shape a global reference for securing agentic AI.
On quantum computing, the new NIST quantum-resistant cryptographic standards give us a common technical foundation.
But standards alone are insufficient.
We need to work regionally and internationally to develop and coordinate migration advice.
This is an area that my ASEAN colleagues have asked for further discussions on, and we will see how to facilitate.
Besides inter-governmental cooperation, we are deepening practical partnerships with industry.
CSA will be signing memoranda of cooperation with major technology companies, including Google, AWS, and TRM Labs, to enhance AI-driven intelligence sharing on cyber threats and enable joint operations against malicious activities.
Our partnership with Google demonstrates the tangible benefits – the Enhanced Fraud Protection feature within Google Play Protect has blocked 2.78 million malicious app installations across 622,000 devices in Singapore as of September 2025.
CONCLUSION
Let me conclude.
The age of agentic AI is upon us and the time for quantum-safe preparation is now. They bring much promise but also many unknowns.
We have a collective interest in maximising the upsides while minimising the downsides. By working together with a sense of urgency and purpose, we will learn faster and better our chances of success.
On that note, I thank you once again for being part of SICW and wish you many more fruitful discussions.