MDDI Speech Transcript · 2026-02-20

Closing Remarks by Minister Josephine Teo at the United Nations Office for Digital and Emerging Technologies (ODET): The Role of Science in International AI Governance Panel Session

Josephine Teo · Minister for Digital Development and Information

Key Points

  • Singapore positions AI as a force for the public good, backed by a S$1 billion investment in the National AI R&D Plan (including foundational and applied research into responsible AI), the Digital Trust Centre (its designated AI Safety Institute), and the Centre for Advanced Technologies in Online Safety.
  • AI governance will always involve a tension between moving quickly and moving carefully; science and policy must be integrated. This is not easy, but it is not an effort to give up on, and international cooperation on approaches that are interoperable across jurisdictions is a key foundation.
  • The UN's unique value lies in its legitimacy and inclusiveness, which can encourage interoperability across an increasingly fragmented global AI governance landscape. Singapore welcomes the establishment of the Independent International Scientific Panel on AI, building on the work of the UN High-Level Advisory Body on AI.
  • There is already substantial convergence on high-level principles such as transparency, accountability, fairness, and safety; the challenge is operationalising them through standardised evaluation methodologies and capacity building, so that countries beyond those with large AI research ecosystems can participate meaningfully.
  • Singapore's contributions: hosting the International Scientific Exchange on AI Safety and bringing about the Singapore Consensus on Global AI Safety Research Priorities; organising two editions of the Singapore AI Safety Red Teaming Challenge; and chairing the ASEAN Working Group on AI Governance, producing the ASEAN Guide on AI Governance and Ethics and expanding it to address generative AI risks.


Original English Text

Official transcript from the MDDI website · Retrieved 2026-05-02

Good morning, everyone.

First, allow me to thank the Secretary-General for his remarks. They serve as very useful guidance to all of us working on this important technology.

For the closing remarks, I thought it would be useful to offer a perspective from a small state. Singapore has a population of just 6 million people, and more than 30 years ago, at the United Nations (UN), we became the convener of the Forum of Small States, which still has about 108 members.

I will just make three points on how we look at developments on this front.

The first point is that we believe in AI being used as a force for the public good. To do so, it is important that we continue to invest in the science that underpins it, and ground trust in evidence.

This certainly requires sustained investment in research. It is also the reason why we set aside a $1 billion investment in the National AI R&D Plan, which will include foundational and applied research into responsible AI. We believe in it, and we have to put money behind this effort.

There are, of course, other investments, such as building up the Digital Trust Centre – our designated AI Safety Institute, which has been participating in important conversations on this topic – and setting up the Centre for Advanced Technologies in Online Safety. These are just some of the efforts that we can dedicate resources to as a small state.

The second point I want to make is that there is almost always going to be a tension between moving quickly, given the pace of AI development, and moving carefully, given the latest evidence that presents itself on what we should be paying attention to.

Both impulses are necessary, and we believe that it is not impossible to try and balance them through the integration of science and policy.

It is not easy, but it is not an effort that we should give up on. I should add that, on this score, it would be much better if we could cooperate internationally to develop sound approaches that are also interoperable across different jurisdictions – an effort that we believe underpins the work being carried out by the UN.

This brings me to my third point. I want to highlight the important role that an organisation like the United Nations plays in facilitating global discourse to bridge science and policy. I cannot overemphasise the importance of this effort.

We must recognise that the global AI governance landscape is becoming increasingly fragmented – there are multiple initiatives, frameworks, and institutions.

The UN's unique value lies in its legitimacy and inclusiveness to encourage interoperability across efforts.

We, therefore, welcome the establishment of the Independent International Scientific Panel on AI, building on the work of the UN High-Level Advisory Body on AI, which published its report on Governing AI for Humanity at the end of 2024.

We note that the Panel's multidisciplinary approach – spanning machine learning, applied AI, social science, and ethics – is necessary to address the complexity of AI governance challenges.

Finally, I would just like to acknowledge that we now have substantial convergence on the high-level AI principles – Yoshua talked about this – transparency, accountability, fairness, and safety. The challenge is in operationalising them.

We need to find standardised evaluation methodologies that work across different regulatory contexts. We need capacity building so that all countries can meaningfully engage with the technical evidence, not just those with large AI research ecosystems.

I would encourage all stakeholders to view scientific input not as a constraint on policy flexibility, but as a foundation for more durable, effective governance that can maintain public trust.

We need to keep the conversation going – one where science informs governance and governance sharpens science.

Perhaps I would just end by highlighting Singapore's continued commitment to advancing these discussions.

We were very fortunate to host the International Scientific Exchange on AI Safety and to bring about the Singapore Consensus on Global AI Safety Research Priorities. Yoshua was in Singapore for this very momentous event.

We will continue to participate in joint testing efforts of the International Network for Advanced AI Measurement, Evaluation and Science. We have organised two editions of the Singapore AI Safety Red Teaming Challenge, the first multicultural and multilingual AI safety red teaming exercise focused on the Asia Pacific region.

And as chair of the ASEAN Working Group on AI Governance, we have actively spearheaded efforts to foster a trusted environment in ASEAN by adapting global norms and best practices for ASEAN and bringing about regional harmonisation through the ASEAN Guide on AI Governance and Ethics, as well as expanding it to address the risks in generative AI.

We are now working within ASEAN to explore practical tools for AI safety testing and aim to collectively develop a set of AI safety benchmarks that reflect our region's concerns.

And finally, I'd like to welcome all colleagues to join us in Singapore for the second edition of the International Scientific Exchange, which we expect to take place on the 17th and 18th of May, and we look forward to furthering our discussions in this area. Thank you very much once again.