MDDI 演讲稿 · 2023-11-16

杨莉明部长在第四届欧洲 AI 联盟大会上的线上访谈实录

Transcript of Minister Josephine Teo's virtual interview at the 4th European AI Alliance Assembly

Josephine Teo · 数码发展及新闻部长 · 第四届欧洲 AI 联盟大会

要点

  • 新加坡的「可信 AI」三层信任:①公众信任(公平、安全、有益;为伦理设护栏;机构问责与透明);②政府—产业信任(开源 AI Verify,已聚集 90+ 公司含 IBM、Google、微软、Salesforce、Red Hat);③政府—政府信任(新欧《数码伙伴关系》、新加坡—NIST AI Risk Management Framework 与 AI Verify 的「映射」)。
  • 国际合作必须包容——所有受 AI 影响的国家都该在场。新加坡在 ASEAN(已发布《AI 治理与伦理指南》;明年新加坡担任 ASEAN 数码部长会议主席)+ 联合国(高级 AI 咨询机构)+ 「Digital Forum of Small States」(108 个成员)+ G7 广岛进程 + 英国 AI 安全峰会(与英国 AI 安全研究院合作)等多通道发声。

完整译文(中文)

MDDI 英文原文译文 · 翻译日期:2026-05-03


Q1) 您能分享新加坡在「确保可信 AI」上的愿景与努力吗?

AI 安全如今是大小国家共同的关切。同时,我相信我们都看到了 AI 能带来的巨大机会。AI 显然具有变革潜力,我们相信它可以被用于公共利益。对新加坡这样一个从根本上受劳动力规模约束的国家而言,我们对 AI 的兴趣,其实根植于它作为「力量倍增器」的潜力。

但我们也清楚,AI 的广泛使用伴随着多个领域不安感的上升,包括它如何被用于诈骗、网络攻击、误导与虚假信息。构建可信生态,是「为公共利益借力 AI」的核心基础。这意味着把标准与治理框架放就位,确保 AI 被安全、负责任地开发与部署,在缓释风险的同时尽量扩大创新空间。

我们认为,这一生态的运转需要三个层级的信任建设。

首要是「公众信任」。要加速 AI 的广泛采用,每个人都需要相信这项技术是公平、安全、有益的。政府在帮助构建公众信任、回应「AI 会扰动职场与生计」的关切上扮演着关键角色。这是我们必须努力的领域。我们也必须为 AI 的伦理开发与使用设立护栏。更关键的是,我们还必须确保组织对其 AI 使用做到「问责」与「透明」。

基于这种思考,新加坡可能是世界上最早把 AI 治理框架放就位的国家之一。我们在 2019 年这样做了,当时大概是亚洲首个。

但构建公众信任并非政府一己之力所能完成。产业也必须向公众传达其维护标准、遵循 AI 治理的承诺。组织同时也必须帮助员工适应 AI 的采用。这些都是公众信任的根基。

第二,政府与产业之间的信任也非常重要。汇集公私部门的最佳能力,让我们能以安全且负责任的方式实验 AI 技术。今年 6 月,新加坡将治理框架与测试工具包「AI Verify」开源。这让模型与应用开发者、第三方测试方与研究社群能够共同为更可信的 AI 做出贡献。我很高兴地分享,已有 90 多家公司(包括 IBM、Google、微软、Salesforce、Red Hat 等主要玩家)加入「AI Verify Foundation」,帮助我们让这一工具包更稳健地运转。我们也欢迎欧洲公司参与 AI Verify。最近,我们还通过一个首创的「评估沙盒」把这一工具包扩展到生成式 AI。同样,我们非常欢迎欧盟成为这一倡议的一部分。

第三是政府之间的信任。各国必须走到一起,协调 AI 治理的方法。AI 创新需要共同的规则、标准与基准才能起飞。今年 2 月,我们与欧盟签署了《新欧数码伙伴关系》。这或许是一个非约束性框架,但它涵盖了非常全面的议题,包括数字贸易便利化、跨境数据流动、网络安全,以及 AI 等新兴技术与标准。

因此,我们希望探索如何推动我们各自的 AI 治理框架之间的互操作性。它们可能出自略有不同的哲学,也未必完全相同,但寻找共同基础始终有价值。这正是新加坡与美国最近宣布合作的原因。我们最近完成了一次成功的「映射」,将美国国家标准与技术研究院(NIST)的《AI 风险管理框架》与新加坡的 AI Verify 做了对照。因此我们相信,国际合作的基础已经开始建立,这将促进负责任且安全的创新,为我们的人民与企业带来切实的好处。

Q2) 新加坡如何为「全球可信 AI」的国际合作做贡献?

我先退一步说:一谈到国际合作,我们必须尽力做到包容。这意味着,所有人民与企业会受到 AI 广泛开发与部署影响的国家,都希望成为这场对话的一部分,不希望被遗漏。

我们也正处在 AI 发展的关键节点,作为一个国际社群协作,会让我们更有机会确保人人都能以安全、可靠的方式充分收获 AI 的益处。所以我们必须继续推动多利益相关方交流,把多元视角带到桌前,而不能由少数人替我们所有人发言。

因此,我们希望继续鼓励同行以包容的方式,确保较小国家与发展中国家在塑造健康的国际可信 AI 环境时拥有发言权。

也和大家分享:在我们所在的 ASEAN 区域,成员国处在 AI 发展与数字访问的不同阶段。但我们仍然合作制定出了《ASEAN AI 治理与伦理指南》,为本区域提供了一个指引前进的基线。明年新加坡担任「ASEAN 数码部长会议」主席时,我们将继续在这项工作的基础上做出贡献并加以拓展。

若把合作扩展到联合国层面,我们当然欢迎「AI 高级咨询机构」(high-level advisory body on AI)的成立。它将与多边层面正在推进的多项数字倡议互补,并把一个跨地区、多元的重要群体聚到一起。

在成员国之间,新加坡通过拥有 108 个成员的「Digital Forum of Small States」支持我们的全球数字目标。这是一个我们分享经验、相互支持能力建设的社群,好让每个国家都能有意义地参与全球 AI 对话。这些努力将与 G7 广岛进程等其他正在进行的全球对话互补。

本月早些时候,我出席了在英国举办的「AI 安全峰会」。峰会汇集了不同利益相关方,讨论 AI 安全(尤其涉及基础模型)的重要议题,参与者涵盖政府、产业专家、研究社群与公民社会。峰会上宣布,新加坡将与英国「AI 安全研究院」合作,共同推进 AI 测试方面的能力与专长。最终,我们希望共同创造能切实惠及经济与社会的可信 AI。


英文原文

MDDI 官网原始记录 · 抓取日期:2026-05-02


Q1) Could you share the vision and efforts of Singapore to ensure trustworthy AI in your country?

AI safety is now an issue of concern for all countries, both large and small. At the same time, I think we all see the tremendous opportunities that AI can bring. AI certainly has transformative potential, and we believe it can be harnessed for the public good. And for a country like Singapore that is fundamentally constrained by the size of our workforce, our interest in AI is really grounded in its potential as a force multiplier.

However, we are also aware that the use of AI on a widespread basis is accompanied by a growing sense of unease in several areas, including how it can be used for scams, cyberattacks, and for misinformation and disinformation. Building a trusted ecosystem is the core foundation for harnessing AI for the public good. And this means putting in place standards and governance frameworks to ensure that AI is developed and deployed safely and responsibly, and that we can mitigate risks, while maximising the room for innovation.

We believe there are three levels of trust building that are necessary for this ecosystem to operate and function.

First and foremost is public trust. To accelerate the widespread adoption of AI, everyone needs to feel that the technology is fair, safe and beneficial. Governments play an essential role in helping to build the public trust, to address concerns that AI will bring disruption to workplaces and livelihoods. That is one area that we must work on. We must also set up guardrails for the ethical development and use of AI. And critically, we must also ensure that organisations are accountable and transparent about the use of AI.

As a result of this thinking, Singapore was probably one of the first countries in the world to put in place an AI governance framework. We did this in 2019. At the time, we were probably the first in Asia.

Now, building public trust is not just for the government alone. Industry must also communicate to the public its commitment to uphold standards and adhere to AI governance. Organisations must at the same time help their workers to adapt to AI adoption. So, these are the foundations of public trust.

Second, the trust between government and industry is also very important. Harnessing the best of public and private sector capabilities allows us to experiment with AI technologies in a safe and responsible way. In June this year, Singapore open-sourced our governance framework and testing toolkit called AI Verify. This allows us to involve model and app developers, third-party testers and the research community to collectively contribute to more trustworthy AI. I am glad to share that over 90 companies, including major players like IBM, Google, Microsoft, Salesforce, and Red Hat, are now part of the AI Verify Foundation to help us make this toolkit work in a more robust manner. We also welcome European companies to participate in AI Verify. We also recently extended this toolkit to generative AI through a first-of-its-kind evaluation sandbox. Again, we very much welcome the EU to be part of this initiative.

The third area is the trust that we will need between governments. There is certainly a need for countries to come together to harmonise the approaches around AI governance. AI innovation needs common rules, standards and benchmarks to take off. With the EU, we have signed the EU-Singapore Digital Partnership in February this year. This may be a non-binding framework, but it covers a very comprehensive range of issues including digital trade facilitation, cross-border data flows, cybersecurity, and emerging technologies such as AI and standards.

So, we hope that we can explore ways to promote interoperability between our AI governance frameworks. They may come from a slightly different philosophy, and they may not be identical, but there will always be value in us trying to find common ground. And this is the reason why Singapore and the US announced our recent collaboration. We recently went through a successful mapping between the US National Institute of Standards and Technology's AI Risk Management Framework and Singapore's AI Verify. So, we are confident that the foundation for international collaboration has already begun to be built. And it will help to promote innovation that is responsible and safe and can lead to tangible benefits for both our people as well as our businesses.

Q2) How does Singapore see itself contributing to international cooperation for trustworthy AI globally?

I will take a step back to first suggest that when it comes to international cooperation, we really must try our best to be inclusive. And by that, we mean all the countries whose people and businesses will be impacted by the widespread development and deployment of AI. We all want to be part of this conversation, and we don't want to be left out.

We are also at a critical point in AI development, where working together as one international community will give us a better chance of ensuring that everyone can fully reap the benefits of AI in a safe and secure manner. We must therefore continue to promote multi-stakeholder exchanges to bring diverse perspectives to the table. It cannot be a few speaking for all of us.

And so, we want to continue to encourage our colleagues to be inclusive in ensuring that the smaller as well as developing countries have a voice in shaping a healthy international environment for the trustworthy use of AI.

Now, let me also share that within the region in ASEAN, the member states are of course in different stages of AI development and digital access. Yet, we have found it possible to work together to develop an ASEAN guide on AI governance and ethics. This allows us as a region to have a baseline to guide our progress. We will continue to contribute as well as to build on this work when Singapore chairs the ASEAN Digital Ministers’ Meeting next year.

If we then broaden this cooperation to the UN level, we certainly welcome the setting up of the high-level advisory body on Artificial Intelligence. It will complement the various ongoing digital initiatives at the multilateral level and bring together an important cross-regional and diverse group.

Among the member states, Singapore is supporting our global digital objectives through the Digital Forum of Small States, which comprises 108 members. This is a community through which we share experiences and support one another in building capacity, so that everyone can participate meaningfully in the global AI conversation. These efforts will complement other ongoing global conversations on AI such as the G7 Hiroshima process.

And earlier this month, I was at the AI Safety Summit that was held in the United Kingdom, which brought together various stakeholders to discuss the important issue of AI safety, especially with foundation models. Participants cut across governments, industry experts, as well as the research community and civil society. It was announced at the Summit that Singapore will be working with the UK AI Safety Institute to collectively advance capabilities and expertise in AI testing. Ultimately, we want to co-create trustworthy AI that can effectively benefit the economy as well as society.
