MDDI 演讲稿 · 2026-02-20

杨莉明部长在「全球层面的 AI 安全:数码部长与官员见解」专题讨论上的发言

Remarks by Minister Josephine Teo at AI Safety at the Global Level: Insights from Digital Ministers & Officials Panel

Josephine Teo · 数码发展及新闻部长 · 全球层面的 AI 安全:数码部长与官员见解专题讨论

要点

  • Josephine 用「航空枢纽」做类比:新加坡不造飞机(不拥有 Boeing/Airbus),但仍要在制造、维护、空管等环节投入。AI 也是——要让本地区广泛采用,就必须同等地投入风险缓释。
  • AI 监管要「目标精准」:太宽会拖慢创新,太松会给公民「虚假承诺」。新加坡去年立法,要求平台在被通知后必须删除 AI 生成的有害图像内容。
  • 三层 AI 与安全的关系:①AI 作为威胁(被用来攻击系统);②AI 作为攻击目标(多智能体系统失控风险尤大);③AI 作为对抗这些威胁的工具——后者最需要国际合作。
  • 用 IKEA 类比 AI 工具的可信度:用户没法自己检验,必须有可靠的「测试 + 标准」体系,让厂商在产品卖出去之前就承担起安全责任。
  • 对「数字主权」的看法:把 AI 圈在自家国境内不可行、也是虚假的安全感;真正的主权是 Bengio 说的「让每个国家都能坐在桌前,而不是出现在菜单上」。

完整译文(中文)

MDDI 英文原文译文 · 翻译日期:2026-05-02

主持人 Lee Tiedrich(2026 年《国际 AI 安全报告》高级顾问):Teo 部长,新加坡一直走在 AI 治理前沿——从《东盟 AI 治理指南》(ASEAN AI Governance Guide)到《新加坡 AI 安全共识》(Singapore Consensus on AI Safety)。Yoshua(Bengio)刚才提到——这份报告强调,必须把这些评估「翻译」成不同文化与不同规范,并且能落地到实践。基于新加坡的经验,把科学转化为工具与实践、让全世界的人能用,看起来是什么样子?

Josephine Teo 部长:也许我可以以一个小国的视角来分享——我们这片地区对 AI 技术的采用很有兴趣,但对风险范围的认识可能仍在加深当中。

在与各国部长交流时,我常分享一个角度——他们都来过新加坡,进出过我们的航空枢纽。我向他们解释:新加坡不拥有飞机制造技术,波音不属于我们,空客也不属于我们。但我们必须关心这些飞机制造过程中的安全;必须关心维修、维护与大修;必须关心空中交通管理。如果这些环节没有到位,很难想象怎样才能拥有一个繁荣的航空枢纽,并对经过机场的数百万人的生命负责。

这就是为什么我们认为必须深度参与 AI 安全的对话与努力。如果我们希望在这个地区看到广泛采用,就必须同等地了解如何缓释风险。这是起点。

我想说的第二点是——作为政策制定者,我们理解安全方面的努力,最终必须转化为可操作的护栏。这往往意味着标准的制定,意味着监管与法律。

但我们必须以审慎的方式做这件事,因为我们仍想从这项技术中受益。如果我们在落地这些要求时不够精准,结果可能不只是创新节奏受影响。

我们最终可能给到公民一个「虚假承诺」——给他们一种「已经被保护好了」的印象,事实上并没有。

所以我认为我们需要审慎。新加坡的关切之一也包括——一旦明确了该做什么,我们希望能很快推进。

Yoshua 谈到 AI 的滥用——比如用来生成针对女性与儿童的图像。我们去年通过了一项新法律,对那些把这些图像传播给大量人群的服务方设立了法定义务。他们一直说「内容生成不是我们的责任」。这一点我们接受。但是一旦被通知此类有害内容存在,你就有义务移除。我们通过的这部新法律,正是设立这种义务。

Yoshua 也谈到报告中的发现——AI 与网络安全正在以非常令人担忧的方式交织。比如,AI 被用来攻击系统,所以 AI 是一种威胁。但与此同时,我们也看到 AI 本身可以成为网络攻击的目标。当 AI 成为攻击目标——尤其是多智能体系统——这类风险很容易失控。

因此,即便新加坡政府正在试用 AI,我们也希望对这些 AI 智能体系统的架构非常审慎,包括在「赋予智能体多少自主权」的决策过程中究竟纳入了哪些考量。是否有办法在它周围设置护栏?
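这里说的「限制智能体自主权、在外围设置护栏」,可以用一个极简的工具白名单草图来示意(编者示例:`AgentGuard`、`GuardrailViolation` 等命名纯属本文假设,并非任何实际框架的 API,也不代表新加坡政府的具体做法):

```python
# 示意:用「工具白名单 + 人工确认」限制智能体自主权的最小护栏。
# 所有命名均为本文示例假设。

class GuardrailViolation(Exception):
    """智能体试图越权调用工具时抛出。"""

class AgentGuard:
    def __init__(self, allowed_tools, require_approval=()):
        self.allowed_tools = set(allowed_tools)        # 允许智能体调用的工具
        self.require_approval = set(require_approval)  # 调用前必须人工确认的工具

    def invoke(self, tool_name, action, approved=False):
        # 不在白名单内:直接拒绝
        if tool_name not in self.allowed_tools:
            raise GuardrailViolation(f"工具 {tool_name!r} 不在白名单内")
        # 高风险工具:未经人工确认不得执行
        if tool_name in self.require_approval and not approved:
            raise GuardrailViolation(f"工具 {tool_name!r} 需要人工确认")
        return action()

# 用法:只读查询可自主执行,删除操作必须人工批准
guard = AgentGuard(
    allowed_tools={"search", "delete_record"},
    require_approval={"delete_record"},
)
print(guard.invoke("search", lambda: "ok"))  # 打印 ok
```

把「可自主调用」与「需人工确认」分成两层,正对应发言中「赋予多少自主权」这一决策:自主权不是全有或全无,而是按工具的风险分级授予。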

所以我会说:AI 作为威胁、AI 作为目标,以及——我们真正需要更深合作、做得更好的——AI 作为对抗这些威胁的工具。这些都是我们希望在东盟层面取得进展的事情。

主持人:除了政策制定者使用这些信息——我工作中接触很多组织、非营利机构、中小企业。我经常听到的是:「很好,必须从科学出发,那是原点。」但对其他这些组织来说,他们需要的是工具,他们没有一整支科研团队来把科学落实成实践。Teo 部长,从政府视角看,我们有什么办法去推进工具化,让企业与机构更容易把这些好的研究真正部署起来?

部长:我最近在一个类似场合也讨论过这个话题。我用 IKEA 来比喻——你去 IKEA 买家具,IKEA 向你承诺这件家具经过测试。如果是沙发,它可能已经被跳压了 25,000 次都没坏,那你就知道你家小孩跳上去也不会受伤——好吧,至少 25,000 次以内不会。

如果你设身处地考虑一个站在技术接收端的用户——指望他自己去施加安全条件是非常不合理的。他根本没有这种能力,也没有决定「什么能卖给他、什么不能」的权力。

所以,我们作为政策制定者必须意识到:在我们鼓励采用 AI 工具与技术的群体之间,存在巨大的能力鸿沟。我们必须想清楚——在哪些点上设立强制要求,在哪些场合把行业聚到一起、可能比强加严格法规更有用。比如在达沃斯,我们讨论过保险机制、为 AI 模型开发者创造正确激励的可能性。这件事还没有「轻松落地」的办法,但如果我们不能用理性方式把这些对话推进下去,那在管理风险方面我们只会更落后。所以我会说——这种审慎要应用在很多不同层级。

AI 安全方面的研究也必须持续。所以我很高兴,我们正在通过在新加坡举办的第二届「AI 安全国际科学交流」继续这场对话。我们希望更新安全研究中应当优先关注的方向。我相信今年「多智能体系统」会很显眼。

但不能止步于此。我们还有持续推进的项目。我们一开始就在《国家 AI 研发计划》之下做了承诺。在基础研究层面,「负责任的 AI」(responsible AI)是我们非常关注的一块——这两件事必须并行推进。但难道我们不能先有一些测试框架与工具包吗?我们认为「等齐了再做」也不可取。更务实的方式,是承认这些测试工具的不足,然后投入更多努力,去推动用更审慎的方式看待这些系统的风险,并研究如何缓释。

最终我们应当走到这样一个状态:终端用户有「安全保证」,不必再费力地想「该走的测试是否已经走过」。我们离那一点还有距离,但我们必须想办法把路线图理清楚。

主持人:我感兴趣的是——也呼应「如何把科学带到实践」「如何创建评估生态」的主题。第一步是发展科学;第二步是搞清楚「该怎么评估」;第三步是「由谁来评估」?你们怎么看评估生态的形成?是政府来做评估?还是像会计行业那样,用第三方认证审计师来做?我想听听各位的意见,先从 Teo 部长开始。

部长:在东盟语境下,我会主张一种「先解决眼前与近在咫尺的危险」的做法。

如果不聚焦于公众与决策者今天最在意的问题,对话就会显得太理论化——我们可能失去兴趣与势能,连合作的基础都没有真正搭起来。

AI 与现实交叉的领域有哪些?AI 被用来——或被滥用来——通过内容生成伤害他人,这是一个领域。

几乎我接触到的每一位决策者都对此非常生气:他们必须回应选民对那些借助 AI 制造的有害图像的担忧。这类内容对我们的社会是极大的冒犯。

如果我们不能用务实的方式处理这些领域,我担心同行的注意力会从这些事情上溜走。

那我们能做什么?我们必须严肃地问:水印(watermarking)是处理这件事的正确方式吗?还有别的标识 AI 生成内容的方法吗?这是不是我们应当前进的方向?

另一个会很显眼的方向,是 AI 在网络安全中的应用。我认为目前「AI 作为威胁」还远未被充分应对,而「AI 作为攻击目标」更是离人们的视线还很远。围绕同行所在意的领域把对话拉回来——更有机会牢牢吸引他们的注意,并创造有意义的契机来说:「这是你可以测试的方法」「这是可以落地的工具」。

它们不会完美,但它们是重要的起点。

观众提问:我们四处听到「数字主权」(digital sovereignty)这个词,越来越多国家在以各种方式宣告它。我很想听听——至少在 AI 安全领域——你们怎么看它的影响?又有哪些最迫切的安全关切,会因此最先被「丢出窗外」?

部长:我很高兴 Yoshua 给出了一个在我看来非常稳健的视角。他早些时候说:「我们想要一个每个国家都能『坐到桌前』而不是『出现在菜单上』的世界。」

这正是即便面对 AI 也能保住主权的方式。把所有东西都圈在自己国境内来获得「主权 AI」——我认为它给的是一种虚假的安全感。

首先,这做不到;其次,对许多国家而言,最先进的应用注定来自别处,把自己圈起来反而会切断进步的通道,让你更落后。

那「主权」如何融入?它必须是一个被认真处理的话题。它不是一个可以随便挥舞的词。

英文原文

MDDI 官网原始记录 · 抓取日期:2026-05-02

Moderator, Lee Tiedrich, Senior Advisor to the 2026 International AI Safety Report: For Minister Teo, Singapore has been at the forefront of AI governance from the ASEAN AI Governance Guide to the Singapore Consensus on AI Safety. One of the things that Yoshua (Bengio) highlighted that the report talks about is the need to translate some of the evaluation for different cultures and different norms, and also to be able to put it into practice. Based on Singapore's experience, what does it look like to take the science and actually put that into tools and practice that people around the world can use?

Minister Josephine Teo: Perhaps I will offer a perspective as a small state in a part of the world that has a lot of interest in the adoption of AI technologies but perhaps is still only becoming much more aware of the extent of the risks.

In my interactions with my counterparts, I often share with them a perspective – They would have visited Singapore; they would have travelled in and out of our air hub. And I explained to them that Singapore does not own aircraft technologies. Boeing does not belong to us, neither does Airbus. But we have to be concerned about the safety of how these aircraft are manufactured. We have to be concerned about maintenance, repair, and overhaul. We have to be concerned about air traffic management. If we didn't have all these elements in place, it's very hard to see how you can have a thriving air hub and be responsible for the lives of millions of people passing through the airport.

So that's the reason why we think we have to be invested in the conversations and the efforts to bring about AI safety. If we want to see wide adoption in our region, then we must equally be aware of how the risks can be mitigated. So that's the starting point.

The second point I'd like to make is that ultimately, as policymakers, our objective in understanding the safety aspects must translate into how we can put them into operable guardrails. And very often, this would mean standards that are being imposed. This would mean regulations and laws.

But we have to do it in a thoughtful way, because we still do want to benefit from this technology. So if we are not targeted in the way we implement these requirements, then what we might achieve is not just an impact to the pace of innovation.

What we could end up with is a situation where we have given a false promise to our citizens, giving them the impression that we have protected them when in fact we haven't actually done so.

That's why I think we need to be thoughtful. Part of Singapore's interest is also that when there is clarity about what needs to be done, we want to be able to move very quickly.

Yoshua has talked about the misuse of AI, for example, to use it for generating images that often target women and children. What we did was that last year we introduced a new law. It imposes statutory obligations on the services that bring these images and make this content available to vast numbers of people. They've always said that we are not responsible for the generation of such content. And so that's something that we take on board. But having been notified of the existence of such harmful content, then there is an obligation for you to remove it. So this new law that we passed imposes such an obligation.

Yoshua also talked about the findings in the reports – how AI and cybersecurity are intersecting in very, very concerning ways. For example, AI being used to target systems, and so AI is a threat. Now, however, we also see that AI itself can be a target of cyber-attacks. And when AI becomes a target of cyber-attacks, particularly for multi-agent systems, those kinds of risks can easily go out of control.

So even as the Singapore Government is experimenting with the use of AI, we want to be very thoughtful about how these AI agent systems are being architected and what exactly goes into the decision-making process regarding the agency that is being granted. Is there a way to put guardrails around it?

So I would just say that AI as a threat, AI as a target, and where we really need to cooperate and do much better is in using AI as a tool to fight these threats. Those are the kinds of things that within the ASEAN community we hope to be able to make progress on.

Moderator: In addition to the policymakers being able to use this information -- through my work, I end up talking to a lot of organisations, nonprofits, small- and medium-sized businesses. What I hear a lot is, it's great -- you have to start with the science, and that is ground zero. But then for some of those other organisations, they need the tooling. They're not going to have a whole scientific staff to figure out how to put that into practice. And I'm just wondering, from the government's perspective, Minister Teo, what are your thoughts on how we might be able to advance some of the tooling to take this great learning and make it easier for companies and other organisations to actually deploy?

Minister: I was at a similar session recently, and this topic came up. The way I think about it is that I use IKEA as an example. You know, when you go to IKEA, you buy furniture, and IKEA promises you that this furniture has been tested. So, if it's a couch, it has been jumped on, perhaps 25,000 times, and it didn't break, you know that your kids are not going to be hurt if they jump on it too – well up to 25,000 times.

If you think about a user on the receiving end of this technology, it is quite unreasonable to expect them to have to impose safety conditions on their own. They're simply not in a position to do so, and they don't have the power to decide what gets sold to them and what does not.

So, we as policymakers must recognise that there is a huge gap between those that we are encouraging to adopt AI tools and technology in various contexts. We must think about where the right points are to make these requirements mandatory, and where it might be more useful for industries to come together, rather than imposing strict mandatory requirements. For example, in Davos, we discussed the possibility of insurance schemes and creating the right incentives for AI model developers. And I think that there is no easy landing point just yet, but if we fail to engage in these conversations in a rational way, then I think we are even further behind in trying to manage the risks. So I would say that the thoughtfulness has to be applied at many different levels.

There needs to be continued research in AI safety. And so I'm very happy that we are continuing to have this conversation through the second edition of the International Scientific Exchange for AI Safety in Singapore. We hope to update which areas of safety research that should be prioritised. I think this year, I certainly agree that multi-agent systems are going to come up quite prominently.

But we cannot just stop there. We also have an ongoing program. We started by setting aside commitments under our own National AI R&D Plan. In fundamental research, one of the areas that we are very interested in is responsible AI, so you need the two to go hand in hand. But can we not have some testing frameworks and toolkits to begin with? We think that that is also not helpful. It is more pragmatic to try to recognise the shortcomings of those testing tools, and then to invest further effort in promoting more thoughtful ways of looking at the risk of these systems and how to mitigate against them.

Ultimately, we should try to get to a point where the end user has assurance of safety so that they don't have to be thinking so hard about whether the proper tests have been applied. We're not there yet, but I think we need to find a way to work out the roadmap.

Moderator: I'm interested, and I think it touches on some of the themes of “how do we take the science and bring it to practice?” and “how do we actually create this evaluation ecosystem?” So step one is developing the science. Step two is then figuring out, “how do we actually evaluate this?” And then there's “by whom?” How do you see an evaluation ecosystem emerging? Do you see governments being the evaluator? Do you see this going more like we have with accounting, where you have third-party certified auditors doing the evaluations? I'd be interested in each of your thoughts. Maybe start with Minister Teo.

Minister: Well, certainly in the ASEAN context, I would advocate for an approach that addresses near and present dangers that everyone is dealing with.

The risk of not focusing on what's most prominent in people's minds today, and policymakers' minds today, is that the conversation may feel too theoretical, and we may lose interest and momentum, and we won’t even build the foundations of cooperation in a meaningful way.

What are some of those areas where AI intersects? AI being used, or misused, for harming people in terms of content creation. I think that's one area.

Almost every single policymaker that I come across is very, very upset by the fact that they have to address their constituents' concerns about all these harmful images that are being created with the use of, or with the help of AI. It's very offensive to our societies.

And if we are not able to work on these areas in a meaningful way, in a practical way, then I think we risk losing my colleagues' attention.

So what can we do? We have to then seriously ask: Is watermarking the correct approach to dealing with it? Is there some other way of labelling AI-generated content? Is that even the right direction that we should be moving in?

The other area that I think will be very prominent is the use of AI in cybersecurity. I don't think at this point in time AI as a threat is adequately addressed. AI as a target is even further from people's minds. Picking up the conversation in the areas that my colleagues care about, I think, stands a better chance of anchoring their attention and creating meaningful opportunities for us to say, “Here are the ways you can test for it”, and “Here are the tools that can be applied”.

They won't be perfect, but they are an important start.

Audience Member: We hear a lot about the rise of digital sovereignty everywhere, and more and more countries are trying to claim it in one way or another. I would be really curious to hear, at least in the AI safety field, how you are perceiving that impact, and which of the most pressing safety concerns get thrown out of the window first as a result?

Minister: Yeah, I'm so glad that Yoshua has offered a view that to me is a very sound approach. You (Yoshua) said earlier that what we want is a world where every country can be at the table, not on the menu.

That's exactly how you can preserve sovereignty, even with AI developments. The idea that you get sovereign AI by confining everything to your own shores, I think it gives a false sense of security.

Firstly, it's not achievable. Secondly, the idea that you can do so would, I think, mean that for many countries, where the most sophisticated applications will have to originate from elsewhere, it just cuts you off from being able to make progress, and that puts you even further behind.

So how does sovereignty fit in? It has to be a topic that is dealt with thoughtfully. It's not a term to be bandied about too easily.