MDDI 演讲稿 · 2026-02-20
杨莉明部长在「AI:全球背景」炉边对话中的发言
Remarks by Minister Josephine Teo at the Fireside Chat on AI: The Global Context
要点
- 在中美技术「巴尔干化」的图景下,新加坡的立场是「成为可被信任的节点」(trusted node)——以一致而有原则的方式行事。
- 想用 AI 监管来解决「社会不平等」是不现实的——监管要做的是安全护栏。社会团结要靠其他工具:再就业、住房、医疗、教育。
- 杨莉明的 15 年后愿景:是否成功的关键词是「信任」(trust)。如果普通公民相信 AI 没有夺走生计、没让他们被误导、没破坏家庭——那就算走得很远了。
完整译文(中文)
MDDI 英文原文译文 · 翻译日期:2026-05-02
主持人 Mariano-Florentino Cuellar(卡内基国际和平基金会主席):我们谈论新兴技术——或者说已经现身的技术——以及它将如何影响大大小小的国家。Teo 部长,您在其中扮演关键角色——我知道这一点,是因为我在世界上每一场 AI 峰会都见到您。这很惊人。
像新加坡这样的国家,如何才能驾驭这股海啸般的变化?您认为我们能从新加坡的策略中学到什么?在我看来,新加坡既走在 AI 治理的前沿——比如《Model AI Governance Framework》——又要穿行于一个在一些人眼中已围绕技术栈在中美之间「巴尔干化」的世界。
Josephine Teo 部长:非常感谢。Tino,您一连串问了好多问题。我尽力一一回应。您的话里其实蕴含了一个共同关切:技术「脱钩」的风险,以及一个小国在这种语境下要做什么、又要如何在大国博弈中穿行。
我们的思考方式是——对新加坡来说,保持作为「可信节点」(trusted node)运作的能力非常重要。所谓「可信」是指:你可以信任我们、放心托付你的技术;这样你的公司与人员才能继续获取这些最先进的技术——因为它们不会被滥用,被误用的风险也被降到最低。
问题是,我们怎样保持「被信任」?我认为唯一的办法,是行事一致、有原则。一致而有原则不取决于体量——新加坡并不是唯一一个在这件事上有良好纪录的小国。
我们在「以新加坡为本」(pro-Singapore)这件事上始终一致。有时我们的选择会与这个国家或那个国家一致,有时与许多国家一致,有时只与少数国家一致——但它们始终与我们自身在技术选择上的利益对齐。比如 5G——我们一贯按原则行事:第一,这是移动网络运营商必须做出的商业决策,他们要依据性能、安全、韧性来判断什么对自己可行,并顾及我们这一环境下既有的全部规则。这就是我们行事的大方向。这并不容易,但这条路一直对我们行之有效。
主持人:AI 带来巨大可能,但伴随机会而来的,往往是某种破坏,是劳动力市场快速变化的国家所面临的真切政策难题。问题是——我们如何制定正确的策略,让世界可能获得的生产力收益真正转化为「共享繁荣」?您认为我们能在这件事上做些什么?
部长:有时候我们会倾向于通过「监管」AI 来减缓它的推进、试图把风险挡住。我并不否认我们需要为 AI 安全设立护栏,这些当然重要。但若指望 AI 监管去解决另一些重要问题——比如更严重的社会不平等——我认为是不现实的。
处理这件事的方式,是看还有哪些方法能加强社会团结。比如:我们准备了什么样的安排,帮助人们从一份工作换到另一份?我们准备了什么样的安排,确保即便收入不高的人也有机会拥有自己的房子、获得良好的医疗、把孩子供到很高的教育水平?这些才是不能回避的对话——不能光指望监管把问题解决。
主持人:想象 15 年后,您回望过去。在那一刻,您在印度这同一个舞台上接受访谈,您说——「世界把与 AI 这项新兴技术的关系处理得很好」,「结果很好,是因为某某」。我想请您说出那个您认为对这次过渡最关键的「某某」。各位刚才提到了不少东西,但我想听的是那个最重要的、您最希望留给观众的关键因素。
部长:对我来说,那个词是「信任」(trust)。15 年后,如果我们去问所有 AI 被广泛部署的国家的公民——「你信任这项技术吗?」如果他们的回答是「不」,那我们一定在某些方面失败了。如果他们相信这项技术的部署没有夺走他们的生计、没有让他们对世界产生全然的误解、没有摧毁家庭,并且让他们能安全稳妥地过自己的日子——那就算成功。如果他们仍能看到「这是一项在设好护栏后能合理运作的技术」——我想我们就已走了相当远。
英文原文
MDDI 官网原始记录 · 抓取日期:2026-05-02
Moderator, Mariano-Florentino Cuellar, President, Carnegie Endowment for International Peace: We talk about emerging or rather emerged technology, and how much it's going to affect countries, large and small. Minister Teo, you are playing a critical role – and I know this because I see you at every single AI summit in the world. It's amazing.
How are countries like Singapore in a position to navigate this tsunami and these changes, and what do you think we can learn from Singapore's strategy? As I see it being at the forefront on AI Governance, like the Model AI Governance Framework for example, but also navigating a world that some people see as balkanised between China and the United States around the technology stack.
Minister Josephine Teo: Thank you very much. Tino, that's a lot of questions packed into one. I'll do my best to address them. I think embedded in what you're saying is that there is the risk of technology decoupling, and what does a small state do in this kind of context, and how do we navigate the big power contestation?
The way we think about it is that for Singapore, it's very important for us to maintain this ability to operate as a trusted node. Trusted node means that we can trust you with our technology, so your companies and people, can continue to access these technologies that are the most sophisticated, because they will not be abused, and the risk of them being misused is also minimised.
The question is, how do we remain trusted? And I think the only way to do so is if we act in a consistent and principled way. Being consistent and principled is not a matter of size – Singapore is not the only small state that has a good track record of maintaining this discipline.
We are consistent in being pro-Singapore, and sometimes our choices may align with this country or that country. Sometimes they will align with many countries. Sometimes they only align with a few countries, but they always align with our own interests in technology choice. For example, 5G – we are always operating on the basis of principles – number one: that these are commercial decisions that have to be undertaken by the operators of the mobile networks, and they have to decide on the basis of what works for them, in terms of performance, security and resilience – keeping in mind what are all the rules that are in place in our context. So those are the broad directions in which we operate in, and it's not easy, but it's a path that has served us well.
Moderator: There are enormous possibilities for AI, but along with that opportunity, will probably come some disruption, some real policy difficulties in some countries that are experiencing rapid changes in the labour market. The question then is how we might develop the right strategy so that the productivity gains that the world can experience would actually translate into shared prosperity. What do you think we can do on that score?
Minister: I think sometimes there is a tendency to want to think of ways of regulating AI in order to slow down its advance, and perhaps, to try and forestall the risk. I'm not underestimating the need, to make sure that there are guardrails on AI safety. I just want to say that these are important, but to over-expect AI regulations to deliver on the other important issues, such as the potential for greater social inequality, I think, is unrealistic.
The way to deal with it is to look at what other methods there are to strengthen social solidarity. For example, what provisions do we put in place to help people move from one job to the next? What provisions do we put in place to ensure that even people who don't earn a lot have the prospect of owning their own homes, access to good healthcare, and educating their children to a very high level? I think these are the other things, and you cannot run away from those conversations just by expecting regulations to solve the problem.
Moderator: Imagine yourselves 15 years in the future, looking back at the past. At that point, you're being interviewed on the same stage here in India, and you're saying it's been a very good thing to see how well the world has handled its relationship with this emerging technology of AI. And it's turned out very well, because of “blank”. I want you to mention one thing that you think in particular would have been so critical to make that transition. Well, you've all mentioned a bunch of things, but I'm interested in the main, most important takeaway that you'd like to leave with the audience.
Minister: For me, that one word is trust. In 15 years, if we go and ask citizens in all the countries where AI is being deployed widely: “do you trust this technology?” If their answer is no, then I believe that we must have failed in some way. If they believe that this technology has been implemented in a way that didn't rob them of their livelihood, didn’t leave them being totally misinformed about the world, allowed them to carry out their lives in a safe and secure manner, and that it didn't destroy families, I think that would be a success. I think if they can still see that this is a technology that can work reasonably well when you put in place the safeguards, I think we would have come a long way.