MDDI 演讲稿 · 2024-10-16

Janil Puthucheary 高级政务部长在新加坡国际网络周 AI 高级别讨论会上的主旨演讲

Keynote Address by SMS Janil Puthucheary at the SICW High Level Panel on AI

Janil Puthucheary · MDDI 高级政务部长 · 新加坡国际网络周 AI 高级别讨论会

要点

  • 核心问题:「AI 能否被做得安全?」自 2022 年 ChatGPT 爆发以来,这个问题对政府、产业、学界的工作,对作为用户的我们,以及对我们采用 AI 的信任都至关重要。
  • 新加坡的国际足迹:2023 年参与英国主办的 AI 安全峰会;与英国 NCSC、美国 CISA 联署《Guidelines for Secure AI System Development》;2024 年发布生成式 AI 治理框架(9 个维度)。
  • 「信任」是 Smart Nation 2.0 的核心原则。AI 既要面对经典网络安全风险(开源组件后门、模型操纵、对支撑软件的攻击),也要面对 AI 独有的新风险(数据抽取)。
  • CSA 发布首版《保护 AI 指南与配套指南》:指南列出长期适用的核心原则;配套指南是社群协作的产物(非规定式,而是实务参考)。
  • CSA × Resaro(新加坡 AI 保障公司)联合发表论文——探讨「AI 安全到底是什么」以及各方角色。
  • 7–9 月新加坡承办「Global Challenge for Safe and Secure LLMs」:300 多名国际参赛者、100 多支队伍,来自中国、德国、日本、马来西亚、新加坡、美国等地。

完整译文(中文)

MDDI 英文原文译文 · 翻译日期:2026-05-03

本文已从早期版本的网站迁移过来——格式可能有不一致之处。

数码发展及新闻部高级政务部长 Janil Puthucheary 博士在「新加坡国际网络周」(SICW)AI 高级别讨论会上的主旨演讲(2024 年 10 月 16 日)

AI 能否被做得安全?

各位阁下、

各位嘉宾、

各位女士、先生:

早安。

1. 我很高兴出席这场关于人工智能的「高级别讨论会」。

AI 安全是一项国际性关切

2. 「AI 能否被做得安全?」

a. 这是我们许多人一直在思考的问题,尤其是自 2022 年 ChatGPT 进入大众视野以来。它关系到政府、产业与学界的工作,关系到作为用户的我们,也关系到我们对采用 AI 的信任。

b. 我们关注的是,AI 能否被做得安全、可靠,并成为一股向善的力量。

3. 过去两年,许多伙伴与朋友主办了关于这一话题的国际讨论。新加坡是其中的积极参与者,并以此推进我们在 AI 治理上的既有工作。

a. 2023 年,我们参加了由英国主办的 AI 安全峰会。这是跨国 AI 安全对话中的重要里程碑。

b. 去年 11 月,新加坡也受邀联署由英国国家网络安全中心(NCSC)与美国网络安全与基础设施安全局(CISA)制定的《Guidelines for Secure AI System Development》。这份文件勾勒了系统持有者在 AI 相关决策及其 AI 安全框架上应遵循的原则。

c. 今年,在 AI 安全方面,经过国际咨询,新加坡推出了首份面向生成式 AI 的《Model AI Governance Framework》。这是首个针对生成式 AI 治理的全面框架,包含 9 个维度,以确保这些模型被整体地看待与回应。

对 AI 的信任会促成采用

4. 在回应新兴技术时,这是我们必须进行的关键对话。云计算、人工智能、量子计算等技术有望为我们的产业与经济带来重大收益。但我们必须对风险及其管理方式保持清醒,而不是以艰难方式学到教训、事后追赶,在危机与事故中为发现的漏洞打补丁。

5. 这就是为什么我们把「信任」作为核心原则,作为新加坡「Smart Nation 2.0」计划的核心部分。所有用户,无论是大型组织还是个人,都必须能够信任:技术(包括新兴技术)将安全可靠,他们的安全与福祉会得到保障。

6. 这会给人信心,去尝试新用例、在下一波增长中部署新技术,并汇入我们 Smart Nation 2.0 计划中更宏大的目标:我们能为社区做什么,能为社会与机遇带来怎样的成长。

7. 因此,这不仅关乎增长与生产力(两者都是必要的成果),也关乎把 AI 落地得好。我们知道,在医疗等特定领域,必须对 AI 的安全与安保采取更高标准,以应对关键风险。

a. 我们必须保护 AI 系统不受恶意网络攻击。我们知道,威胁行为者能在开源 AI 组件中插入后门,最终模型可能被操纵或扰乱,威胁行为者也可能对支撑 AI 的软件发起经典攻击。旧的风险并未消失,只是叠加上了新风险,所以这些系统都需要更新与打补丁。

b. 我们必须保护 AI 模型,防范数据抽取的企图。所有这些都是在 AI 方案上加强长期信任的必要努力。

8. 这要求服务提供者、产业玩家与公共部门技术伙伴之间紧密合作,例如新加坡的医疗科技机构 Synapxe 与政府科技局(GovTech)。

9. AI 也在许多行业、乃至整个生态中成长。因此,除了行业特定的风险,我们还面临系统性风险。这些已上升到我们思考的前沿,是必须共同应对的国际挑战。

a. 一旦关键 AI 基础设施的重要部分受到干扰,许多公司可能失去对模型、工具与服务的访问。

b. 如果用户围绕 AI 构建了业务模式与流程,而 AI 方案被破坏,他们也将难以继续其活动。修复与恢复这些服务可能需要一些时间,取决于已就位的安全与韧性措施。

c. 而当这种情况发生时,它会影响许多人,无论他们居住在何处、AI 模型部署在何处。

10. 这就是为什么把 AI 做得安全、可信必须成为我们的优先事项。如果我们必须持续担心这些风险,任何人都很难采用 AI。

11. 这些是 AI 采用的必要条件,我们必须采取务实步骤,为「信任」打下基础。

我们在 AI 安全上的进展

12. 在描绘了「半空的玻璃杯」之后,其实我们也看到了不少进展。AI 正在全球范围内以越来越快的速度被采用,这是「半满的玻璃杯」。在一些国家,我们也看到 AI 在关键基础设施中被采用。

13. 我们知道,这种采用会放大许多经典的网络安全风险,可能影响 AI 的机密性、完整性与可用性。此外,还有 AI 模型与系统所独有的新风险,例如未经授权的数据抽取。

14. 但我们并不是从零开始。相较于经典网络安全,AI 安全还很年轻,但已有许多既有努力,帮助开发者把合适的护栏放置到位、保护他们的模型与系统。我所提到的许多对话,都是这一过程的一部分。

a. 类似的例子还有:美国国家标准与技术研究院(NIST)发布了《AI 风险管理框架》,帮助用户管理潜在的 AI 风险。

b. 韩国科技部也宣布了让 AI 更安全、更可信的计划,即《实现可信 AI 战略》(Strategy to Realise Trustworthy AI)。

15. 昨天,在 SICW 开幕仪式上,国务资政张志贤(Teo Chee Hean)宣布,新加坡网络安全局(CSA)将发布首版《保护 AI 指南与配套指南》(Guidelines and Companion Guide for Securing AI)。

a. 这些指南列出了系统持有者应遵循的关键且长期适用的原则,指引其 AI 安全方法,包括如何落实安全控制与最佳实践。

b. 配套指南是一项「社群努力」,为系统持有者提供实操措施与控制项。它并非规定式文件,而是支持系统持有者在这一新兴领域中前行的资源。

c. 我要感谢我们的国际伙伴、产业玩家与专业人士提出的意见。我们就最初的草案收到了正面反馈,以及改进建议。

d. 我们努力回应了这些反馈,并把这些文件作为「社群主导的资源」推出。我们希望继续协作,让 AI 在实践中更安全。

16. 我也很高兴宣布,CSA 与新加坡 AI 保障领域的公司 Resaro 合作,共同撰写了一份关于 AI 安全风险的论文。这份论文探讨了「AI 的安全意味着什么」,以及各方利益相关者应在这一领域中扮演的角色。指南、配套指南与这份讨论论文的链接,都印在各位座位上的卡片上。论文与配套指南均可在线获取。

17. 我们也在发展本地的网络安全专业社群,以发现保护 AI 的新技术。

a. 例如,7 月到 9 月,新加坡承办了「Global Challenge for Safe and Secure Large Language Models」(安全可靠大语言模型全球挑战赛)。我自豪地分享,这一挑战赛吸引了 300 多名国际参赛者,开发稳健的安全措施与创新方法,以缓解针对 LLM 的越狱(jailbreaking)攻击,让 LLM 更安全。

b. 此次挑战赛共有 100 多支队伍参加,包括来自中国、德国、日本、马来西亚、新加坡与美国的团队。这反映了应对 AI 挑战的全球努力。

c. 我们的优胜队伍今天就在现场。请与我一起祝贺他们,并感谢他们为让 AI 更安全所作的努力。

d. 接下来将进行专题讨论;讨论之后,获奖者将领取奖项。请各位留下来祝贺、支持并鼓励他们:既为他们已完成的出色工作,也鼓励他们继续做下去。

促进 AI 上的「公私对话」

18. 今天的专题讨论,是我们作为政府官员与产业专业人士、研究者进行开放对话、共同探索如何让 AI 更安全的重要机会。我也期待各位嘉宾就我们应优先推进的关键措施,以及迄今最有效的做法分享意见。

19. 更重要的是,我期待这场对话能讨论:各利益相关方应如何继续协作,改善政府机构、供应商、系统持有者、产业玩家与用户之间的关系,以守护 AI 的开发、部署与使用。

a. 人人都与建立对 AI 的信任休戚相关。我们在同一条船上,正处于 AI 开发、部署与采用的关键时期。我们将跨行业、跨司法辖区协作,尽早回应这些议题。

20. 感谢各位今天邀请我。祝大家会议富有成效、网络周愉快。

英文原文

MDDI 官网原始记录 · 抓取日期:2026-05-02

This article has been migrated from an earlier version of the site and may display formatting inconsistencies.

Keynote Address by SMS Janil Puthucheary at the Singapore International Cyber Week (SICW) High-Level Panel on AI on 16 Oct 2024

CAN AI BE SECURE?

Your excellencies,

Distinguished guests,

Ladies and gentlemen,

Good morning.

1. It is my pleasure to be here with you at this High-Level Panel on Artificial Intelligence.

AI Security is an International Concern

2. “Can AI be Secure?”

a. This is a question many of us have grappled with, more so since the explosion of ChatGPT into our consciousness in 2022, but it’s been relevant to our work in the government, industry, academia, and it is relevant to us as users, and relevant in our trust in the adoption of AI.

b. We are concerned with whether AI can be made safe, can be made secure, and be a force for good.

3. Many of our partners and friends have hosted international discussions on this topic in the past two years. Singapore has been an active participant in this space, building on our existing work in AI governance.

a. In 2023, we joined our counterparts at the AI Safety Summit, hosted by the UK. This was an important milestone in dialogues on AI safety and security, across borders.

b. In November of last year, Singapore was also invited to co-seal the “Guidelines for Secure AI System Development”, developed by the UK’s National Cyber Security Centre, or the NCSC, and the US’s Cybersecurity and Infrastructure Security Agency. This document outlines principles that system owners should use to guide their decision-making about AI, and their frameworks for AI safety.

c. This year, for AI safety, Singapore has launched a Model AI Governance Framework for Generative AI, following an international consultation process. This is the first comprehensive framework for the governance of Generative AI. It has nine dimensions to ensure that these models are seen and addressed in totality.

Trust in AI will Enable Adoption

4. In addressing emerging technologies, these are the sorts of critical conversations we need to have. Technologies like cloud computing, Artificial Intelligence, and quantum computing promise significant benefits to our industries and economies. However, we must be clear-eyed about the risks, and how we will manage them, rather than learn the lessons the hard way, subsequently try to play catch up, and patch the vulnerabilities that we discover through crisis and something going wrong.

5. This is why we have made trust a core principle, a core part of Singapore’s plan for Smart Nation 2.0. All users – large organisations and individuals – must be able to trust that technology, including emerging technology, will be secure and reliable, that their safety and well-being are assured.

6. This provides the confidence to try new use cases and deploy new technologies in the next bound of growth, and this feeds into some of our larger aims in developing this Smart Nation 2.0 plan—what we can do for our communities, and what we can do to grow our society and our opportunities.

7. So, this is not just about growth and productivity—those are necessary outcomes, but this is also about implementing AI well. We know that we must adopt a higher standard for safety and security of AI in certain domains, such as healthcare, to address key risks.

a. We must protect our AI systems against malicious cyberattacks. We know threat actors can insert backdoors into open-source AI components, final models can be manipulated or disrupted. Threat actors could also mount classical attacks on the software supporting AI. The old risks haven’t gone away. They’ve just been added to, so these systems all need to be updated. They need to be patched.

b. We must protect AI models against attempts to extract data. And all these are necessary efforts to strengthen long-term trust in AI-based solutions.

8. This requires close partnership between service providers, industry players, and public sector technology partners, such as what we have in Singapore, Synapxe, our Healthcare Technology Agency and the Government Technology Agency of Singapore.

9. AI has also grown in many sectors, and across our entire ecosystem. Therefore, there are not just the sector-specific risks, we also face systemic risks. These have come to the fore in our thinking, and this is really an international challenge that we have to tackle together.

a. If there are disruptions to key parts of our critical AI infrastructure, many companies can lose access to their models, tools, and services.

b. And users will find it difficult to continue with their activities, if they build their business model, process model around AI and the AI solutions have been corrupted. Efforts to repair and restore these services could take some time, depending on the security and resilience measures that are in place.

c. And as this happens, it affects many people, regardless of where they reside and where the AI models have been deployed.

10. This is why it must be a priority for us to make AI secure, and trustworthy. If we had to worry constantly about such risks, it would be difficult for any of us to adopt AI.

11. These are the necessary conditions for AI adoption, and we need to take practical steps to provide a base of trust.

Our Progress in AI Security

12. Having painted the “glass half-empty” picture, actually we are seeing quite a bit of progress. There is a lot of AI adoption across the globe at an increasing pace. This is now the “glass half-full”. In some countries, we are seeing AI adoption in critical infrastructure.

13. Now we know that this adoption increases many classical cybersecurity risks. This can affect the confidentiality, integrity, and availability of AI. There are also new risks unique to AI models and systems, such as unauthorised attempts to extract data.

14. But we are not starting from zero. AI security is relatively nascent compared to classical cybersecurity, but there are many existing efforts to help developers put the right guardrails in place, and to secure their models and their systems. The many conversations that I have highlighted are part of this process.

a. The examples continue: the US National Institute of Standards and Technology has released an AI Risk Management Framework. This will help users to manage potential AI risks.

b. The Ministry of Science and Technology in South Korea has also announced its plans to make AI more safe, secure, and trustworthy, in its “Strategy to Realise Trustworthy AI”.

15. Yesterday, at the SICW Opening Ceremony, Senior Minister Teo Chee Hean announced that the Cybersecurity Agency of Singapore is publishing the first edition of our Guidelines and Companion Guide for Securing AI.

a. These guidelines set out key, evergreen principles that system owners should use to guide their approach to security of AI, including how they implement security controls and best practices.

b. The companion guide is a community effort to provide system owners with practical measures, and controls. This is not meant to be prescriptive, but is a resource to support system owners to navigate this nascent space.

c. I would like to thank our international partners, industry players, and professionals for their comments. We received positive feedback on our initial draft, along with suggestions on how to improve it.

d. We have taken effort to address the feedback, and have put the documents out as a community-led resource. We hope to continue working together to make AI more secure in practice.

16. I am also pleased to announce that CSA has worked with Resaro, a Singaporean company in the AI assurance space, to co-author a paper on AI security risks. This paper explores what security of AI means, and discusses the role that all stakeholders should play in this space. The Guidelines, the Companion Guide, and this discussion paper are all linked on the card that you may have seen on your seats. The paper and the companion guide are available online.

17. We are also developing our local community of cybersecurity professionals, to discover new techniques for securing AI.

a. For example, from July to September, Singapore hosted a Global Challenge for Safe and Secure Large Language Models, or LLMs. I am proud to share that this challenge saw more than 300 international participants developing robust security measures and innovative approaches to mitigate jailbreaking attacks on LLMs, and to make LLMs more secure.

b. We had more than 100 teams in this challenge, including groups from China, Germany, Japan, Malaysia, Singapore, and the United States. This is a reflection of the global effort to tackle the challenges of AI.

c. Our top teams are in the audience with us today. Please join me in congratulating them and thanking them for their effort to make AI more secure.

d. We will have the panel discussion after this and after the panel discussion, the winners will receive their prizes. Please stay on and congratulate them, support them and encourage them for the great work that they have done, which we will encourage them to keep on doing.

Facilitating a Public-Private Conversation on AI

18. The panel today is an important opportunity for us as government officials to have an open dialogue with industry professionals, together with researchers, and explore how AI can be made more secure. I look forward to our panellists’ comments on the key measures we should prioritise, and what has been most effective so far.

19. More importantly, I look forward to how the dialogue can discuss how stakeholders should continue to work together and improve their relationships across government agencies, vendors, system owners, industry players, and users, to safeguard the development, deployment and the use of AI.

a. Everyone has a stake in building trust in AI. We are in this together and we are at a critical period for the development, deployment, and adoption of AI. We will work together to address these issues early – across sectors, and across jurisdictions.

20. Thank you for inviting me to be with you today. I wish you all a fruitful session, and a wonderful Cyber Week.