MDDI Speech · 2024-07-03
Opening Address by Senior Minister of State Janil Puthucheary at the AiSP AI Security Summit
Key Points
- AI introduces new security risks: adversarial machine learning (MIT researchers tricked an AI into classifying a 3D-printed turtle as a rifle; McAfee researchers fooled Tesla's early Mobileye system by altering speed limit signs), alongside classic threats (Microsoft researchers accidentally exposed 38 TB of data containing private keys and 30,000+ Teams messages; ChatGPT was manipulated into reproducing training data containing sensitive information).
- In Nov 2023, Singapore's CSA co-sealed the "Guidelines for Secure AI System Development" with the UK's NCSC and the US's CISA; this month CSA will open public consultation on its "Technical Guidelines for Securing AI Systems".
- "Security of AI" goes hand in hand with "AI for cybersecurity": dark AI (e.g. WormGPT) can produce sophisticated malware, personalised phishing emails, and deepfakes; equally, defenders can use AI for anomaly detection and autonomous response.
- Deep Instinct, Jun 2024: 97% of surveyed US cybersecurity experts were concerned that their organisations would suffer a security incident caused by the malicious use of AI.
- Today AiSP launches its AI Special Interest Group (AI SIG), a platform for members to discuss AI developments and share insights.
Original English Text
Original record from the MDDI website · retrieved 2026-05-02
This article has been migrated from an earlier version of the site and may display formatting inconsistencies.
Assistant Secretary Mr. Tam Huynh,
AiSP Members,
Ladies and gentlemen,
Good morning. I am happy to be joining you today for AiSP’s first AI Security Summit.
Over the past couple of years, AI has proliferated rapidly and been deployed in a wide variety of spaces. This has significantly impacted the threat landscape. We know that this rapid development and adoption of AI has exposed us to many new risks.
This includes adversarial machine learning, which allows attackers to compromise the function of the model.
a. A well-known example is how researchers at MIT were able to trick an AI into thinking that a 3D-printed turtle was a rifle, even when the turtle was viewed from different angles.
b. Researchers at McAfee were also able to compromise Tesla’s former AI system, Mobileye, by making small changes to the speed limit signs that the AI had been taught to recognise.
c. This class of risks is relatively new and we need to do more to understand them better. Both public and private entities, including the Government Technology Agency of Singapore, have been developing capabilities to simulate such attacks on AI systems, to understand better how they can affect the security of AI. And by doing so, this will help us to put the right safeguards in place.
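To make the idea of adversarial machine learning concrete, here is a minimal sketch of an adversarial perturbation against a toy linear classifier. This is an illustration of the general technique, not the MIT or McAfee attack itself; the weights, input, and step size are invented for the example:

```python
import numpy as np

# Toy linear classifier: predicts 1 when w . x > 0, else 0.
# The weights and input below are invented for illustration.
w = np.array([0.5, -0.8, 0.3])
x = np.array([1.0, 0.2, 0.4])  # clean input, classified as 1

def classify(v):
    return 1 if np.dot(w, v) > 0 else 0

# FGSM-style perturbation: for a linear model, the gradient of the
# score with respect to the input is just w, so stepping against
# sign(w) pushes the score toward the other class.
eps = 1.2
x_adv = x - eps * np.sign(w)

print(classify(x), classify(x_adv))  # → 1 0
```

Real attacks on image classifiers compute this gradient through a deep network rather than a linear model, but the principle is the same: a small, targeted change to the input flips the model's decision.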
AI is also vulnerable to classic cyber threats, including those to data privacy. In particular, the widespread adoption of AI has led to a growth in the threat surface for data to be exposed, exfiltrated, or damaged.
a. Some of you may have heard how 38 terabytes of data were accidentally exposed by Microsoft AI researchers, who were trying to share an AI dataset. The files included private keys, passwords, and more than 30,000 messages sent by users, on Microsoft Teams.
b. In other types of attacks, such as persistent attacks, ChatGPT can also be manipulated to reproduce its training data, which may contain sensitive information like names, addresses, and phone numbers.
Incidents like this undermine public trust and confidence that AI models are safe, secure, and reliable.
a. Without trust, individuals and organisations might fear that tools will produce incorrect, inconsistent, or harmful output.
b. This, in turn, affects whether the industry can maximise the benefits of AI, and whether we can leverage the use of artificial intelligence to drive further growth in the digital economy and society in Singapore.
I am heartened to see that industry players, including AiSP and its partners, are leading discussions on how we can make AI more secure. We can all play a part in fostering a trusted AI environment that protects users and systems, while facilitating growth and innovation.
On the government front, the Cyber Security Agency of Singapore has been working with industry partners and foreign counterparts to develop clear guidelines that system owners should use, when making decisions on the approach to adopt AI.
a. For example, in Nov 2023, CSA co-sealed the “Guidelines for Secure AI System Development”, which was developed with the UK’s National Cyber Security Centre (NCSC) and US’s Cybersecurity and Infrastructure Security Agency (CISA).
I am pleased to announce that CSA will also be releasing its “Technical Guidelines for Securing AI Systems” for public consultation this month.
a. This set of voluntary guidelines is intended to complement existing resources on the security of AI, and provide practical measures that system owners in Singapore can use to address potential risks to systems and users.
b. We invite all members of the ecosystem, including Members of AiSP, and international partners, to provide their feedback on how we can improve the guidelines. We understand that AI is used in a wide range of contexts, and across multiple use-cases. We want to ensure that these guidelines are practical and useful.
c. CSA will release more details on the public consultation in the following weeks. Together, we can provide a useful reference for security professionals looking to enhance the security of their AI tools.
I hope that industry partners and professionals will continue to do their part to ensure that AI tools and systems are kept safe and secure against malicious threats, even as techniques evolve.
AI for Cybersecurity
In parallel to these efforts, many organisations are also thinking about how to secure themselves against attacks that are driven by the misuse of AI.
a. Many of us are concerned about how generative AI can be misused to generate convincing, personalised emails that trick our users into clicking phishing links and attachments. Threat actors can also create convincing deepfakes and trick our users into believing misinformation and disinformation.
b. Specific to cybersecurity, we have seen the use and the rise of dark AI like WormGPT, which shows that AI can be used to create sophisticated malware. These threats may be difficult for existing systems to detect.
The concern is international – for example, in Jun 2024, Deep Instinct reported that 97% of cybersecurity experts surveyed in the US were concerned that their organisations would suffer a security incident, caused by malicious use of AI.
It is natural that we are concerned about how AI can be misused. However, it is just as important for us to consider how AI can be a force for the good of the cybersecurity sector. Just as threat actors integrate this technology into their operations, defenders need to learn to master the benefits that AI can bring to their work.
Many of us have seen how AI can be a valuable force multiplier for security operations. If used properly, AI can help defenders identify risks at greater speed, scale, and precision, which can help us to address risks more quickly. This can help us to make our teams more efficient, and effective, as we defend against cyber threats.
Even for more sophisticated threats, AI can help to level the playing field. We have already seen an increase in the use of machine-learning algorithms in solutions to detect anomalies, or to mount an autonomous response to potential threats. I look forward to hearing how the industry can use AI to improve the range of cybersecurity tools we have today, and how this can help us to gain a decisive advantage.
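As a simple illustration of the kind of anomaly detection described above, the sketch below flags a statistical outlier in a hypothetical stream of failed-login counts. Production tooling uses richer features and learned models; the data, baseline window, and threshold here are invented for the example:

```python
import statistics

# Hypothetical hourly counts of failed logins on a service; the
# final value mimics a credential-stuffing burst.
counts = [3, 5, 4, 6, 2, 5, 4, 3, 6, 5, 4, 90]

# Baseline from the historical window (everything but the newest value).
mean = statistics.mean(counts[:-1])
stdev = statistics.stdev(counts[:-1])

def is_anomaly(value, threshold=3.0):
    """Flag values more than `threshold` standard deviations above baseline."""
    return (value - mean) / stdev > threshold

print(is_anomaly(counts[-1]))  # → True
```

Flagging the outlier automatically is what lets a system mount the autonomous response mentioned above, such as rate-limiting the source before an analyst has looked at the alert.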
Launch of AI Special Interest Group
AI is still an evolving, emerging technology, and we will continue to see a growth in the number of use cases in these next few years. At the same time, we will discover new risks, which will need to be managed. We will need to strike a careful balance between these two priorities, to ensure that we innovate in a safe domain.
And in doing so, our tech professionals need to stay up to date on how this technology develops and evolves – especially for those of us who work in trust, safety and cybersecurity.
a. This will help us to make good recommendations on how AI is adopted, and how we can manage the known risks.
b. It will also help us shape the wider conversation on how AI should be developed, in line with the principles of safety and security.
Today, AiSP will be launching its AI Special Interest Group.
a. This group will provide a platform for members to discuss AI developments, exchange key insights and experiences, and share their knowledge with the community.
b. Members can use this platform to discuss how the cybersecurity sector can continue to ensure the digital domain is trusted, and secure, while co-existing and co-developing with AI.
c. These will be critical topics as AI becomes an integral part of digital infrastructure.
Interested members can reach out to the AiSP Secretariat for details on how to join the SIG. I wish them the best in their future conversations.
Conclusion
We should all play our part in ensuring that AI can continue to be safe, and secure. This will affect the confidence of our organisations and users as they try to make full use of what AI can offer.
a. For those who need more guidance, CSA’s guidelines will be a useful place to start. Please keep an eye out for details on how the public can access the guidelines in the coming weeks, and how to provide your feedback to CSA.
At the same time, we can be aware of the opportunities that AI can bring for cybersecurity. We can keep ourselves abreast of developments in this space, and advocate for the adoption of AI tools that prove to be effective against our adversaries.
We can also maintain a network of experts and peers that we can consult, especially as the threat landscape develops. AiSP members can start with the AI SIG, which will be chaired by Mr. Tam Huynh.
I wish you a series of fruitful discussions at the conference today. Thank you very much.