Oral Answer · 2023-05-09 · 14th Parliament
Ensuring Development and Maintenance of Ethical Artificial Intelligence Standards
The question concerns how to promote the development of ethical AI aligned with human values and how ethical standards are maintained. The Government's reply outlines Singapore's existing AI governance framework, self-testing toolkit, and guidance on the use of personal data, emphasising accountability and data security. The questioner raises the attribution of responsibility and AI identity markers; the central issue is how accountability and transparency are to be put into practice.
Key Points
- Promote responsible AI development
- Introduce AI governance frameworks
- Emphasise accountability and transparency
"Singapore supports the responsible development and deployment of artificial intelligence (AI), so that its benefits may be enjoyed in a trusted and safe manner."
Participants (3)
- Janil Puthucheary (Senior Minister of State for Communications and Information)
- Tan Wu Meng
English Original
SPRS Hansard original record · Retrieved: 2026-05-02
16 Dr Tan Wu Meng asked the Minister for Communications and Information, considering the advances in artificial intelligence (AI) with platforms such as ChatGPT and the GPT-4 architecture, what attention is being given to promoting and ensuring (i) the development of ethical AI which responds in a manner consistent with human values and (ii) the maintenance of ethical standards in the development, dataset training and deployment of AI.
The Senior Minister of State for Communications and Information (Dr Janil Puthucheary) (for the Minister for Communications and Information) : Madam, Singapore supports the responsible development and deployment of artificial intelligence (AI), so that its benefits may be enjoyed in a trusted and safe manner. Our approach was explained at the 19 April 2023 Sitting.
Among other measures, we introduced the Model AI Governance Framework and AI Verify, a self-testing toolkit to demonstrate responsible deployment of AI. Later this year, we plan to issue the Advisory Guidelines on the Use of Personal Data in AI Systems under the Personal Data Protection Act. We regularly engage industry and international counterparts, such as through our Advisory Council on Ethical Use of AI and Data and the Global Partnership on Artificial Intelligence, to keep abreast of developments.
Where necessary and useful, we will update our measures to take into account the impact of developments like Chat Generative Pre-trained Transformer (ChatGPT) and GPT-4. For example, the Public Service has introduced guidelines for public officers using similar technologies to draft documents. These guidelines make clear that public officers are accountable for their work and are responsible for fact-checking and vetting AI-generated content. The guidelines also aim to safeguard data security by reminding officers not to input sensitive information into these applications.
Mdm Deputy Speaker : Dr Tan Wu Meng.
Dr Tan Wu Meng (Jurong) : Deputy Speaker, I thank the Senior Minister of State for his answer. I have got two supplementary questions. The Senior Minister of State, in his example of the Public Service gaining experience with the use of ChatGPT and GPT-4, essentially alluded to the idea of a responsible person when AI is being deployed for a purpose. Can I first ask the Senior Minister of State whether this approach will be encouraged in other settings, including in the private sector? So that, for example, if a firm uses AI to make decisions on human resources, there can be appropriate accountability if the AI results in discriminatory behaviour against job applicants.
Secondly, can I also ask the Senior Minister of State if agencies might wish to study the role of a proof of human marker for certain AI applications and platforms? This is so that Singaporeans and members of the public interacting with an online portal can know whether the chat is being generated by an AI or by a fellow human being.
Dr Janil Puthucheary : Madam, I thank the Member for the questions. The role of the private sector is important. We have significant representation from the private sector in our Advisory Council on the Ethical Use of AI. We hope through that platform, these best practices as well as the moves that the public sector is making can also influence the standards and approaches that occur in the private sector; and vice versa, as we learn best practices from the private sector for our role and mission in the public sector.
On his second question about the role of a proof of human marker, I think it is something worth studying. But I would caveat that it is not necessarily something that you may want to introduce as a general rule across all AI mechanisms. I think the Member would appreciate there are a fair number of tasks which can be very safely and reasonably automated without the need for demonstrating a human in the loop. In our day-to-day work and our interactions with our many devices, we know that this is the type of technology that is being deployed safely and to good effect.
12.30 pm
Mdm Deputy Speaker : We are out of Question Time. Order. End of Question Time. Introduction of Government Bills.