Oral Answer · 2018-02-06 · 13th Parliament
Introduction of Regulations with Advent of Artificial Intelligence and Autonomous Machines
Members asked whether, with the advent of artificial intelligence and autonomous machines, new regulations would be enacted or existing laws reviewed to address ethics, morality, kill switches and liability. The Government's response emphasised a domain-specific regulatory strategy, citing as examples the safety testing and insurance requirements for autonomous vehicles in transport, and the oversight of algorithms in finance. The Government also said it would continue to build up the relevant technical capabilities in support of the Smart Nation initiative. The central issue is how to balance technological development with risk management so as to safeguard the public interest and safety.
Key points
- Domain-specific regulatory strategy
- Rigorous safety testing for autonomous vehicles
- Continued building of technical capabilities
"The regulatory approach would thus also have to be domain-specific."
Participants (3)
- Janil Puthucheary (Senior Minister of State for Communications and Information and Education)
- Patrick Tay Teck Guan
- Tan Wu Meng
English original
SPRS Hansard original record · Retrieved 2026-05-02
7 Mr Patrick Tay Teck Guan asked the Prime Minister with the advent and drive towards the use of artificial intelligence and autonomous machines, whether new laws and regulations will be promulgated or existing ones reviewed to ensure we address the issues of ethics, morality, provision of kill switches as well as liability.
8 Dr Tan Wu Meng asked the Prime Minister what policy principles and legal frameworks are being developed to update Singapore's governance and laws to address the issues arising from autonomous algorithms and devices.
The Senior Minister of State for Communications and Information and Education (Dr Janil Puthucheary) (for the Prime Minister) : Mr Speaker, may I take Question Nos 7 and 8 together, please?
Mr Speaker : Yes, please.
Dr Janil Puthucheary : Sir, as artificial intelligence (AI) applications become more pervasive, AI is being applied to different domains with very different types of risks specific to the domain concerned. The regulatory approach would thus also have to be domain-specific.
For example, in transportation, the Land Transport Authority (LTA) and the Traffic Police require autonomous vehicles (AVs) to first pass a rigorous safety test within a controlled environment before they can be tested on public roads at specific times and places. This progressive testing allows developers to advance the technology without exposing other road users to safety risks. Conceptually, this is similar to how a human driver has to be trained and tested before being given a driving licence. In addition, LTA requires mandatory motor insurance for AVs to cover third-party liability and property damage to protect the interests of all road users and property owners. Developers are also required to share key data with authorities to allow them to monitor the progress of the trials.
In the Government, we have developed machine learning algorithms that can detect and identify trading accounts suspected of syndicated activities. This is unlike AVs, where there are issues of physical safety. However, we still need to rigorously test the software before use, as well as maintain and update the AI engine continuously, by performing human checks on the results from time to time. For example, before any account is suspended, there will still be a human process to check through and ensure that the punitive action is justified.
In other sectors, the Infocommunications Media Development Authority (IMDA) is working with sectoral regulators to discuss issues of AI governance and study the policy and legal implications. Where we see the need to step in to protect the public’s interest, we will implement domain-specific safeguards.
AI is a key enabler of Smart Nation. To exploit its use and manage its risks well, we need people with a deep understanding of the technology. We intend to continue to raise such capabilities within the Government, industry and our research and development (R&D) institutions.
Mr Speaker: Mr Patrick Tay.
Mr Patrick Tay Teck Guan (West Coast) : I thank the Senior Minister of State for the answer. I would like to ask if the Government would consider doing an overall national detailed study on the potential impact of AI and technology to get the conversations going and also pave the way for the ethical developments of technology. This is because there are growing concerns on various issues, such as liability, privacy, consent, safety, security, diversity as well as transparency.
Dr Janil Puthucheary : Mr Speaker, the Member will be reassured to know that many of the sectoral-specific bodies, whether they are trade or professional associations, are already interested in this process and have begun such discussions either in terms of their public consultation or in collaboration internally with the Smart Nation Office and other offices.
This is likely to be an ongoing discussion and attempt to look at the issues that are thrown up by new forms of technology, such as AI. The issue is to make sure that we make this process fit into the existing domain's specific laws, operational issues and regulations. So, we have a multitude of issues going on.
The Smart Nation Office will continue to study the matter and, especially as new technology and technological opportunities become available, we will continue to engage with the various regulators and domain-specific partners.
Mr Speaker: Dr Tan Wu Meng.
Dr Tan Wu Meng (Jurong) : I thank the Senior Minister of State for his answers. I would like to ask whether these studies and frameworks will be forward-looking and include drawer plans and scenarios for technologies that may not have reached the market yet but which would have a significant disruptive impact. I have three illustrations for the Senior Minister of State's consideration.
Firstly, what happens when multiple autonomous algorithms interact? What is the impact for competition law if trading algorithms start to collude?
Secondly, for example, what are the implications for the criminal law and the idea of mens rea ‒ the idea of guilty intention ‒ when you have AI?
Thirdly, would there be a role for more aspirational principles of AI, such as studying, say, Isaac Asimov's Laws of Robotics?
Dr Janil Puthucheary : Mr Speaker, I thank the Member for the questions.
For the first question, indeed, we are continually studying various frameworks and trying to draw up drawer plans. But it is hypothetical and we will have to look at the technology as it emerges. There is no specific individual plan for AI. AI is a big basket of different types of technologies and opportunities. There is a plan or strategy for the investment around R&D. But in terms of the regulatory and legislative space, we will have to look at the capability and its impact on the various domains. So, we will continue to do so.
The example the hon Member brought up in the second question around multiple algorithms colluding, the regulatory framework would be no different from today, because those algorithms would have to have someone who had owned them, written them, supervised them and benefited from the collusion and criminal activity and so on and so forth. So, the existing legislation and laws would need to apply to the humans or the corporate entities that own and operate such AI.
This then leads us to the hon Member's second supplementary question, which is that criminal law for AI would be an issue if we had what is described as "strong AI" ‒ AI that is sentient, conscious ‒ that is, a general purpose AI where it is able to take its capabilities developed for one task and direct it to another possibly criminal task. We are a long way away from that. AI, in today's use, is what we call "weak AI" or task-specific AI. So, any criminal issue, we would have to ask the legal faculty around us, but I presume will be directed at the person who owns, operates or benefits from such a device or tool of technology.
The hon Member's third question was about the aspirational issues around AI and Asimov's Laws of Robotics. Indeed, we are aspirational around AI. We think it has a significant possibility of enabling both the Smart Nation vision as well as significant benefits for our economy and society over the next 20 years. As for Asimov's Laws, they are a fictional device used in science fiction, but nevertheless inspiring for how we should think about these types of problems. We are a long way away from the situation predicated in the Asimov literature which is where a strong AI is sentient, conscious and able to understand issues around harm, human beings, the rank order of relative issues between society and humanity and morality.
So, today, as far as Asimov's Laws are concerned, we have no need for them because we do not have sentient, strong AI. Neither have we the technical capability to programme in such laws today. But they are a useful thought experiment about how we should think about the regulations around this space.
Of note, the literary device of Asimov's Laws in his writing was often used as a way to describe what happens when they fail and what happens when poor regulations or poorly thought-out laws fail in terms of the opportunities around robotics and AI. So, in much of the literature, the day was rescued by Dr Susan Calvin, who was a brilliant engineer. I think the salient lesson is that if we can recruit more females, more young women into engineering, it would serve our Smart Nation and AI vision far more than the use of his three laws today. [ Applause. ]
Mr Speaker : Coming back to non-fictional issues, Mr Zainal Sapari.