Oral Answer · 2026-02-27 · 15th Parliament

Risks and Regulation of AI Chatbots for Youth Mental Health Counselling

Use of AI Chatbots for Counselling and Mental Health Support by Teenagers and Young Adults

AI Ethics & Safety · AI Governance & Regulation · AI & Healthcare · Contentiousness 2 · Mild questioning

MP Dr Charlene Chen asked the Government how it monitors the growing use of AI chatbots for counselling by teenagers, and what measures protect vulnerable users. Senior Minister of State for Health Dr Koh Poh Koon replied that AI chatbots are now so ubiquitous that tracking their use for mental health support is no longer practical. He stated plainly that generative AI chatbots are not a suitable replacement for qualified mental health care providers, as they risk misinformation and inappropriate responses that may cause harm. Young people nonetheless turn to these tools for their anonymity and 24/7 availability. The Government's strategy is to promote legitimate alternatives (such as national mindline 1771, mindline.sg and CHAT) and, under the Code of Practice for Online Safety, to require app stores to implement age assurance measures by end-March 2026.

Key Points

  • AI chatbots are now ubiquitous; tracking their use is impractical
  • Generative AI is not a suitable replacement for qualified mental health services
  • Young people use these tools for their anonymity and 24/7 availability
  • Promote legitimate alternatives such as mindline
  • App stores must implement age assurance measures

Government Position

Regulation by prohibition is impractical; promote legitimate alternatives and education instead

Policy Signal

A guide-rather-than-ban approach to AI mental health applications

Participants (2)

English Original

SPRS Hansard original record · Retrieved: 2026-05-02

2 Dr Charlene Chen asked the Coordinating Minister for Social Policies and Minister for Health, in view of the increasing use of artificial intelligence (AI) chatbots for counselling and mental health support by teenagers and young adults, (a) how the Ministry is monitoring this trend; (b) what measures are in place to guide users towards qualified mental health services where appropriate; and (c) what safeguards are being considered to protect vulnerable users.

The Senior Minister of State for Health (Dr Koh Poh Koon) (for the Coordinating Minister for Social Policies and Minister for Health): Sir, artificial intelligence (AI) chatbots have become so ubiquitous that it is no longer practical to track their use for counselling or mental health support.

In general, it is not appropriate to use generative AI (GenAI) chatbots as a replacement for a qualified mental health care provider. AI chatbots are not designed to address mental health issues or provide treatment for mental health conditions and risk providing misinformation or inappropriate responses when dealing with serious mental health crises, and may cause harm instead.

But fundamentally, young people and many patients with mental health issues sometimes seek out these online chatbots because of the anonymity they offer and because they are easily available and accessible, 24/7. Our approach is to encourage individuals seeking qualified mental health services to approach our First Stop for Mental Health services, such as national mindline 1771, mindline.sg, Community Outreach Teams and CHAT. We put forth these resources so that they become legitimate alternatives: those seeking the same advantages of anonymity and easy accessibility can go to a source we know is legitimate for the same kind of services, and they can get proper onward referrals to the care that they need beyond CHAT.

These resources, which we put online, are also more contextualised to our local needs. I believe that is a distinct advantage. But what we need to do is a lot more education: make these resources more available and raise public awareness of them, so that people go to these legitimate resources rather than rely on whatever online chatbots they can find.

While enforcement is not practical, there are safeguards in place to protect younger users online. Under the Code of Practice for Online Safety – App Distribution Services, users’ exposure to harmful content on these services must be minimised. Designated app stores are also required to implement age assurance measures by 31 March this year. Digital content developers are also expected to comply with the Model AI Governance Framework for Generative AI to ensure responsible development and application of AI for youths and children.

Additionally, the Infocomm Media Development Authority's (IMDA's) Digital Skills for Life framework includes content on how to use GenAI and manage its potential risks. Individuals can learn at their own pace through the available resources.

Mr Speaker: Dr Chen.

Dr Charlene Chen (Tampines): I thank the Senior Minister of State for his responses. I just have three supplementary questions. First, given that Large Language Models (LLMs) may be trained on data and used to help people understand data better, how does the Ministry assess potential data privacy risks, and are safeguards or informed consent standards being considered to protect vulnerable users?

The second is: given that AI systems may overly affirm users' views, which may actually reinforce harmful thinking, and that over-reliance may reduce actual help-seeking behaviours, has the Ministry assessed these risks?

And lastly, I am glad that the Senior Minister of State mentioned cross-cultural differences. Is the Ministry willing to support studies to understand how AI counselling tools can be better used and what their impact will be on mental health outcomes in Singapore?

Dr Koh Poh Koon: Sir, I thank the Member for her pertinent and thoughtful questions on this very important issue, one that increasingly concerns many members of the public, especially young people.

On her first question, on how we assess data privacy and whether informed consent is needed: in the first instance, many of the resources we put online, such as mindline.sg, work on the basis of anonymity. You cannot really obtain informed consent when the service is meant to be anonymous, so that the person seeking help does not have to worry about his or her data being exposed. The assurance we want to give is that, first of all, anonymity already ensures that nothing divulged to the counsellor online or to the chatbot can be traced back to the individual. That is the first thing. We also want to make sure that the barrier to accessing this care is as low as possible.

On the second question, on whether these chatbots overly affirm users' views, especially those who may have suicidal ideation, and whether that will reinforce the person into taking action: let me give a little more insight into what we use in mindline.sg, one of the First Stops for Mental Health services. It is a digital platform that uses a chatbot based on Wysa, a specialised, AI-enabled mental health chatbot. The chatbot guides users through digital therapeutic exercises, such as mindfulness, deep-breathing techniques and sleep hygiene practices, inspired by cognitive behavioural therapy principles. It aims to supplement our existing professional counselling and therapy services by serving as a 24/7 'pocket therapist' that removes barriers to help-seeking and signposts help-seekers to local human-based resources.

But just to reassure the Member: unlike GenAI chatbots, Wysa is designed to deliver such digital therapeutic exercises via a rule-based model. It is not the kind of chatbot that can be extremely creative and come up with new suggestions. Because it is rule-based, the conversation follows a structured decision tree, which is developed and continuously validated by clinicians. The Wysa chatbot has been clinically evaluated for its efficacy, safety and impact.
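To make the rule-based design concrete, here is a minimal sketch in Python of a decision-tree chatbot of the kind described: every prompt is authored in advance and the bot only selects among scripted branches, so it cannot improvise advice the way a generative model can. This is a hypothetical illustration, not Wysa's actual implementation; the node names, keywords and responses are invented.

```python
# Minimal sketch of a rule-based, decision-tree chatbot.
# Hypothetical illustration only: node names, keywords and responses are
# invented and do NOT reflect Wysa's actual implementation.
from dataclasses import dataclass, field


@dataclass
class Node:
    """One step in the structured decision tree."""
    prompt: str                                              # scripted message
    branches: dict[str, str] = field(default_factory=dict)   # keyword -> next node id


# A toy tree: every reply is authored in advance (in a real system, by
# clinicians), so the bot can never generate new advice on its own.
TREE: dict[str, Node] = {
    "start": Node(
        "How are you feeling today? (anxious / sleepless / unsafe)",
        {"anxious": "breathing", "sleepless": "sleep", "unsafe": "escalate"},
    ),
    "breathing": Node(
        "Let's try a deep-breathing exercise: breathe in for 4 counts, "
        "hold for 4, breathe out for 4. Type 'done' when finished.",
        {"done": "start"},
    ),
    "sleep": Node(
        "Some sleep-hygiene tips: keep a fixed bedtime and avoid screens "
        "for an hour before bed. Type 'done' to continue.",
        {"done": "start"},
    ),
    "escalate": Node(
        # Signposting to a human resource, as described in the answer.
        "Please reach a trained counsellor now: call national mindline 1771. "
        "You are not alone.",
    ),
}


def reply(node_id: str, user_text: str) -> str:
    """Follow the decision tree: match a keyword, else repeat the same prompt."""
    node = TREE[node_id]
    for keyword, next_id in node.branches.items():
        if keyword in user_text.lower():
            return TREE[next_id].prompt
    return node.prompt  # no match: re-ask the scripted prompt, never improvise


if __name__ == "__main__":
    print(reply("start", "I feel anxious"))  # -> deep-breathing exercise prompt
    print(reply("start", "I feel unsafe"))   # -> signpost to national mindline 1771
```

The design property that matters is the closed set of responses: because every path through the tree is enumerable, reviewers can exhaustively check each one, including the escalation branch that signposts users to a human service, which is not possible for an open-ended generative model.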

So, I hope this gives the Member assurance that the resources we make available are legitimate and that their risks are managed. We will continue to see how we can improve such resources.

On the third question, on whether we will support studies to measure the outcomes: as I said, these resources were started just about a year or so ago. We will see over time how we can collect data and then analyse the impact of these interventions we have put in the public domain, so that we have better insight into how to enhance them. [ Please refer to "Clarification by Senior Minister of State for Health", Official Report, 27 February 2026, Vol 96, Issue 21, Correction By Written Statement section. ]