Written Reply · 2024-08-07 · 14th Parliament

Deepfake Detection and Delineating Political Satire

Accuracy of Deepfake Detection Technologies and Differentiating Between Harmful Deepfakes and Legitimate Political Satire or Memes

AI Safety & Ethics · AI Economy & Industry · AI & National Security · AI Infrastructure & Research · Contention Level 3 · Substantive Debate

A Member of Parliament asked the Government about the current accuracy rate of its deepfake detection technologies, how it distinguishes harmful deepfakes from legitimate political satire, and how misidentifications are handled. The Government replied that its tools are continually updated and their accuracy rates are not published, and stressed that harmful content is dealt with under the Protection from Online Falsehoods and Manipulation Act (POFMA); satire does not in itself constitute a violation. The Government is monitoring international experience and studying whether further safeguards are needed for election security. The core tension is the balance between technological transparency and freedom of expression.

Key Points

  • Deepfake detection technologies are continually updated
  • Satire is not automatically unlawful
  • Misidentifications can be appealed in court

Government Position

Tools kept confidential; falsehoods addressed under the law

Questioner's Position

Concern over detection accuracy and the risk of misidentification

Policy Signal

Strengthening regulation of AI-generated falsehoods

"We do not publish their accuracy levels as our tools are constantly being updated to keep up with technology."

Participants (2)


English Original

SPRS Hansard original record · Retrieved 2026-05-02

29 Ms He Ting Ru asked the Minister for Digital Development and Information (a) what is the current accuracy rate of the Government’s deepfake detection technologies for AI-generated content; (b) how will the Government differentiate between harmful deepfakes and legitimate political satire or memes using similar technologies; and (c) what happens if videos are wrongly identified as deepfakes.

Mrs Josephine Teo: There are a variety of tools and techniques available to the Government to detect, identify and assess manipulated content, including artificial intelligence (AI)-generated content such as deepfakes. These may be sourced commercially, developed in-house or in partnership with researchers such as those at the Centre for Advanced Technologies in Online Safety. We do not publish their accuracy levels as our tools are constantly being updated to keep up with technology. It is also not in the public interest to reveal the full extent of capabilities as malicious actors may exploit it.

The Government can take action against online falsehoods when certain thresholds are met, including falsehoods generated with the help of AI. Action may be taken under the Protection from Online Falsehoods and Manipulation Act (POFMA) if such content is false and against the public interest. Satire or parody do not by themselves meet the criteria for POFMA action, unless they contain falsehoods that harm public interest. Individuals who disagree with POFMA directions issued to them, including those for deepfake content, can file an appeal in court.

Many countries have recognised the need to mitigate the harms and risks from AI use and application, including the malicious use of deepfakes. Some countries have already put in place safeguards, especially during elections, in order to protect the integrity of the electoral process. We are studying if further safeguards are required and will provide an update when ready.