
How should public figures respond to a deepfake crisis?

Jessica Coacci
2025-11-04

AI scams can happen in the workplace, too.


In a world full of online impersonators, deepfakes, transaction scams, ChatGPT-written essays, and AI-generated images, more and more people are asking: “Is this real, or is it AI?” Image source: Pier Marco Tacca-Getty Images

In the age of AI, reality could be rewritten overnight. One day, you’re publicly supporting one of your favorite mayoral candidates—the next, an AI-generated quote states the opposite.

That situation happened to former New York City mayor Bill de Blasio. In the days leading up to the New York City mayoral race, a reporter from The Times of London emailed what he thought was de Blasio’s address, asking for his thoughts on the policies of Zohran Mamdani, the Democrat leading the race.

The response was unexpected: “In my view, the math doesn’t hold up under scrutiny, and the political hurdles are substantial.” After the story was picked up by other outlets and on social media, the real Bill de Blasio spoke up, saying it was entirely false and fabricated and did not reflect his views.

The person impersonating him admitted to using ChatGPT to compose a response criticizing Mamdani’s tax plans, saying they were unlikely to raise enough money to reach his goals.

The scenario, now resolved, raises another question: What do you do if you are catfished by AI, a deepfake, or someone online?

“We have a question here about how easy it might be, going forward, to fake a voice or fabricate a story and have a journalist or an editor be victimized that way—and the public be victimized,” the former New York City mayor tells Fortune.

For high-profile figures, the stakes of being cloned couldn’t be higher. I asked the real Bill de Blasio, who encountered the “surrealism” of being impersonated firsthand, how he handled the scenario and what steps he believes are crucial going forward in the age of AI.

Respond rapidly and confirm identity

De Blasio said that since no journalists had previously reached out about the incident, and he had no contacts at the publication, his best recourse at the time was to immediately respond to the post online, on X, saying it was false.

“Going online and demanding an apology and demanding it be taken down did have the effect of getting their attention,” he said.

Tools like OpenAI’s Sora and Google’s Veo 3 have made it easier to produce realistic AI-generated imagery and videos of things that never happened, including riots, crimes, political misinformation, false claims, and fraud. Though Sora videos feature a moving watermark identifying them as AI creations, some experts say it can be edited out with some effort.

“All you can do is go online and deny what it is,” de Blasio said. “If someone puts up something on me robbing a store—and I have not robbed the store—rapid response, immediately say that’s a fake to the world, rather than try and get someone to address it.”

AI scams could happen in the workplace, too

Deepfakes are an obvious threat to public figures, but they can have ramifications in the workplace, too.

“In the workplace, scams don’t always look like scams,” said Steve Lenderman, Head of Fraud Prevention at HCM platform isolved.

“Fraudsters often target HR, payroll or finance employees by pretending to be executives or coworkers and using AI-generated voices or lookalike emails to request urgent payments or employee information. In fraud prevention, curiosity isn’t paranoia—it’s protection,” Lenderman tells Fortune.

Lenderman’s advice: Act fast, and document everything. Screenshots, links, and messages will be useful when you report it to your employer or IT team. They can contain the damage, reset passwords, lock down affected accounts, and enable multifactor authentication. “The faster you act, the more likely you are to stop bad actors before they can cause serious harm. In these cases, transparency and speed are your best defenses,” he added.

The need for legal action

The experience of being impersonated led de Blasio to reflect on the need for stronger action around the safety risks of emerging technology. In 2023, he spoke at a Harvard conference on the lack of policy addressing AI regulation.

“The notion that somehow AI should be the exception to the rule and be the only technology that was ever not regulated is insane,” he said.

“If you portray someone committing a crime, that should be a crime—and no tech company should aid and abet the person who puts up that inappropriate and illegal content.”

All content published by Fortune China is the exclusive intellectual property of Fortune Media IP Limited and/or the relevant rights holders. Reproduction, excerpting, copying, mirroring, or any other use without permission is prohibited.