Generative AI is cloning people’s likenesses without their consent, and some people are becoming famous without ever knowing why

Alexandru Voica 2024-03-15
For now, there is no way to stop AI programs from misappropriating people’s likenesses.

Image credit: GETTY IMAGES



One Friday evening a few weeks ago, I was in my home country of Romania, visiting family for a funeral, when I found myself thinking: Was it time for me to start teaching my kids how to speak Romanian? For the past 15 years, I have built a life in the U.K., where my kids were born and raised. They love their Romanian grandparents but struggle to communicate with them, and I wanted to do something about it.

So I started looking for solutions. I searched the internet for about an hour but couldn’t find anything useful, so I went back to my evening.

A few days later, I was scrolling through my Instagram feed when an ad appeared for a language learning app. Having worked for a social media company, I knew what had happened: The company had tracked my activity online, saw I was interested in language learning apps, and decided to target me with an ad. And that’s okay: I’ve had similar experiences in the past and even decided to buy products based on this type of targeted advertising.

Over the next few days, I kept getting more and more ads from the same language app. But once I started to pay closer attention, I realized there was something more troubling going on.

While some of the ads had real people excitedly encouraging me to download the app and try it out “risk free,” other ads looked eerily familiar. They featured people speaking directly to me in French or Chinese, claiming to have mastered a foreign language in mere weeks, thanks to the app’s miraculous capabilities. However, what was really going on was not actually miraculous but alarming: The videos were manipulated through deepfake technology, potentially without the consent of the people featured in them.

While AI-generated media can be used for harmless entertainment, education, or creative expression, deepfakes have the potential to be weaponized for malicious purposes, such as spreading misinformation, fabricating evidence, or, in this case, perpetrating scams.

Because I’ve been working in AI for almost a decade, I could easily spot that the people in these ads weren’t actually real, nor were their language skills. Instead, I came to learn, thanks to an investigation by Sophia Smith Galer, that an app had been used to clone real people without their knowledge or permission, eroding their autonomy and potentially damaging their reputations.

A troubling aspect of these deepfake ads was the lack of consent inherent in their creation. The language app likely used the services of a video cloning platform developed by a generative AI company that has changed its name four times in the last three years and does not have any measures in place to prevent the unauthorized cloning of people or any obvious mechanisms to remove someone’s likeness from their databases.

This exploitation is not only unethical but also undermines trust in the digital landscape, where authenticity and transparency are already in short supply. Take the example of Olga Loiek, a Ukrainian student who owns a YouTube channel about wellness. She was recently alerted by her followers that videos of her had been appearing in China. On the Chinese internet, Loiek’s likeness had been transformed into an avatar of a Russian woman looking to marry a Chinese man. She found that her YouTube content had been fed into the same platform that was used to generate the scam ads I’d been seeing on Instagram, and an avatar bearing her likeness was now proclaiming love for Chinese men on Chinese social media apps. Not only was this offensive to Loiek on a personal level because of the war in Ukraine, but it was the type of content she would have never agreed to participate in if she had had the option of withholding her consent.

I reached out to Loiek to get her thoughts on what happened to her. Here’s what she had to say: “Manipulating my image to say statements I would never condone violates my personal autonomy and means we need stringent regulations to protect individuals like me from such invasions of identity.”

Consent is a fundamental principle that underpins our interactions in both the physical and digital realms. It is the cornerstone of ethical conduct, affirming individuals’ rights to control their own image, voice, and personal data. Without consent, we risk violating people’s privacy, dignity, and agency, opening the door to manipulation, exploitation, and harm.

In my job as the head of corporate affairs for an AI company, I’ve worked with a campaign called #MyImageMyChoice, trying to raise awareness of how nonconsensual images generated with deepfake apps have ruined the lives of thousands of girls and women. In the U.S., one in 12 adults has reported being a victim of image-based abuse. I’ve read harrowing stories from some of these victims, who have shared how their lives were destroyed by images or videos generated by AI apps. When they tried to issue DMCA takedowns to these apps, they received no reply or were told that the companies behind the apps were not subject to any such legislation.

We’re entering an era of the internet where more and more of the content we see will be generated with AI. In this new world, consent takes on heightened importance. As the capabilities of AI continue to advance, so too must our ethical frameworks and regulatory safeguards. We need robust mechanisms to ensure that individuals’ consent is obtained and respected in the creation and dissemination of AI-generated content. This includes clear guidelines for the use of facial and voice recognition technology, as well as mechanisms for verifying the authenticity of digital media.

Moreover, we must hold accountable those who seek to exploit deepfake technology for fraudulent or deceptive purposes and those who release deepfake apps that have no guardrails in place to prevent misuse. This requires collaboration between technology companies, policymakers, and civil society to develop and enforce regulations that deter malicious actors and protect users from real-world harm, instead of focusing only on imaginary doomsday scenarios from sci-fi movies. For example, we should not allow video or voice cloning companies to release products that create deepfakes of individuals without their consent. And during the process of obtaining consent, perhaps we should also mandate that these companies introduce informational labels that tell users how their likeness will be used, where it will be stored, and for how long. Many consumers might glance over these labels, but there can be real consequences to having a deepfake of someone stored on servers in countries such as Russia or Belarus, where there is no real recourse for victims of deepfake abuse. Finally, we need to give people mechanisms for opting out of their likeness being used online, especially if they have no control over how it is used. In the case of Loiek, the company behind the platform used to clone her without her consent did not respond or take any action when reporters approached it for comment.

Until better regulation is in place, we need greater public awareness and digital literacy efforts to empower individuals to recognize manipulation and safeguard their biometric data online. We must equip consumers to make more informed decisions about the apps and platforms they use and to recognize the potential consequences of sharing personal information, especially biometric data, in digital spaces and with companies that are prone to government surveillance or data breaches.

Generative AI apps have an undeniable allure, especially for younger people. But when people upload images or videos containing their likeness to these platforms, they unknowingly expose themselves to a myriad of risks, including privacy violations, identity theft, and potential exploitation.

While I am hopeful that one day my children can communicate with their grandparents with the help of real-time machine translation, I am deeply concerned about the impact of deepfake technology on the next generation, especially when I look at what happened to Taylor Swift, or the victims who have shared their stories with #MyImageMyChoice, or countless other women suffering from sexual harassment and abuse who have been forced into silence.

My children are growing up in a world where digital deception is increasingly sophisticated. Teaching them about consent, critical thinking, and media literacy is essential to helping them navigate this complex landscape and safeguard their autonomy and integrity. But that’s not enough: We need to hold the companies developing this technology accountable. We also must push governments to take action faster. For example, the U.K. will soon start to enforce the Online Safety Bill, which criminalizes deepfakes and should force tech platforms to take action and remove them. More countries should follow its lead.

And above all, we in the AI industry must be unafraid to speak out and remind our peers that this freewheeling approach to building generative AI technology is not acceptable.

Alexandru Voica is the head of corporate affairs and policy at Synthesia, and a consultant for Mohamed bin Zayed University of Artificial Intelligence.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
