Face-swapped porn videos are putting all women at risk. How can they be stopped?

By Jeff John Roberts, January 18, 2019

Recent rapid advances in artificial intelligence software have made it surprisingly easy to graft the faces of female celebrities, and even ordinary women, onto the bodies of adult-film actresses.

Illustration: Andrew Nusca/Fortune


In the darker corners of the Internet, you can now find celebrities like Emma Watson and Salma Hayek performing in pornographic videos. The clips are fake, of course—but it’s distressingly hard to tell. Recent improvements in artificial intelligence software have made it surprisingly easy to graft the heads of stars, and ordinary women, to the bodies of X-rated actresses to create realistic videos.

These explicit movies are just one strain of so-called “deepfakes,” which are clips that have been doctored so well they look real. Their arrival poses a threat to democracy; mischief makers can, and already have, used them to spread fake news. But another great danger of deepfakes is their use as a tool to harass and humiliate women.

There are plenty of celebrity deepfakes on pornographic websites, but Internet forums dedicated to custom deepfakes—men paying to create videos of ex-partners, co-workers, and others without their knowledge or consent—are proliferating. Creating these deepfakes isn’t difficult or expensive in light of the proliferation of A.I. software and the easy access to photos on social media sites like Facebook.

Yet the legal challenges for victims to remove deepfakes can be daunting. While the law may be on their side, victims also face considerable obstacles—ones that are familiar to those who have sought to confront other forms of online harassment.

The First Amendment and Deepfakes

Charlotte Laws knows how devastating non-consensual pornography can be. A California author and former politician, Laws led a successful campaign to criminalize so-called “revenge porn” after someone posted nude photos of her teenage daughter on a notorious website. She is also alarmed by deepfakes.

“The distress of deepfakes is as bad as revenge porn,” she says. “Deepfakes are realistic, and their impact is compounded by the growth of the fake news world we’re living in.”

Laws adds that deepfakes have become a common way to humiliate or terrorize women. In a survey she conducted of 500 women who had been victims of revenge porn, Laws found that 12% had also been subjected to deepfakes.

One way to address the problem could involve lawmakers expanding state laws banning revenge porn. These laws, which now exist in 41 U.S. states, are of recent vintage and came about as politicians began to change their attitudes to non-consensual pornography.

“When I began, it wasn’t something people addressed,” Laws says. “Those who heard about it were against the victims, from media to legislators to law enforcement. But it’s really gone in the other direction, and now it’s about protecting the victims.”

New criminal laws could be one way to fight deepfakes. Another approach is to bring civil lawsuits against the perpetrators. As the Electronic Frontier Foundation notes in a blog post, those subjected to deepfakes could sue for defamation or for portraying them in a “false light.” They could also file a “right of publicity” claim, alleging the deepfake makers profited from their image without permission.

All of these potential solutions, however, could bump up against a powerful obstacle: free speech law. Anyone sued over deepfakes could claim the videos are a form of cultural or political expression protected by the First Amendment.

Whether this argument would persuade a judge is another matter. Deepfakes are new enough that courts haven’t issued any decisive ruling on which of them might count as protected speech. The issue is even more complicated given the messy state of the law related to the right of publicity.

“The First Amendment should be the same across the country in right of publicity cases, but it’s not,” says Jennifer Rothman, a professor at Loyola Law School and author of a book about privacy and the right of publicity. “Different circuit courts are doing different things.”

In the case of deepfakes involving pornography, however, Rothman predicts that most judges would be unsympathetic to a First Amendment claim—especially in cases where the victims are not famous. A free speech defense to claims of false light or defamation, she argues, would turn in part on whether the deepfake was presented as true and would be analyzed differently for public figures. A celebrity victim would have the added hurdle of showing “actual malice,” the legal term for knowing the material was fake, in order to win the case.

Any criminal laws aimed at deepfakes would likely survive First Amendment scrutiny so long as they narrowly covered sexual exploitation and did not include material created as art or political satire.

In short, free speech laws are unlikely to be a serious impediment for targets of deepfake pornography. Unfortunately, even if the law is on their side, the victims nonetheless have few practical options to take down the videos or punish those responsible for them.

A New Takedown System?

If you discover something false or unpleasant about you on the Internet and move to correct it, you’re likely to encounter a further frustration: There are few practical ways to address it.

“Trying to protect yourself from the Internet and its depravity is basically a lost cause … The Internet is a vast wormhole of darkness that eats itself,” actress Scarlett Johansson, whose face appears in numerous deepfakes, recently told the Washington Post.

Why is Johansson so cynical? Because the fundamental design of the Internet—distributed, without a central policing authority—makes it easy for people to anonymously post deepfakes and other objectionable content. And while it’s possible to identify and punish such trolls using legal action, the process is slow and cumbersome—especially for those who lack financial resources.

According to Laws, it typically takes $50,000 to pursue such a lawsuit. That money may be hard to recoup since defendants are often broke or based in a far-flung location. This leaves the option of going after the website that published the offending material, but this, too, is likely to prove fruitless.

The reason is a powerful law known as Section 230, which creates a legal shield for website operators over what users post on their sites. It ensures that a site like Craigslist, for instance, isn’t liable if someone uses its classified ads to write defamatory messages.

In the case of sites like 8Chan and Mr. Deepfakes, which host numerous deepfake videos, the operators can claim immunity because it is not them but their users that are uploading the clips.

The legal shield is not absolute. It contains an exception for intellectual property violations, which obliges websites to take down material if they receive a notice from a copyright owner. (If the person who posted the material objects, a counter-notice process allows it to be restored.)

The intellectual property exception could help deepfake victims defeat the websites’ immunity, notably if the victim invokes a right of publicity. But here again the law is muddled. According to Rothman, courts are unclear on whether the exception applies to state intellectual property laws—such as right of publicity—or only to federal ones like copyright and trademark.

All of this raises the question of whether Congress and the courts, which have been chipping away at Section 230’s broad immunity in recent years, should change the law and make it easier for deepfake victims to remove the images. Laws believes this would be a useful measure.

“I don’t feel the same as Scarlett Johansson,” Laws says. “I’ve seen the huge improvements in revenge porn being made over the past five years. I have great hope for continual improvement and amendments, and that we’ll get these issues under control eventually.”

Indeed, those who share Laws’ views have momentum on their side as more people look askance at Internet platforms that, in the words of the legal scholar Rebecca Tushnet, enjoy “power without responsibility.” And in a closely watched case involving the dating app Grindr, a court is weighing whether to require website operators to be more active in purging their platforms of abusive behavior.

Not everyone is convinced this is a good idea, however. Section 230 is regarded by many as a visionary piece of legislation that allowed U.S. Internet companies to flourish in the absence of legal threats. The Electronic Frontier Foundation has warned that eroding immunity for websites could stifle business and free expression.

This raises the question of whether Congress could draft a law narrow enough to help victims of deepfakes without such unintended consequences. As a cautionary tale, Annemarie Bridy, a law professor at the University of Idaho, points to the misuse of the copyright takedown system in which companies and individuals have acted in bad faith to remove legitimate criticism and other legal content.

Still, given what’s at stake with pornographic deepfake videos, Bridy says, it could be worth drafting a new law.

“The seriousness of the harm from deepfakes, to me, justifies an expeditious remedy,” she says. “But to get the balance right, we’d also need an immediate, meaningful right of appeal and safeguards against abusive notices intended to censor legitimate content under false pretenses.”
