Disinformation is rampant. What will Facebook rely on to solve it?

JEREMY KAHN 2021-04-09
Misinformation on Facebook has become so severe that it is beyond humans' ability to police.

Image courtesy of Bratislav Milenkovic

In addition to testing American democracy, November’s election and the subsequent storming of the U.S. Capitol put social media to the test. Facebook and its rivals have spent years creating technology to combat the spread of disinformation, violent rhetoric, and hate speech. By some measure, the systems did better than ever in filtering out hundreds of millions of inflammatory posts. But ultimately the technology failed, allowing many similar posts to slip through.

In the days leading up to the election, unsubstantiated claims of widespread voting irregularities were the most shared content on Facebook, according to data analytics company CrowdTangle. At the top of the list were then-President Donald Trump’s posts falsely claiming there had been thousands of “fake votes” in Nevada and that he had won Georgia. Meanwhile, the top news stories on Facebook preceding the election were from far-right news sites such as Breitbart and Newsmax that played up specious voter fraud claims. Such falsehoods set the stage for the Capitol’s storming.

No company has been as vocal a champion of using artificial intelligence to police content as Facebook. CEO Mark Zuckerberg has repeatedly said, as he did in 2018 congressional testimony, that “over the long term, building A.I. tools is going to be the scalable way to identify and root out most of this harmful content.”

Translation: The problem is so big that humans alone can’t police the service.

Facebook has invested heavily to try to make good on its tech-centric solution. And there is some evidence of progress. For instance, of all the terrorism-related content it removes, Facebook says its A.I. helps find 99.8% of those posts before users flag them. For graphic and violent content, the number is 99.5%. And for hate speech, it’s 97%. That’s significantly better than three years ago, largely because of improvements in machine learning.
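
These percentages are, in essence, a "proactive rate": of the content Facebook ultimately removed, the share its systems flagged before any user reported it. A minimal sketch of that ratio, with invented counts, just to make the denominator explicit:

```python
# Hypothetical counts; the underlying figures are not broken out in the article.
removed_total = 1_000_000        # all terrorism-related posts removed in a period
found_by_ai_first = 998_000      # of those, flagged by automated systems before any user report

proactive_rate = found_by_ai_first / removed_total
print(f"{proactive_rate:.1%}")   # -> 99.8%, the kind of figure Facebook cites
```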

But success can be subjective. Facebook has a blanket policy against nudity, for instance. Yet the company’s independent Oversight Board, a sort of appeals court for users unhappy with Facebook’s moderating decisions, recently faulted it for blocking images in breast cancer awareness campaigns. Regulators want Facebook to block terrorist videos that are being used to radicalize young recruits, but not block those same videos when used on news programs. It’s a distinction A.I. struggles to make.

The meaning of language depends on context too. Studies show humans can identify sarcasm only about 60% of the time, so expecting A.I. to do better is a stretch, says Sandra Wachter, a tech law professor at the University of Oxford’s Internet Institute.

Eric Goldman, a Santa Clara University law professor, puts it another way: “One problem A.I. can never fix is the problem of context that doesn’t come from within the four corners of the content itself.”

Not that Facebook isn’t trying. It’s currently running a competition encouraging computer scientists to develop A.I. capable of detecting hateful memes. Memes are difficult because they require understanding of both images and text, and often a large amount of cultural information. “We recognize it is a tricky problem, which is why we published the data set and challenge, because we need to see innovation across the industry,” says Cornelia Carapcea, a product manager who works on Facebook’s A.I. moderating tools.
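
As a concrete illustration of why memes require both modalities, the sketch below (not Facebook's system, and far simpler than anything in the competition) fuses a meme's overlaid text with an image feature vector before classifying. The example texts, the tiny stand-in "image embeddings," and the labels are all invented:

```python
# Minimal multimodal sketch: hateful-meme detection needs image AND text signals together.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data: overlaid meme text plus a fake 4-dimensional image embedding
# (in practice this would come from a pretrained vision model).
texts = [
    "love your neighbor",
    "group X are vermin",
    "happy birthday grandma",
    "send group X back where they came from",
]
image_vecs = np.array([
    [0.1, 0.9, 0.2, 0.0],
    [0.8, 0.1, 0.7, 0.3],
    [0.2, 0.8, 0.1, 0.1],
    [0.7, 0.2, 0.9, 0.4],
])
labels = [0, 1, 0, 1]  # 1 = hateful

vectorizer = TfidfVectorizer()
text_features = vectorizer.fit_transform(texts)                 # sparse text features
features = hstack([text_features, csr_matrix(image_vecs)])      # fuse both modalities

clf = LogisticRegression().fit(features, labels)

# Scoring a new meme requires both modalities again: the same text over a
# different image (or vice versa) can flip the meaning entirely.
new_text = vectorizer.transform(["love your neighbor"])
new_image = csr_matrix(np.array([[0.75, 0.15, 0.8, 0.35]]))
print(clf.predict_proba(hstack([new_text, new_image]))[0, 1])   # probability it is hateful
```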

Misinformation—the harmful content that has most preoccupied Americans lately—is a challenge for A.I. because outside information is required to verify claims. For now, that requires human fact-checkers. But once misinformation is identified, A.I. can help check its spread. Facebook has developed cutting-edge A.I. systems that identify when content is essentially identical to something that’s already been debunked, even if it has been cropped or screenshotted in an attempt to evade detection. It can also now spot similar images and synonymous language, which in the past may have eluded automated filters.
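
As an illustration of the matching step only (the debunking itself is still done by human fact-checkers), here is a minimal sketch in which a simple 8x8 average-hash fingerprint stands in for the kind of near-duplicate image matching described above. The file names are hypothetical, and a production system would use learned embeddings, since a toy hash like this tolerates resizing and recompression but not heavy cropping or overlaid text:

```python
# Toy near-duplicate matcher: fingerprint images, then compare Hamming distance
# against fingerprints of content that fact-checkers have already debunked.
import numpy as np
from PIL import Image

def average_hash(path: str, size: int = 8) -> np.ndarray:
    """64-bit fingerprint: downscale, grayscale, threshold each pixel at the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = np.asarray(img, dtype=np.float32)
    return (pixels > pixels.mean()).flatten()

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

# Built from images that human fact-checkers have already labeled as misinformation.
debunked_hashes = [average_hash("debunked_claim.png")]       # hypothetical file
candidate = average_hash("reshared_screenshot.png")          # hypothetical file

if any(hamming(candidate, h) <= 5 for h in debunked_hashes):
    print("near-duplicate of debunked content: label it and limit its spread")
```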

These systems helped Facebook slap warnings on over 180 million pieces of content in the U.S. between March 1, 2020, and Election Day. If that’s a sign of A.I.’s success, it is also an indication of the problem’s scale. A.I. works best when the data it’s analyzing changes little over time. That’s not the case for hate speech or disinformation. What results is a cat-and-mouse game between those disseminating malicious content and Facebook’s systems.

Some blame Facebook for raising public expectations of what A.I. can achieve. “It is in their self-interest to overstate the efficiency of the technology if it will deflect further regulation,” Santa Clara University’s Goldman says.

Others say the problem is more fundamental: Facebook makes money by keeping users on its platform so advertisers can market to them. And controversial content drives higher engagement. That means if harmful posts slip through Facebook’s dragnet, the company’s other algorithms will amplify them. “The business model is the core problem,” says Jillian York, a researcher at civil liberties nonprofit the Electronic Frontier Foundation.

In the days after the November election, with political tensions at a fever pitch, Facebook did tweak its News Feed algorithm to de-emphasize sources that were spreading misinformation and to boost news from higher-quality media outlets. But it rolled back the change weeks later.

Currently Facebook reduces the prominence of content it identifies as misinformation, shows warnings to those trying to share known misinformation, and notifies people if a story they have previously shared is later debunked. Users who repeatedly share misinformation are only rarely kicked off the service, but they “will see their overall distribution reduced and will lose the ability to advertise or monetize within a given time period,” the company says.

Facebook’s Carapcea says the company is considering similar measures for other harmful content. But humans will continue to play a big role in deciding when to apply them.

Says Carapcea: “Getting to 100% is a good North Star, but it may not ultimately be what happens here.” 

A.I. in action

Facebook’s A.I. has had a mixed track record with helping identify and remove harmful content before users flag it.

The following shows how much of the content in various categories Facebook removes that it finds without user input:

99.8%: Terrorism content

97.1%: Hate speech

92.8%: Glorification of suicide and self-harm

90%: Election suppression, misinformation, and threats (2018 election)

48.8%: Online bullying

Source: Facebook (Q4 2020, unless otherwise noted)

This article appears in the April/May issue of Fortune with the headline, "Facebook's complicated cleanup."
