What's wrong with Facebook? Its technology keeps getting better, but it solves none of the company's problems

JEREMY KAHN 2020-11-23
As long as Facebook continues to make exceptions to its rules for powerful politicians and popular, but extremist, media organizations, no amount of technological progress is likely to repair the company's battered public image.

Facebook has revealed that the artificial intelligence systems it uses to police its social media sites are now good enough to automatically flag more than 94% of the hate speech posted on those sites, as well as to catch more than 96% of content linked to organized hate groups.

This represents a rapid leap in Facebook's capabilities—in some cases, these A.I. systems are five times better at catching content that violates the company's policies than they were just one year ago.

And yet this technological progress isn't likely to do much to improve Facebook's embattled public image as long as the company continues to make exceptions to its rules for powerful politicians and popular, but extremist, media organizations.

In recent weeks, Facebook has been under fire for not doing more to slow the spread of false claims about the election made by U.S. President Donald Trump, and for not banning former Trump advisor Steve Bannon after he used Facebook to distribute a podcast in which he called for the beheading of two U.S. officials whose positions have sometimes angered the president.

Facebook did belatedly label some of Trump's posts, such as ones in which he claimed he had won the election, as misleading, and appended to some of them a note saying that "ballot counting will continue for days or weeks." But critics said it should have removed or blocked these posts completely. Rival social media company Twitter did temporarily block new posts from the official Trump campaign account, as well as those from some Trump advisors, during the run-up to the election. Facebook said Trump's posts fell within a "newsworthiness" exemption to its normal policies.

As for Bannon's posts, Facebook CEO Mark Zuckerberg said they had been taken down but that the right-wing firebrand had not violated the company's rules frequently enough to warrant banning him from the platform.

Mike Schroepfer, Facebook's chief technology officer, acknowledged that efforts to strengthen the company's A.I. systems so they could detect—and in many cases automatically block—content that violates the company's rules were not a complete solution to the company's problems with harmful content.

"I'm not naive about this," Schroepfer said. "I'm not saying technology is the solution to all these problems." Schroepfer said the company's efforts to police its social network rested on three legs: technology capable of identifying content that violated the company's policies, the capability to quickly act on that information to prevent that content from having an impact and the policies themselves. Technology could help with the first two of those, but could not determine the policies, he added.

The company has increasingly turned to automated systems to help augment the 15,000 human content moderators, many of them contractors, that it employs across the globe. This year, for the first time, Facebook began using A.I. to determine the order in which content is brought before these human moderators for a decision on whether it should remain up or be taken down. The software prioritizes content based on how severe the likely policy violation may be and how likely the piece of content is to spread across Facebook's social networks.
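Facebook has not published the ranking formula its queueing system uses, but the idea, scoring each post by likely violation severity weighted by predicted spread and reviewing from the top, can be sketched in a few lines. Everything in this sketch (names, weights, numbers) is an illustrative assumption:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class QueuedPost:
    # heapq is a min-heap, so the score is stored negated: the post
    # with the highest priority score is popped first.
    neg_score: float
    post_id: str = field(compare=False)

def priority_score(violation_severity: float, predicted_views: float) -> float:
    """Hypothetical ranking: weight the likely severity of the policy
    violation (0..1, e.g. a classifier's confidence) by how many views
    the post is predicted to get if left up."""
    return violation_severity * predicted_views

queue = []
for post_id, severity, views in [("a", 0.9, 10_000), ("b", 0.4, 2_000_000), ("c", 0.95, 50)]:
    heapq.heappush(queue, QueuedPost(-priority_score(severity, views), post_id))

while queue:
    post = heapq.heappop(queue)
    # Human moderators would review posts in this order: b, a, c.
    print(post.post_id, -post.neg_score)
```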

Schroepfer said that the aim of the system is to try to limit what Facebook calls "prevalence"—a metric which translates roughly into how many users might be able to see or interact with a given piece of content.
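In its transparency reports, Facebook estimates prevalence as, roughly, the share of all content views that land on violating content. The real methodology involves statistical sampling and confidence intervals, but a back-of-the-envelope version looks like this:

```python
def prevalence(violating_views: int, sampled_views: int) -> float:
    """Naive point estimate: the fraction of sampled content views
    that were views of policy-violating content."""
    return violating_views / sampled_views if sampled_views else 0.0

# e.g. 1,100 violating views in a sample of 1,000,000 views is 0.11%,
# in the ballpark of the hate-speech prevalence Facebook reported in late 2020
print(f"{prevalence(1_100, 1_000_000):.2%}")
```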

The company has moved rapidly to put several cutting-edge A.I. technologies pioneered by its own researchers into its content moderation systems. These include software that can translate between 100 languages without routing through a common intermediary language such as English. This has helped the company's A.I. combat hate speech and disinformation, especially in less common languages for which it has far fewer human content moderators.
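That description matches M2M-100, the many-to-many translation model Facebook's researchers released publicly in October 2020; treating it as the system in question here is an assumption, but the open release can be tried directly through the Hugging Face transformers port:

```python
# pip install transformers sentencepiece torch
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

# Translate Chinese directly to French, with no English pivot in between.
tokenizer.src_lang = "zh"
encoded = tokenizer("生活就像一盒巧克力。", return_tensors="pt")
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("fr"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```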

Schroepfer said the company had made big strides in "similarity matching," which tries to determine whether a new piece of content is broadly similar to another that has already been removed for violating Facebook's policies. He gave an example from COVID-19 disinformation: posts falsely claiming that surgical face masks contained known carcinogens were taken down after review by human fact-checkers, and an A.I. system then identified, and automatically blocked, a second post that used slightly different language and a similar, but not identical, face mask image.
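Facebook has not detailed the matching pipeline (its published SimSearchNet system for near-duplicate images is one known component), but the core move, embedding content and comparing it against embeddings of already-removed items, can be illustrated with a toy encoder and cosine similarity. The encoder and threshold below are stand-ins, not the production system:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a learned encoder: a byte-frequency vector. A real
    system would embed text and images with trained neural networks."""
    vec = np.zeros(256)
    for byte in text.encode("utf-8"):
        vec[byte] += 1.0
    return vec

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

removed = embed("surgical face masks contain known carcinogens")
new_post = embed("surgical masks are made with known carcinogens")

SIM_THRESHOLD = 0.9  # illustrative; a production threshold is tuned carefully
if cosine(removed, new_post) >= SIM_THRESHOLD:
    print("near-duplicate of removed content: block automatically")
```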

He also said that many of these systems were now "multi-modal"—able to analyze text in conjunction with images or video and sometimes also audio. And while Facebook has individual software designed to catch each specific type of malicious content—one for advertising spam and one for hate speech, for example—it also has a new system it calls Whole Post Integrity Embedding (WPie for short) that is a single piece of software that can identify a whole range of different types of policy violations, without having to be trained on a large number of examples of each violation type.
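WPie's internals are not public, but the description, a single model that represents the whole post and scores many violation types at once, suggests a shared encoder feeding a multi-label head. A hypothetical sketch of that shape in PyTorch:

```python
import torch
import torch.nn as nn

class WholePostClassifier(nn.Module):
    """Hypothetical WPie-style model: fuse text and image embeddings
    into one whole-post vector, then score every policy area at once."""

    def __init__(self, text_dim=768, image_dim=512, hidden=256, n_policies=10):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden),
            nn.ReLU(),
        )
        # A single multi-label head covering hate speech, spam, etc.,
        # instead of one separate model per violation type.
        self.heads = nn.Linear(hidden, n_policies)

    def forward(self, text_emb: torch.Tensor, image_emb: torch.Tensor) -> torch.Tensor:
        post_vec = self.fuse(torch.cat([text_emb, image_emb], dim=-1))
        return torch.sigmoid(self.heads(post_vec))  # per-policy violation probabilities

model = WholePostClassifier()
scores = model(torch.randn(1, 768), torch.randn(1, 512))
print(scores.shape)  # torch.Size([1, 10]), one score per policy type
```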

The company has also used research competitions to try to help it build better content moderation A.I. Last year, it announced the results of a contest it ran that saw researchers build software to automatically identify deepfake videos: highly realistic-looking fake videos that are themselves created with machine learning techniques. It is currently running a competition to find the best algorithms for detecting hateful memes, a difficult challenge because a successful system needs to understand how the image and text in a meme jointly affect its meaning, and may also need to grasp a lot of context not found within the meme itself.
