A terminator for harmful content: Facebook says its A.I. moderation technology has made a breakthrough


Jeremy Kahn, May 15, 2020
Facebook says it has made progress in automatically detecting and removing hate speech and COVID-19 misinformation.



It has pioneered a number of artificial intelligence techniques to help it police content across its social networks, Facebook said Tuesday in a series of blog posts.

The details about the technology Facebook is using came on the same day the company released its latest quarterly update on its efforts to combat hate speech, child pornography, fake accounts, political misinformation, terrorist propaganda, and other violations of its community standards. The report showed the company was combating a big surge in hate speech and COVID-19 related misinformation since the start of the year.

Among the new A.I. systems Facebook highlighted on Tuesday are systems that better understand the meaning of language and the context in which it is used, as well as nascent systems that combine image and language processing in order to detect harmful memes.

As well as helping to combat misinformation related to COVID-19, Facebook has also turned to new A.I. algorithms to police its new policy banning ads selling face masks, hand sanitizer, and other items that seek to exploit the pandemic for profit.

The company put warning labels on 50 million posts in April for possible misinformation around COVID-19, the company said in a blog. It also said that since the beginning of March it has removed 2.5 million pieces of content that violated rules about selling personal protective equipment or coronavirus test kits.

Facebook said that thanks to the new techniques, 88.8% of the hate speech the social network took down in the past quarter was detected automatically before someone saw and flagged the offensive material for review by the company's human reviewers. This is up from about 80% in the previous quarter.

But the company said that the total amount of hate speech it's finding continues to rise—9.6 million pieces of content were removed in the first three months of 2020, 3.9 million more than in the previous three months.

Mike Schroepfer, Facebook's chief technology officer, said the increase was due to the company getting better at finding hateful content, not a surge in hate speech itself. "I think this is clearly attributable to technological advances," he said on a call with reporters ahead of the release of the report.

In particular, Facebook has built on advances in very large language learning algorithms that have only been developed in the past three years. These models work by building a statistical picture of how the words in posted content relate to the other words that come both before and after them. Facebook has developed a system called XLM-R, trained on two terabytes of data, or about the equivalent of all the words in half a million 300-page books. It learns the statistical map of all of those words across multiple languages at once. The idea is that conceptual commonalities between hate speech in any language will mean the statistical maps of hate speech will look similar across every language, even if the words themselves are completely different.
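XLM-R itself is a large transformer whose weights cannot be reproduced in a few lines, but the cross-lingual idea above can be sketched with toy vectors: if two posts in different languages mean the same thing, a shared multilingual embedding space should place them close together. The vectors below are hand-made stand-ins for illustration only, not real model output.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

# Hand-made stand-ins for embeddings in a shared multilingual space
# (a real model such as XLM-R would emit vectors with hundreds of
# dimensions; these three are purely illustrative).
en_hateful = [0.90, 0.10, 0.80]   # hateful post in English
de_hateful = [0.85, 0.15, 0.75]   # the same message in German
en_benign  = [0.10, 0.90, 0.20]   # unrelated benign post in English

# Cross-lingual transfer rests on this property: same meaning,
# nearby vector, regardless of the surface language.
print(cosine(en_hateful, de_hateful) > cosine(en_hateful, en_benign))  # True
```

A classifier trained to separate hateful from benign vectors in English can then be applied unchanged to German, because the hateful German post already sits near its English counterparts in the space.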

Facebook is at pains to show it is making good on CEO Mark Zuckerberg's repeated promises that machine learning and A.I. will enable the company to combat the spread of hate speech, terrorist propaganda, and political misinformation across its platforms—problems that have put Facebook in the crosshairs of regulators globally and turned many one-time fans against the company in the past four years.

"We are not naive," Schroepfer said. "A.I. is not the solution to every single problem and we believe that humans will be in the loop for the foreseeable future."

Much of the tech Facebook highlighted is designed to make the job of its human content moderators and associated fact-checking organizations easier and less repetitive.

That is especially important at a time when social distancing measures instituted by the company as well as by various countries have meant that the centers where many of its human content moderators work have had to close, and the reviewers, many of whom are contractors, have been sent home. In some cases, Schroepfer said, the company has found ways for these people to continue their work from home, although that has not been possible in all cases.

"We want people making the final decisions, especially when the situation is nuanced," Schroepfer said. "But we want to give people we work with every day power tools." For instance, he said, if a human reviewer decided that a whole class of images constituted misinformation, Facebook should be able to automatically apply that label to similar content across both Facebook and Facebook-owned Instagram without the human reviewers having to find and manually remove all of it.

One way people try to evade Facebook's content blacklists is by making small modifications to blocked content—altering some pixels in an image or using a photo filter, for instance—and then trying to upload it again and hope it sneaks past Facebook's algorithms. To battle these tactics, the company has developed a new A.I. system, called SimSearchNet, trained to find pieces of nearly identical content.
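SimSearchNet's internals have not been published, but the underlying requirement—that a lightly edited copy should still match its original—can be illustrated with a toy perceptual hash. Everything below (the function names, the pixel values, the distance thresholds) is invented for the sketch; a production system like SimSearchNet uses learned image embeddings, not hand-built hashes.

```python
def average_hash(pixels):
    """Toy perceptual hash: threshold each pixel against the image mean.

    `pixels` is a flat list of grayscale values; a real implementation
    would first resize the image to a small fixed grid (e.g. 8x8) so
    that all hashes have the same length.
    """
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p >= mean else 0 for p in pixels)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

original  = [10, 200, 30, 220, 15, 210, 25, 230, 12]
tweaked   = [12, 198, 33, 219, 15, 212, 25, 228, 14]   # a few pixels nudged
unrelated = [200, 10, 220, 30, 210, 15, 230, 25, 212]  # a different image

# A lightly modified copy hashes (nearly) the same; an unrelated
# image lands far away in Hamming distance.
print(hamming(average_hash(original), average_hash(tweaked)))    # 0
print(hamming(average_hash(original), average_hash(unrelated)))  # 9
```

Because each bit only records whether a pixel is above or below the image's own mean brightness, nudging a few pixel values or applying a mild filter usually leaves the hash unchanged, which is exactly the evasion tactic the article describes.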

Another computer vision system the company has deployed to enforce its new COVID-19 ad policy works by identifying the objects present in an image, not simply forming a statistical map of all of the pixels it contains. This way the algorithm should be able to determine that the image has a face mask in it, even if that face mask is rotated at a funny angle or shown against a background designed to make it harder for machine learning software to recognize it, Schroepfer said.

Finally, the company said it was also working on "multimodal" machine learning systems—ones that can simultaneously analyze text and imagery, and in the future, possibly video and sound too—to combat the spread of hateful memes.
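Facebook has not published how these multimodal systems combine the modalities. The simplest conceivable scheme—averaging independent per-modality scores, sketched below with invented numbers—also hints at why the problem is hard: a hateful meme's caption and image can each look innocuous on its own, so a model must learn joint features of text and image together rather than merely combine two separate verdicts.

```python
def late_fusion(text_score, image_score, w_text=0.5, w_image=0.5):
    """Combine independent per-modality scores into one decision score.

    Both inputs are probabilities in [0, 1] from hypothetical
    single-modality classifiers; the weights are arbitrary.
    """
    return w_text * text_score + w_image * image_score

# A meme whose text and image are each borderline on their own:
# late fusion of the separate verdicts misses it, even though the
# combination of caption and picture may be clearly hateful.
score = late_fusion(text_score=0.45, image_score=0.40)
print(score < 0.5)  # True: the fused score stays under a 0.5 threshold
```

This failure mode is one reason multimodal detectors analyze text and imagery jointly: the hatefulness of a meme often lives in the interaction between the two, which no weighted average of independent scores can recover.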

To that end, the company has created a new dataset consisting of 10,000 memes that were determined to be part of hate speech campaigns, and it is making the dataset freely available for researchers to use to build A.I. systems capable of successfully detecting them. The company is creating a competition with a $100,000 prize pool to find the best hateful meme detection software, with the condition that in order to enter the contest, researchers must commit to open-sourcing their algorithms.

As a benchmark, Facebook's A.I. researchers created several systems of their own and trained them on this dataset. But the company's results so far indicate how difficult the challenge is: Facebook's best hateful meme detector, which was pre-trained on a very large dataset of both text and images simultaneously, was only 63% accurate. Human reviewers, by contrast, were about 85% accurate and missed fewer than 20% of the memes they should have caught.
