Google and Facebook’s biggest problem isn’t controlling their platforms. It’s managing public expectations.

By Christopher Koopman and Megan Hansen, December 26, 2018
While accusing these companies of bias is easy, it’s also wrong.

Google CEO Sundar Pichai testifies before the House Judiciary Committee on December 11, 2018. Given the scale and complexity of content moderation, results are bound to vary. That doesn’t mean Google and Facebook are biased. Image credit: Alex Wong/Getty Images

Google CEO Sundar Pichai’s testimony before the House Judiciary Committee on Dec. 11 is just the latest example of a tech company having to respond to accusations of bias. While Pichai obviously spent much of his time defending Google against such allegations about search results on Google and YouTube, he isn’t alone. Platforms like Facebook, for instance, have been accused both of “catering to conservatives” and of acting as a network of “incubators for far-left liberal ideologies.”

While accusing these companies of bias is easy, it’s also wrong.

As Rep. Zoe Lofgren (D-CA) correctly pointed out during Pichai’s testimony, “It’s not some little man sitting behind the curtain figuring out what [companies] are going to show the users.” Instead, these companies—and the people who work there—have been tasked with moderating content created by billions of users across the globe while also having to satisfy both the broader public and competing lawmakers who aren’t afraid to throw their weight around. Moreover, these companies are taking on this impossible task of moderating while also filtering content in a consistent and ideologically neutral way. And, for the most part, they are doing an admirable job.

Given the complexity and scale of the task, we shouldn’t be surprised that results vary. As Pichai noted, Google served over 3 trillion searches last year, and 15% of the searches Google sees per day have never been entered before on the platform. Do the math, and that means somewhere around 450 billion of the searches Google served last year were brand-new queries.
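
For readers who want to check that figure, the back-of-envelope arithmetic is below. It treats the 15% share of never-before-seen queries, which Pichai gave as a daily rate, as holding on average across the year’s roughly 3 trillion searches; that simplification is ours, not Google’s.

```python
# Back-of-envelope check of the ~450 billion figure above.
# Simplifying assumption: the 15% "never seen before" share of daily queries
# is treated as holding across all ~3 trillion searches served in a year.
total_searches = 3_000_000_000_000   # ~3 trillion searches served last year
new_query_share = 0.15               # ~15% of daily queries are brand new

new_queries = total_searches * new_query_share
print(f"{new_queries:,.0f}")         # -> 450,000,000,000 (~450 billion)
```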

Inevitably, many people will be left unsatisfied with how their preferred commentators and ideological views are returned in those searches, or moderated on other platforms. Mistakes will occur, trade-offs will be made, and there will always be claims that content moderation is driven by bias and animus.

Tech companies are attempting to achieve many different—sometimes conflicting—goals at once. They are working to limit nudity and violence, control fake news, prevent hate speech, and keep the internet safe for all. Such a laundry list makes success hard to define—and even harder to achieve. This is especially the case when these goals are pitted against the sacrosanct American principle of free speech, and a desire (if not a business necessity) to respect differing viewpoints.

When these values come into conflict, who decides what to moderate, and what to allow?

As it has expanded and welcomed in more than 2 billion users, Facebook has upped its content moderation game as well. The company now has a team of lawyers, policy professionals, and public relations experts in 11 offices across the globe tasked with crafting “community standards” that determine how to moderate content.

In recent months, Facebook has been more open about how these rules are developed and employed. This spring, Monika Bickert, the platform’s head of global policy management, wrote about Facebook’s three principles of safety, voice, and equity, and the “aim to apply these standards consistently and fairly to all communities and cultures.”

Can any standard be consistently applied to billions of posts made every single day in more than 100 different languages? Artificial intelligence and machine learning are very good at filtering out nudity, spam, fake accounts, and graphic violence. But for content that is dependent on context—which has always been the thornier issue—platforms must rely on human moderators to sort through each and every post that might violate their rules.
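
To make that division of labor concrete, here is a minimal sketch of the kind of hybrid triage the paragraph describes: automated classifiers act on the clear-cut categories, and anything context-dependent or uncertain goes to a human review queue. The category names, confidence threshold, and triage logic are hypothetical illustrations under those assumptions, not the actual pipeline of Facebook or any other platform.

```python
# Hypothetical sketch of hybrid content triage: machines filter clear-cut
# violations, humans review context-dependent cases. Names and thresholds
# are illustrative only, not any platform's real system.
from dataclasses import dataclass

# Categories the article says automated systems already filter well.
CLEAR_CUT = {"nudity", "spam", "fake_account", "graphic_violence"}

@dataclass
class Classification:
    category: str      # predicted violation type, e.g. "spam" or "hate_speech"
    confidence: float  # model confidence between 0 and 1

def triage(post_id: str, result: Classification, auto_threshold: float = 0.95) -> str:
    """Route a classified post: automatic action, human review, or leave it up."""
    if result.category in CLEAR_CUT and result.confidence >= auto_threshold:
        return f"auto-remove {post_id}"             # machine decision is reliable here
    if result.confidence >= 0.5:
        return f"queue {post_id} for human review"  # context-dependent or uncertain
    return f"leave {post_id} up"                    # no likely violation detected

print(triage("post-1", Classification("spam", 0.99)))         # auto-remove post-1
print(triage("post-2", Classification("hate_speech", 0.97)))  # queue post-2 for human review
```

Even in this toy version, every hard judgment call lands in the human review queue, which is where the inconsistencies described below come from.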

Putting aside the fact that they have not been able to satisfy those operating on either side of the political spectrum, Facebook and other platforms have taken their obligation to protect users seriously. After all, each faces a strong financial incentive to keep its users happy, and to avoid the appearance of favoring one set of political beliefs over another. Thus, creating neutral rules that can be consistently applied, regardless of political affiliation, is in a platform’s self-interest.

But when you look at how content moderation actually gets done, it’s clear that discretion by human beings plays a very large role. Facebook’s policies on what constitutes hate speech are written by human beings, and ultimately are enforced by human beings who—no matter how well-meaning they are—have different backgrounds, biases, and understandings of the subject matter. We shouldn’t be surprised when the results are inconsistent, messy, and end up leaving both conservatives and liberals unhappy. This doesn’t mean tech companies are politically biased—it means their job is incredibly difficult.

Christopher Koopman is the senior director of strategy and research and Megan Hansen is the research director for the Center for Growth and Opportunity at Utah State University.
