X builds its own team to clean up toxic content

KYLIE ROBISON 2024-02-25

The new moderators have a job to do, without pushing too hard against owner Elon Musk's promise of "free speech."

Under owner Elon Musk, X is bringing some content moderators in-house. Will it help?

JONATHAN RAA/NURPHOTO VIA GETTY IMAGES

In the spring of 2023, when X was still called Twitter, the company began planning a new system to keep the most undesirable content off of its platform.

In place of the army of contract workers that policed most social media sites, including Twitter, the company would build its own, smaller, in-house team of content moderators — a specialized safety net to prevent the most egregious stuff from slipping through without crimping too much on Twitter owner Elon Musk’s outspoken commitment to “free speech.”

Last week, nearly one year later, X announced a new Trust and Safety Center of Excellence in Austin, Texas. The 100-person team of content moderators touted by an X representative in a Bloomberg news report is significantly smaller than the 500-person team that was initially envisioned, according to a former trust and safety staffer. And it’s unclear if X has hired more than a dozen or so people so far.

Still, at a time when lawmakers are turning up the heat on social media companies for endangering children, X’s safety center in Austin has clear PR value. The new center will “bring more agents in house to accelerate our impact,” X CEO Linda Yaccarino said at a Senate hearing on Wednesday.

While some critics took note of the opportunistic timing of the announcement, the details of X’s Austin plan raise a bigger question about the Musk-owned platform: Could Musk’s unconventional approach to content moderation outperform the social media industry’s woeful track record of online safety, or does it represent just another means to cut costs by an organization with little interest in deciding which content is appropriate for its users?

According to several content moderation experts and current or former X insiders that Fortune spoke to, a team of in-house specialists could provide significant advantages compared to the current industry norms. But many also stressed the importance of a coherent underlying policy and investments in tools and technology.

“X+100 is better than just X, in terms of moderation capacity,” a source familiar with trust and safety at X explained. But, the person continued, “the number of humans at computers matters less, in some ways, than having clear policies rooted in proven harm reduction strategies, and the tools and systems necessary to implement those policies at scale — both of which have been dismantled since late 2022.”

X did not respond to requests for an interview or to comment for this story.

Linda Yaccarino, CEO of X, at a Senate hearing on online child safety

ANDREW CABALLERO-REYNOLDS/AFP VIA GETTY IMAGES

Why X decided to bring the content police in-house

The flood of problematic content on X has become a frequent topic of public debate, and disputes, since Musk’s $44 billion acquisition closed in November 2022.

After the Center for Countering Digital Hate published a report claiming that X failed to moderate “extreme hate speech,” Musk sued the group for doing calculated harm with “baseless claims.” Meanwhile, videos of graphic animal abuse have spread widely on the platform, according to reports. And just last week, explicit, AI-generated content featuring Taylor Swift circulated unchecked for 17 hours until the platform shut down the ability to search for her name at all.

To moderate its millions of active users, X also leans heavily on its Community Notes feature, which allows approved users to add a note with additional context to posts, the former trust and safety staffer said. But the person emphasized that this is merely “one tool” that should be used for moderation. What’s more, a Wired investigation uncovered coordinated efforts within the feature to propagate disinformation, highlighting a lack of significant oversight from the company.
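X has open-sourced the ranking logic behind Community Notes, which, roughly, surfaces a note only when contributors who normally disagree with each other both rate it helpful. Below is a minimal Python sketch of that bridging idea; the class, function names, and thresholds are illustrative assumptions, and the production system scores raters with matrix factorization over the full rating history rather than a fixed leaning score.

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    """A contextual note attached to a post. Illustrative model, not X's schema."""
    text: str
    helpful_raters: set = field(default_factory=set)  # ids of raters who marked it helpful

def note_is_shown(note: Note, rater_leaning: dict, min_raters: int = 5,
                  min_bridged_share: float = 0.3) -> bool:
    """Gate visibility on 'bridged' agreement: the note needs helpful ratings
    from raters on both sides of a coarse leaning score. A simplified stand-in
    for the real matrix-factorization scoring."""
    helpful = note.helpful_raters
    if len(helpful) < min_raters:
        return False
    left = sum(1 for r in helpful if rater_leaning.get(r, 0) < 0)
    right = sum(1 for r in helpful if rater_leaning.get(r, 0) > 0)
    return min(left, right) / len(helpful) >= min_bridged_share

# A note rated helpful only by one side stays hidden:
leanings = {"a": -1, "b": -1, "c": -1, "d": -1, "e": -1}
note = Note("Adds missing context.", helpful_raters={"a", "b", "c", "d", "e"})
print(note_is_shown(note, leanings))  # False: no cross-perspective agreement
```

The printed result is False because all of the helpful ratings come from one side; that cross-perspective requirement is what separates Community Notes from a simple upvote count.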

By some estimates, Musk has cut 80% of the engineers dedicated to trust and safety and thinned the ranks of outsourced content moderators whose job it is to monitor and remove content that violates the company’s policies. In July, Yaccarino announced to staff that three leaders would oversee various aspects of trust and safety, such as law enforcement operations and threat disruptions, Reuters reported. However, according to another source at X, it’s unclear where T&S stands within the organization’s hierarchy; the group doesn’t appear to be “at the top level anymore.”

However, within Musk’s social media company, there also has been an effort to rethink how the job is done.

Change of plans: Welcome to Austin

The new Austin center actually began as a Bay Area center. The intention was to establish the center in a city like San Francisco, which would help recruit top-tier multilingual talent, crucial for countering Internet trolls since more than 80% of X’s user base lives outside the U.S., a source familiar with trust and safety under Musk told Fortune. Given the nuances of individual languages, and the idioms and expressions unique to each one, the idea was that someone familiar with a specific language or culture could, for example, better distinguish a joke from a threat than could a low-paid generalist contract worker with no specialized skills.

“They actually started this by hiring people in the Bay Area to test their quality level and whether or not it would work better than having it outsourced,” the former staffer said. “[X] hired a small team of people and tested it out and their ability to make accurate decisions.” The plan called for starting with 75 staffers and eventually scaling up to a 500-person team if it delivered results.

However, Musk at that time leaned toward a more cost-effective location, favoring Austin, because he was certain of his ability to attract, and potentially relocate, individuals proficient in various languages. The change has added a few wrinkles to the project.

“Having to hire hundreds of people and get them up and running and trained and all that is a roughly two to three month process,” the former X staffer explained. “Then you start training and so you know, realistically you’re looking at three, four or five months before you get a team in place. That assumes the job market is awesome, right? And you don’t have to relocate people and all of that fun stuff.”

According to LinkedIn, a dozen recruits have joined X as “trust and safety agents” in Austin over the last month, and most appear to have moved from Accenture, a firm that provides content moderation contractors to Internet companies. It’s not clear if X has a contract-to-hire plan in place with Accenture (whereby workers retained by consulting firms like Accenture are given full-time roles at a client company when there’s a good fit), but the source confirmed that X has used Accenture’s contracting services in the past.

The trouble with enforcing rules that are constantly shifting

There are a lot of questions about what exactly the Austin team will focus on. Will they focus on content involving only minors, or only in the U.S.? Will they focus on individual posts, or conduct investigations into sexual exploitation?

“100 people in Austin would be one tiny node in what needs to be a global content moderation network,” former Twitter trust and safety council member Anne Collier told Fortune. “100 people in Austin, I wish them luck.”

Whatever their task, social media moderation experts agree that the company will need to make a significant investment in AI tools for the team to be most effective.

Facebook, for example, employed about 15,000 moderators globally in 2020 when it announced it was “marrying AI and human reviewers to make less total mistakes,” The Verge reported at the time. Snap operates similarly, stating in a blog post that it uses “a combination of automated tools and human review to moderate.”

According to the former X insider, the company has experimented with AI moderation. And Musk’s latest push into artificial intelligence technology through X.AI, a one-year-old startup that’s developed its own large language model, could provide a valuable resource for the team of human moderators.

An AI system “can tell you in about roughly three seconds for each of those tweets, whether they’re in policy or out of policy, and by the way, they’re at the accuracy levels about 98% whereas with human moderators, no company has better accuracy level than like 65%,” the source said. “You kind of want to see at the same time in parallel what you can do with AI versus just humans and so I think they’re gonna see what that right balance is.”
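The division of labor the insider describes, a fast AI pass over every post with humans reserved for whatever the model cannot settle confidently, mirrors the Facebook and Snap approaches above. Here is a minimal Python sketch of that triage pattern; the classifier stub, the 0.9 confidence floor, and the label names are assumptions for illustration, not X's actual implementation.

```python
import random
from dataclasses import dataclass

@dataclass
class Verdict:
    in_policy: bool
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

def classify(post_text: str) -> Verdict:
    """Stub for an LLM policy classifier. A real pipeline would call a model
    (per the article, possibly one built on X.AI's LLM) and parse its output."""
    score = random.random()  # placeholder for a real model score
    return Verdict(in_policy=score > 0.5, confidence=abs(score - 0.5) * 2)

def triage(post_text: str, review_queue: list) -> str:
    """Auto-action only high-confidence verdicts; route the rest to humans.
    The 98%/65% accuracy figures quoted above are one insider's claims, so
    the confidence floor keeps ambiguous cases in front of people."""
    CONFIDENCE_FLOOR = 0.9  # assumed threshold, not a documented X setting
    verdict = classify(post_text)
    if verdict.confidence >= CONFIDENCE_FLOOR:
        return "keep" if verdict.in_policy else "remove"
    review_queue.append(post_text)
    return "escalated_to_human"

queue: list = []
print(triage("example post", queue), len(queue))
```

The design choice worth noting is that the threshold, not the model, decides how much work lands on the human team: lowering the floor shifts volume from automation to people, which is exactly the balance the insider says X is still looking for.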

But no matter how good the AI tools and the moderators, the performance is only as good as the policy that drives it, and that’s an area where X has struggled under Musk.

Policies need to be flexible enough to adapt to cultural contexts, but they also need to be sufficiently predictable for everyone to understand what the rules are, the source familiar with trust and safety at X explained. This is especially important when moderating content on a large platform “where hundreds or even thousands of moderators have to understand and interpret a stable set of rules. You can’t implement rules consistently and accurately if they’re constantly shifting,” the person said.
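One way engineering teams make a rule set “stable” in the sense the source describes is to version it as data, so every moderator and every automated system enforces the same snapshot and every decision is auditable against the rules in force at the time. A small Python sketch of that idea; the rule names and structure are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    """An immutable, versioned policy snapshot: changing the rules means
    publishing a new version, never mutating the one reviewers trained on."""
    version: str
    rules: tuple  # (rule_id, description) pairs

POLICY_V1 = Policy(
    version="2024-01",
    rules=(
        ("hateful_conduct", "No targeting people with slurs or dehumanizing speech."),
        ("synthetic_media", "No deceptive AI-generated depictions of real people."),
    ),
)

def decide(post_flags: set, policy: Policy) -> tuple:
    """Record the policy version alongside each decision, so audits can
    distinguish rule changes from reviewer error."""
    violated = [rid for rid, _ in policy.rules if rid in post_flags]
    action = "remove" if violated else "keep"
    return action, policy.version, violated

print(decide({"synthetic_media"}, POLICY_V1))  # ('remove', '2024-01', ['synthetic_media'])
```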

The loosening of the rules, and the resulting lack of clarity, has been one constant at X under Musk’s stewardship.

After Musk’s takeover, he went on to reinstate a slate of accounts that had been banned for breaking the platform’s policies: Rep. Marjorie Taylor Greene, who violated COVID-19 misinformation policies; the Babylon Bee, which posted a transphobic story that violated Twitter’s hateful conduct policy; and Andrew Tate (banned from Facebook, Instagram, and TikTok as well), who was banned for saying that women should bear “some responsibility” for being sexually assaulted.

Some outlets speculated there was a link between the exit of Ella Irwin, the final head of trust and safety during Musk’s tenure, and Musk’s criticism of the team’s decision to remove Matt Walsh’s transphobic “What is a Woman?” documentary, which violated X’s written policy. Despite the violation, Musk insisted the documentary stay up.

“It’s not obvious to me that X moderates in accordance with policies at all anymore. The site’s rules as published online seem to be a pretextual smokescreen to mask its owner ultimately calling the shots in whatever way he sees it,” the source familiar with X moderation added.

Julie Inman Grant, a former Twitter trust and safety council member who is now suing the company for lack of transparency over CSAM, is more blunt in her assessment: “You cannot just put your finger back in the dike to stem a tsunami of child sexual exposure – or a flood of deepfake porn proliferating the platform,” she said.

“In my experience at Twitter from 2014 to 2016, it took literally years to build this expertise – and it will take much more than that to make a meaningful change to a platform that has become so toxic it is almost unrecognizable.”
