America's Dilemma: Should Websites Be Allowed to Moderate Their Own Content?


Kir Nuthi 2021-09-23
Lawmakers are trying to thwart alleged gatekeeping by some of the most popular American companies, but in doing so, they have forgotten the impact this would have on digital free speech and online safety.

Users feel safer online when content moderation rules are clear. Image credit: Eric Lafforgue/Art in All of Us—Getty Images

By pushing antitrust policies that will blunt America's competitive edge, our lawmakers are mistakenly also making the internet less safe for us all.

This summer, the House Judiciary's Subcommittee on Antitrust marked up and pushed through a flurry of bills designed to overhaul American competition policy, and more specifically, the consumer welfare standard. Our lawmakers were trying to thwart alleged gatekeeping by some of the most popular American companies, but in doing so, they forgot about the impact this would have on our digital free speech and online safety.

They also don't seem to realize that they're fighting against themselves. When these very lawmakers talk about Section 230, a law that, combined with the First Amendment, helps promote local news and views across the country, they want more regulation to protect Americans from harmful speech online. Yet when they turn around to talk about antitrust regulation, they don't want any regulation of content at all. That leads to a confusing and burdensome regulatory regime that will hurt businesses and the public's free speech online.

In fact, one of the bills in the House Judiciary antitrust package would make content moderation decisions potential antitrust violations. This bill, the American Choice and Innovation Online Act, could force websites to either host neo-Nazi leaders alongside activists fighting for equal rights and social change, take down harmful content but risk committing an antitrust violation for not treating bad actors the same as typical users, or eliminate user-generated content altogether.

The bill will make it harder for Americans to feel safe on the websites and apps they use every day. Not only is this an overreach of antitrust regulation, but it is also a drastic content moderation decision that will turn the internet into either a cesspool we won't want to wade through or a glorified media network where we, the public, can't influence the conversation.

Like the common carriage proposals gaining traction, this bill takes the principle of nondiscrimination and uses it to prevent companies from disadvantaging their competitors. However, the bill's principle of nondiscrimination also makes it impossible for online services of all kinds to restrict user-generated content. It renders community guidelines written to protect vulnerable people useless.

Nondiscrimination from an antitrust perspective sounds simple enough: don't discriminate between your own products and services and those of your competitors. However, banning discrimination from a content moderation perspective ruins the precarious balance between online safety and free speech. It means that the average social media user is no different from, and cannot be treated differently from, bad actors whose content threatens public health, national security, or the safety of vulnerable people.

As desirable as it may be, a potential compromise that balances content moderation and antitrust concerns doesn't seem possible. The fact that Congress and academia have not developed a workable "middle ground" suggests it really is a trade-off: platforms can curate content as they see fit, or Congress can try to encourage them to curate in a certain way, which then risks judicial intervention under the First Amendment. In this sense, efforts at nondiscrimination on either front will inevitably open websites up to the threat of litigation on the other.

This leaves social media companies with two options: carry all users and user-generated content, inching the internet toward the worst humanity can offer, or take away the features that allowed user-generated content to thrive in the first place. Given that horrible content is simply bad for business, most companies will turn toward structured content. A morning scroll through Instagram won't just be cute dogs and updates from family and friends; it could easily become a fabricated, inauthentic way to flip through channels full of corporate-sponsored, pre-vetted content. This is not an internet on which the #MeToo movement or Black Lives Matter could thrive, or even survive.

Clear content moderation rules help Americans feel safe online. Parents, marginalized communities, and entrepreneurs don't want to have to endure harassment and abuse in order to seek out support, resources, and opportunities.

Social media relies on contextualizing different types of content, removing bad actors, and promoting useful information to its users. Content moderation is at the core of that goal, ensuring websites can balance free expression and online safety to maximize both. Otherwise, our friends and neighbors would have to wade through expletives, violence, and sexual content just to connect with their communities.

While some websites thrive on shock value and causing offense, most succeed because their users do not want to face the onslaught of horrible things humanity is capable of. What is going to happen to the internet when the protections allowing websites to govern their own content go away? I don't know about you, but if our digital ecosystem devolves into a space where I can't easily share life updates, or a cesspool of horrible content, I might just unplug. (Fortune China)

Kir Nuthi is an associate contributor at Young Voices writing on issues related to digital free speech and free enterprise.

Translator: Feng Feng

Reviewer: Xia Lin


