A.I. Bias Isn't the Problem. The Problem Is American Society


Alex Salkever, Vivek Wadhwa 2019-04-17
While A.I. has incredible potential to improve our lives, the truth is that it is only capable of reflecting our societal problems right back at us.

Protesters outside a Starbucks in Philadelphia on April 16, 2018, two days after two black men were arrested there. A.I. bias is the result of machines learning human prejudice; only people can fix the problem. Photo credit: Bastiaan Slabbers—NurPhoto via Getty Images


Last Wednesday, Sens. Ron Wyden and Cory Booker, along with Rep. Yvette Clarke, introduced the Algorithmic Accountability Act, indicating policymakers’ increasing concern that artificial intelligence is magnifying human bias in tools such as facial recognition, self-driving cars, customer service, marketing, and content moderation.

While A.I. has incredible potential to improve our lives, the truth is that it is only capable of reflecting our societal problems right back at us. And because of that, we can’t trust it to make important decisions that are susceptible to human prejudice.

Even the most enlightened of humans have deep-seated biases. Difficult to identify, they are even harder to correct. Today’s A.I. learns by encoding patterns from the data that it feeds on. If you build an A.I. system designed to identify who is going to be a future convict, for example, the only data you can rely on is past data. Since the percentage of blacks in prison is higher and the percentage of whites in prison is lower than their respective shares of the U.S. population, a naive A.I. system will infer that a black person is more likely than a white person to commit a crime.
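
To make the mechanism concrete, here is a minimal sketch in Python (entirely synthetic records and invented group labels, not any real system or dataset) of how a naive model trained on skewed historical data simply reproduces the disparity encoded in it:

```python
# A minimal sketch (synthetic data, hypothetical labels) showing how a "naive"
# model trained on historically skewed records reproduces the disparity baked
# into those records, rather than any underlying truth.
from collections import defaultdict

# Toy historical records: each entry is (group, was_convicted). The imbalance
# mirrors skewed incarceration rates, not differing propensity to offend.
history = [("A", 1)] * 30 + [("A", 0)] * 70 + [("B", 1)] * 10 + [("B", 0)] * 90

# "Training": the naive model just memorizes the conviction rate per group.
counts, convictions = defaultdict(int), defaultdict(int)
for group, convicted in history:
    counts[group] += 1
    convictions[group] += convicted

def predicted_risk(group: str) -> float:
    """Score a new person purely by their group's historical conviction rate."""
    return convictions[group] / counts[group]

print(predicted_risk("A"))  # 0.3 -- flagged as "higher risk"
print(predicted_risk("B"))  # 0.1 -- the model has learned the bias, not the truth
```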

Such a system is unable to take into account all of the systemic biases that have ensured blacks’ relatively higher incarceration rates. And we currently don’t have data to train A.I. systems other than data that, though superficially objective, inherently expresses societal norms and biases.

Finding better data will be exceptionally hard. Even if we programmed an A.I. system to ignore race and use different measures when predicting future criminality, the results would likely come out the same. Consider the other attributes convicts might share: living in particular neighborhoods, coming from single-parent families, or not graduating from high school. All of these categories would essentially act as proxies for race, since machine learning cannot account for all their different interlinkages.
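
The proxy effect can be illustrated the same way. The sketch below (again synthetic data, with an invented "neighborhood" feature standing in for the attributes above) drops race from the inputs entirely, yet a model trained on the correlated proxy still assigns one group markedly higher risk scores:

```python
# A minimal sketch (synthetic data, hypothetical feature names) of proxy
# discrimination: race is never shown to the model, but a correlated feature
# such as neighborhood lets it reconstruct the same disparity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
race = rng.integers(0, 2, n)                                   # hidden from the model
neighborhood = np.where(rng.random(n) < 0.8, race, 1 - race)   # 80% correlated proxy
# The historical "convicted" label is itself a product of biased enforcement:
label = (rng.random(n) < np.where(race == 1, 0.30, 0.10)).astype(int)

X = neighborhood.reshape(-1, 1)            # race column deliberately excluded
model = LogisticRegression().fit(X, label)

risk = model.predict_proba(X)[:, 1]
print("mean predicted risk, group 0:", risk[race == 0].mean())
print("mean predicted risk, group 1:", risk[race == 1].mean())  # still markedly higher
```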

In this way, the current generation of artificial intelligence is smart like a savant, but has nothing close to the discriminating intelligence of a human.

A.I. shines in performing tasks that match patterns in order to obtain objective outcomes. Playing Go, driving a car on a street, and identifying a cancer lesion in a mammogram are excellent examples of narrow A.I. These systems can be incredibly helpful extensions of how humans work and are already surpassing us in discrete parts of jobs. A tumor is a tumor, regardless of whether it is in the body of an Asian or a Caucasian. Able to base their judgments on objectively measurable data, these systems are readily correctable if and when interpretations of those data are subject to overhaul.

But, although an A.I. machine may best a human radiologist in spotting cancer, it will not, for many years to come, replicate the wisdom and perspective of the best human radiologists.

This is where A.I. presents its greatest risk: in softer tasks that may have objective outcomes but incorporate what we would normally call judgment. Some such tasks exercise much influence over people’s lives. Granting a mortgage, admitting a child to a university, awarding a felon parole, or deciding whether children should be separated from their birth parents due to suspicions of abuse fall into this category. Such judgments are highly susceptible to human biases—but they are biases that only humans themselves have the ability to detect.

Another failure of A.I. lies in its analysis and use of content. Services like YouTube have created algorithms to boost user engagement that identify the stickiest, most engaging content and promote it. Unfortunately, these algorithms were designed without a circuit breaker, and any human assessment of whether the toxic content promoted by these algorithms was good for society was a too-late afterthought.
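
As an illustration only (this is not YouTube's actual ranking system; the fields and threshold are invented), the sketch below contrasts a ranker that optimizes engagement alone with one that adds the kind of circuit breaker described above:

```python
# A minimal sketch (hypothetical fields and an arbitrary cutoff) contrasting a
# pure engagement ranker with one that withholds likely-harmful content.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    watch_minutes: float   # engagement signal the ranker optimizes
    toxicity: float        # 0..1 score from a separate, assumed classifier

catalog = [
    Video("cat compilation", 2.1, 0.02),
    Video("outrage conspiracy clip", 9.4, 0.87),
    Video("cooking tutorial", 3.0, 0.01),
]

def rank_engagement_only(videos):
    """Promote whatever is stickiest, with no check on what it is."""
    return sorted(videos, key=lambda v: v.watch_minutes, reverse=True)

def rank_with_circuit_breaker(videos, toxicity_cutoff=0.7):
    """Same ranker, but content over the cutoff is never promoted."""
    eligible = [v for v in videos if v.toxicity < toxicity_cutoff]
    return sorted(eligible, key=lambda v: v.watch_minutes, reverse=True)

print([v.title for v in rank_engagement_only(catalog)])       # conspiracy clip ranks first
print([v.title for v in rank_with_circuit_breaker(catalog)])  # it is never surfaced
```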

Growing awareness of these risks, however, has not slowed the rapid weaving of algorithmic decision-making into the fabric of society. Today, the tech giants deploy A.I. to influence what we see, hear, buy—and even feel. The shift of the burden of discernment from people to machines is rapidly spreading into many other nooks—but under the guise of “optimization.”

It’s time to press pause. Perhaps in the future we can create systems that do an excellent job of utilizing data about our lives while excising bias. Until then, A.I. left unchecked is more of a risk than a benefit to society.

Alex Salkever is a consultant and technology executive researching the frontiers of technology. Vivek Wadhwa is a distinguished fellow at Carnegie Mellon University at Silicon Valley and Harvard Law School. Together, they authored The Driver in the Driverless Car: How Our Technology Choices Will Create the Future, which will be available in paperback in June 2019.
