Musk Is Wrong Again: Artificial Intelligence Is Not More Dangerous Than North Korea's Nuclear Weapons

Michael L. Littman, August 20, 2017
Someday we may inhabit a world filled with intelligent machines. But we will surely develop alongside them.

Elon Musk's recent remark on Twitter that artificial intelligence (AI) is more dangerous than North Korea is based on his bedrock belief in the power of thought. But this philosophy has a dark side.

If you believe that a good idea can take over the world and if you conjecture that computers can or will have ideas, then you have to consider the possibility that computers may one day take over the world. This logic has taken root in Musk's mind and, as someone who turns ideas into action for a living, he wants to make sure you get on board too. But he’s wrong, and you shouldn’t believe his apocalyptic warnings.

Here's the story Musk wants you to know but hasn't been able to boil down to a single tweet. By dint of clever ideas, hard work, and significant investment, computers are getting faster and more capable. In the last few years, some famously hard computational problems have been mastered, including identifying objects in images, recognizing the words that people say, and outsmarting human champions in games like Go. If machine learning researchers can create programs that can replace captioners, transcriptionists, and board game masters, maybe it won't be long before they can replace themselves. And, once computer programs are in the business of redesigning themselves, each time they make themselves better, they make themselves better at making themselves better.

The resulting “intelligence explosion” would leave computers in a position of power, where they, not humans, control our future. Their objectives, even if benign when the machines were young, could be threatening to our very existence in the hands of an intellect dwarfing our own. That's why Musk thinks this issue is so much bigger than war with North Korea. The loss of a handful of major cities wouldn't be permanent, whereas human extinction by a system seeking to improve its own capabilities by turning us into computational components in its mega-brain—that would be forever.

Musk’s comparison, however, grossly overestimates the likelihood of an intelligence explosion. His primary mistake is in extrapolating from recent successes of machine learning the eventual development of general intelligence. But machine learning is not as dangerous as it might look on the surface.

For example, you may see a machine perform a task that appears to be superhuman and immediately be impressed. When people learn to understand speech or play games, they do so in the context of the full range of human experiences. Thus when you see something that can respond to questions or beat you soundly in a board game, it is not unreasonable to infer that it also possesses a range of other human capacities. But that's not how these systems work.

In a nutshell, here's the methodology that has been successful for building advanced systems of late: First, people decide what problem they want to solve and they express it in the form of a piece of code called an objective function—a way for the system to score itself on the task. They then assemble perhaps millions of examples of precisely the kind of behavior they want their system to exhibit. After that they design the structure of their AI system and tune it to maximize the objective function through a combination of human insight and powerful optimization algorithms.
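To make that recipe concrete, here is a minimal sketch in Python of the three steps on a toy binary-classification task. The data, the model, and every number in it are illustrative placeholders, not any particular system referred to above.

```python
# A toy illustration of the recipe above: pick a task, write an objective
# function, gather labeled examples, then optimize a model against it.
# Everything here is an illustrative stand-in, not a real production system.
import numpy as np

rng = np.random.default_rng(0)

# Step 1: express the task as an objective function the system can score
# itself on (here, cross-entropy for a binary classification task).
def objective(w, X, y):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

# Step 2: assemble examples of exactly the behavior the system should show
# (real systems use millions; 1,000 synthetic points are enough for a toy).
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Step 3: choose a structure (a linear model) and tune its parameters with
# an optimization algorithm so that the objective improves.
w = np.zeros(2)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.5 * (X.T @ (p - y) / len(y))   # gradient-descent step

print("objective after training:", objective(w, X, y))
# The trained model is good at this one narrow task and nothing else;
# a different task means repeating the whole recipe from scratch.
```

The final score is the only sense in which such a system "knows" it has done well: its performance on the single objective it was given.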

At the end of this process, they get a system that, often, can exhibit superhuman performance. But the performance is on the particular task that was selected at the beginning. If you want the system to do something else, you probably will need to start the whole process over from scratch. Moreover, the game of life does not have a clear objective function—current methodologies are not suited to creating a broadly intelligent machine.

Someday we may inhabit a world with intelligent machines. But we will develop together and will have a billion decisions to make that shape how that world develops. We shouldn't let our fears prevent us from moving forward technologically.

Michael L. Littman is a professor of computer science at Brown University and co-director of Brown's Humanity Centered Robotics Initiative.
