AI Taking Over the World? Not So Fast

Vivek Wadhwa | Fortune China | 2018-10-10

There is no doubt that AI has incredible potential. But the technology is still in its infancy, and there are no AI superpowers.


Translated by Charlie; reviewed by Xia Lin.

To judge by the news headlines, it would be easy to believe that artificial intelligence (AI) is about to take over the world. Kai-Fu Lee, a Chinese venture capitalist, says that AI will soon create tens of trillions of dollars of wealth and claims China and the U.S. are the two AI superpowers.

There is no doubt that AI has incredible potential. But the technology is still in its infancy; there are no AI superpowers. The race to implement AI has hardly begun, particularly in business. As well, the most advanced AI tools are open source, which means that everyone has access to them.

Tech companies are generating hype with cool demonstrations of AI, such as Google’s AlphaGo Zero, which learned one of the world’s most difficult board games in three days and could easily defeat its top-ranked players. Several companies are claiming breakthroughs with self-driving vehicles. But don’t be fooled: The games are just special cases, and the self-driving cars are still on their training wheels.

AlphaGo, the predecessor of AlphaGo Zero, developed its intelligence through self-play, a technique that pits two copies of an AI system against each other so that they can learn from each other. The trick was that before the systems battled each other, they received a lot of coaching. And, more importantly, their problems and outcomes were well defined.
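The "two systems learning from each other" setup described above can be sketched, far below AlphaGo's scale, as tabular self-play on a toy game. Everything here is illustrative and not from the article: the game (one-pile Nim, where players alternately take 1–3 stones and taking the last stone wins), the Q-table, and all the constants are invented to show the shape of the technique, not any real system's implementation.

```python
import random

random.seed(0)
Q = {}  # Q[(stones, move)] -> estimated value for the player about to move

def moves(n):
    # Legal moves: take 1, 2, or 3 stones, but no more than remain.
    return [m for m in (1, 2, 3) if m <= n]

def best(n, eps):
    # Epsilon-greedy choice: mostly exploit the Q-table, sometimes explore.
    if random.random() < eps:
        return random.choice(moves(n))
    return max(moves(n), key=lambda m: Q.get((n, m), 0.0))

# Self-play loop: the same Q-table plays both sides of 20,000 games,
# so the "opponent" improves exactly as fast as the "player" does.
for _ in range(20000):
    n, history = 15, []
    while n > 0:
        m = best(n, eps=0.2)
        history.append((n, m))
        n -= m
    # The player who took the last stone won; walk back through the game,
    # flipping the outcome's sign at each ply, and nudge each Q value
    # toward the result (simple Monte Carlo value update).
    reward = 1.0
    for state, move in reversed(history):
        old = Q.get((state, move), 0.0)
        Q[(state, move)] = old + 0.1 * (reward - old)
        reward = -reward
```

With no coaching beyond the game's well-defined rules and outcome, the greedy policy rediscovers Nim's known optimal strategy: leave your opponent a multiple of four stones.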

Unlike board games and arcade games, business systems don’t have defined outcomes and rules. They work with very limited datasets, often disjointed and messy. The computers also don’t do critical business analysis; it’s the job of humans to comprehend information that the systems gather and to decide what to do with it. Humans can deal with uncertainty and doubt; AI cannot. Google’s Waymo self-driving cars have collectively driven over 9 million miles, yet are nowhere near ready for release. Tesla’s Autopilot, after gathering 1.5 billion miles’ worth of data, won’t even stop at traffic lights.

Today’s AI systems do their best to reproduce the functioning of the human brain’s neural networks, but their emulations are very limited. They use a technique called deep learning: After you tell an AI exactly what you want it to learn and provide it with clearly labeled examples, it analyzes the patterns in those data and stores them for future application. The accuracy of its patterns depends on completeness of data, so the more examples you give it, the more useful it becomes.
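The "labeled examples in, stored patterns out" loop the paragraph describes can be reduced to its smallest case: a single-neuron network trained by gradient descent. The data and model below are invented for illustration (a toy two-dimensional dataset labeled by a known rule), not taken from any real system.

```python
import numpy as np

# Toy "clearly labeled examples": points above the line y = x are class 1.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 1] > X[:, 0]).astype(float)

# One-neuron "deep learning": logistic regression trained by gradient descent.
w = np.zeros(2)
b = 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probability of class 1
    grad_w = X.T @ (p - y) / len(y)      # gradient of the log loss w.r.t. w
    grad_b = np.mean(p - y)              # gradient w.r.t. the bias
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

# The stored "pattern" is nothing but w and b; measure how well it fits.
pred = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = float(np.mean(pred == y))
```

The model ends up accurate on exactly the kind of examples it was shown, and on nothing else — which is the point of the paragraphs that follow.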

Herein lies a problem, though: An AI is only as good as the data it receives, and is able to interpret them only within the narrow confines of the supplied context. It doesn’t “understand” what it has analyzed, so it is unable to apply its analysis to scenarios in other contexts. And it can’t distinguish causation from correlation.
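The causation-versus-correlation point can be made concrete with a toy dataset (entirely made up for illustration): two quantities that never influence each other correlate strongly because a third variable drives both, and a pattern-matcher that sees only the first two cannot tell the difference.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical confounder: hot days drive both ice-cream sales and sunburns.
temperature = rng.uniform(15, 35, size=1000)
ice_cream = 2.0 * temperature + rng.normal(0, 2, size=1000)
sunburns = 0.5 * temperature + rng.normal(0, 2, size=1000)

# The pattern-matcher's view: a strong correlation between the two...
r = float(np.corrcoef(ice_cream, sunburns)[0, 1])

# ...which largely vanishes once the confounder is held (roughly) fixed,
# here by restricting to a narrow temperature band.
mask = np.abs(temperature - 25) < 1.0
r_controlled = float(np.corrcoef(ice_cream[mask], sunburns[mask])[0, 1])
```

Knowing which variable to control for is exactly the kind of causal judgment that, as the paragraph says, the pattern-matching system itself cannot supply.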

The larger issue with this form of AI is that what it has learned remains a mystery: a set of indefinable responses to data. Once a neural network has been trained, not even its designer knows exactly how it is doing what it does. They call this the black box of AI.

Businesses can’t afford to have their systems make unexplained decisions: they face regulatory requirements and reputational concerns, and they must be able to understand, explain, and justify every decision they make.

Then there is the issue of reliability. Airlines are installing AI-based facial-recognition systems, and China is basing its national surveillance network on the same technology. AI is being used for marketing and credit analysis and to control cars, drones, and robots. It is being trained to perform medical data analysis and to assist or replace human doctors. The problem is that, in all such uses, AI can be fooled.

Google published a paper last December showing that it could trick AI systems into recognizing a banana as a toaster. Researchers at the Indian Institute of Science have just demonstrated that they could confuse almost any AI system, without even relying, as Google did, on knowledge of what the system had used as a basis for learning. With AI, security and privacy are an afterthought, just as they were early in the development of computers and the Internet.
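A minimal sketch of how such fooling works, in the style of a fast-gradient attack: nudge every input dimension a tiny step in the direction that most lowers the classifier's score. The model below is a linear stand-in for a real network, and the weights, inputs, and "banana"/"toaster" labels are all invented for illustration; real attacks target deep networks, but the mechanism is the same.

```python
import numpy as np

# Hypothetical stand-in for a trained classifier: a linear model whose
# parameters play the role of a real network's learned weights.
w = np.array([1.0, -2.0, 0.5, 3.0])
b = -0.5

def predict(x):
    # 1 = "banana", 0 = "toaster" (labels are purely illustrative).
    return 1 if x @ w + b > 0 else 0

x = np.array([0.9, 0.1, 0.4, 0.2])   # confidently scored as "banana"

# Fast-gradient-style attack: step each input by epsilon against the sign
# of the gradient of the score (for a linear model, the gradient is just w).
epsilon = 0.2
x_adv = x - epsilon * np.sign(w)

label_clean = predict(x)
label_adv = predict(x_adv)
max_change = float(np.max(np.abs(x_adv - x)))  # no input moved more than epsilon
```

A change of at most 0.2 per dimension, imperceptible in a real image, is enough to flip the label, because the small per-dimension nudges all push the score in the same direction and add up.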

Leading AI companies have handed over the keys to their kingdoms by making their tools open source. Software used to be considered a trade secret, but developers realized that having others look at and build on their code could lead to great improvements in it. Microsoft, Google, and Facebook have released their AI code to the public for free to explore, adapt, and improve. China’s Baidu has also made its self-driving software, Apollo, available as open source.

Software’s real value lies in its implementation: what you do with it. Just as China built its tech companies and India created a $160 billion IT services industry on top of tools created by Silicon Valley, anyone can use openly available AI tools to build sophisticated applications. Innovation has now globalized, creating a level playing field—especially in AI.

Vivek Wadhwa is a distinguished fellow at Carnegie Mellon University’s College of Engineering. He is the co-author of Your Happiness Was Hacked: Why Tech Is Winning the Battle to Control Your Brain—and How to Fight Back.
