'Sapiens' author: AI could usurp humans as the world's dominant power within a few years

Chloe Taylor 2023-09-17
The situation humanity now faces, he argues, is that the threat is coming not from outer space, but from California.

Yuval Noah Harari, author of Sapiens, delivers a dire warning about the potential threat of artificial intelligence at an event in Tel Aviv, Israel, in 2023. IMAGE CREDIT: EYAL WARSHAVSKY—SOPA IMAGES/LIGHTROCKET/GETTY IMAGES

Billions of dollars are being poured into the development of AI, with the technology being hailed as a “revolution”—but famed historian and philosopher Yuval Noah Harari sees it as an “alien species” that could trigger humanity’s extinction.

“AI is fundamentally different from anything we’ve seen in history, from any other invention, whether it’s nuclear weapons or the printing press,” Harari—the bestselling author of Homo Deus and Sapiens: A Brief History of Humankind—told an audience at CogX Festival in London on September 12.

“It’s the first tool in history that can make decisions by itself. Atom bombs could not make decisions. The decision to bomb Hiroshima was taken by a human.”

The risk that comes with this ability to think for itself, Harari said, is that superintelligent machines could ultimately end up usurping the human race as the world’s dominant power.

“Potentially we are talking about the end of human history—the end of the period dominated by human beings,” he warned. “It’s very likely that in the next few years, it will eat up all of human culture, [everything we’ve achieved] since the Stone Age, and start spewing out a new culture coming from an alien intelligence.”

This raises questions, according to Harari, about what the technology will do not just to the physical world around us, but also to things like psychology and religion.

“In certain ways, AI can be more creative [than people],” he argued. “In the end, our creativity is limited by organic biology. This is a nonorganic intelligence. It’s really like an alien intelligence.

“If I said an alien species is coming in five years, maybe they will be nice, maybe they will cure cancer, but they will take our power to control the world from us, people would be terrified.

“This is the situation we’re in, but instead of coming from outer space, [the threat is] coming from California.”

AI evolution

The phenomenal rise of OpenAI’s generative AI chatbot, ChatGPT, over the past year has been a catalyst for major investment in the space, with Big Tech entering into a race to develop the most cutting-edge artificial intelligence systems in the world.

But it’s the pace of development in the AI space, according to Harari—whose written works have examined humanity’s past and future—that “makes it so scary.”

“If you compare it to organic evolution, AI now is like [an amoeba]—in organic evolution, it took them hundreds of thousands of years to become dinosaurs,” he told the crowd at CogX Festival. “With AI, the amoeba could become a T. rex within 10 or 20 years. Part of the problem is we don’t have time to adapt. Humans are amazingly adaptable beings…but it takes time, and we don’t have this time.”

Humanity’s next “huge and terrible experiment”?

Conceding that previous technological innovations, such as the steam engine and airplanes, had sparked similar warnings about human safety and that “in the end it was okay,” Harari insisted when it came to AI, “in the end is not good enough.”

“We are not good with new technology, we tend to make big mistakes, we experiment,” he said.

During the Industrial Revolution, for example, mankind had made "some terrible mistakes," Harari noted, while European imperialism and Nazism had also been "huge and terrible experiments that cost the lives of billions of people."

“It took us a century, a century and a half, of all these failed experiments to somehow get it right,” he argued. “Maybe we don’t survive it this time. Even if we do, think about how many hundreds of millions of lives will be destroyed in the process.”

Divisive technology

As AI becomes more and more ubiquitous, experts are divided on whether the tech will deliver a renaissance or doomsday.

At the invitation-only Yale CEO Summit this summer, almost half of the chief executives surveyed at the event said they believed AI has the potential to destroy humanity within the next five to 10 years.

Back in March, 1,100 prominent technologists and AI researchers—including Elon Musk and Apple cofounder Steve Wozniak—signed an open letter calling for a six-month pause on the development of powerful AI systems. They pointed to the possibility of these systems already being on a path to superintelligence that could threaten human civilization.

Tesla and SpaceX cofounder Musk has separately said the tech will hit people “like an asteroid” and warned there is a chance it will “go Terminator.” He has since launched his own AI firm, xAI, in what he says is a bid to “understand the universe” and prevent the extinction of mankind.

Not everyone is on board with Musk’s view that superintelligent machines could wipe out humanity, however.

Last month, more than 1,300 experts came together to calm anxiety around AI creating a horde of "evil robot overlords," while one of the three so-called Godfathers of AI has labeled concerns around the tech becoming an existential threat "preposterously ridiculous."

Top Meta executive Nick Clegg also attempted to quell concerns about the technology in a recent interview, insisting that large language models in their current form are “quite stupid” and certainly not smart enough yet to save or destroy civilization.

“Time is of the essence”

Despite his own dire warnings about AI, Harari said there was still time for something to be done to prevent the worst predictions from becoming a reality.

“We have a few years, I don’t know how many—five, 10, 30—where we are still in the driver’s seat before AI pushes us to the back seat,” he said. “We should use these years very carefully.”

He suggested three practical steps that could be taken to mitigate the risks around AI: Don’t give bots freedom of speech, don’t let artificial intelligence masquerade as humans, and tax major investments into AI to fund regulation and institutions that can keep the technology under control.

“There are a lot of people trying to push these and other initiatives forward,” he said. “I hope we do [implement them] as soon as possible, because time is of the essence.”

He also urged those working in the AI space to consider whether unleashing their innovations on the world was really in the planet’s best interests.

“We can’t just stop the development of technology, but we need to make the distinction between development and deployment,” he said. “Just because you develop it, doesn’t mean you have to deploy it.”
