OpenAI's CEO credits Elon Musk with convincing him of the importance of deep tech investing

JEREMY KAHN 2023-05-28
OpenAI cofounder and CEO Sam Altman drew both protesters and selfie-seeking admirers to a packed talk in London. Image credit: WIN MCNAMEE—GETTY IMAGES

The line to enter the 985-seat basement auditorium at University College London where OpenAI cofounder and CEO Sam Altman is about to speak stretches out the door, snakes up several flights of stairs, carries on into the street, and then meanders most of the way down a city block. It inches forward, past a half-dozen young men holding signs calling for OpenAI to abandon efforts to develop artificial general intelligence—or A.I. systems that are as capable as humans at most cognitive tasks. One protester, speaking into a megaphone, accuses Altman of having a Messiah complex and risking the destruction of humanity for the sake of his ego.

Messiah might be taking it a bit far. But inside the hall, Altman received a rock star reception. After his talk, he was mobbed by admirers asking him to pose for selfies and soliciting advice on the best way for a startup to build a “moat.” “Is this normal?” one incredulous reporter asked an OpenAI press handler as we stood in the tight scrum around Altman. “It’s been like this pretty much everywhere we’ve been on this trip,” the spokesperson said.

Altman is currently on an OpenAI “world tour”—visiting cities from Rio and Lagos to Berlin and Tokyo—to talk to entrepreneurs, developers, and students about OpenAI’s technology and the potential impact of A.I. more broadly. Altman has done this kind of world trip before. But this year, after the viral popularity of A.I.-powered chatbot ChatGPT, which has become the fastest-growing consumer software product in history, it has the feeling of a victory lap. Altman is also meeting with key government leaders. Following his UCL appearance, he was off to meet U.K. Prime Minister Rishi Sunak for dinner, and he will be meeting with European Union officials in Brussels.

What did we learn from Altman’s talk? Among other things, that he credits Elon Musk with convincing him of the importance of deep tech investing, that he thinks advanced A.I. will reduce global inequality, that he equates educators’ fears of OpenAI’s ChatGPT with earlier generations’ hand-wringing over the calculator, and that he has no interest in living on Mars.

Altman, who has called on government to regulate A.I. in testimony before the U.S. Senate and recently coauthored a blog post calling for the creation of an organization like the International Atomic Energy Agency to police the development of advanced A.I. systems globally, said that regulators should strike a balance between America’s traditional laissez-faire approach to regulating new technologies and Europe’s more proactive stance. He said that he wants to see the open source development of A.I. thrive. “There’s this call to stop the open source movement that I think would be a real shame,” he said. But he warned that “if someone does crack the code and builds a superintelligence, however you want to define that, probably some global rules on that are appropriate.”

“We should treat this at least as seriously as we treat nuclear material, for the biggest scale systems that could give birth to superintelligence,” Altman said.

The OpenAI CEO also warned about the ease of churning out massive amounts of misinformation thanks to technology like his own company’s ChatGPT bot and DALL-E text-to-image tool. More worrisome to Altman than generative A.I. being used to scale up existing disinformation campaigns is the technology’s potential to create individually tailored and targeted disinformation. OpenAI and others developing proprietary A.I. models could build better guardrails against such activity, he noted, but he said the effort could be undermined by open source development, which allows users to modify software and remove guardrails. And while regulation “could help some,” Altman said that people will need to become much more critical consumers of information, comparing it to the period when Adobe Photoshop was first released and people were concerned about digitally edited photographs. “The same thing will happen with these new technologies,” he said. “But the sooner we can educate people about it, because the emotional resonance is going to be so much higher, I think the better.”

Altman posited a more optimistic vision of A.I. than he has sometimes suggested in the past. While some have postulated that generative A.I. systems will make global inequality worse by depressing wages for average workers or causing mass unemployment, Altman said he thought the opposite would be true. By enhancing economic growth and productivity globally, he noted, A.I. ought to lift people out of poverty and create new opportunities. “I’m excited that this technology can, like, bring the missing productivity gains of the last few decades back, and more than catch up,” he said. He noted his basic thesis: that the two “limiting reagents” of the world are the cost of intelligence and the cost of energy. If those two become dramatically less expensive, he said, it ought to help poorer people more than rich people. “This technology will lift all of the world up,” he said.

He also said he thought there were controllable versions of A.I. superintelligence, a future technology that some, including Altman in the past, have said could pose severe dangers to all of humanity. “The way I used to think about heading towards superintelligence is that we were going to build this one, extremely capable system,” he said, noting that such a system would be inherently very dangerous. “I think we now see a path where we very much build these tools that get more and more powerful, and there are billions of copies, trillions of copies being used in the world, helping individual people be way more effective, capable of doing way more; the amount of output that one person can have can dramatically increase. And where the superintelligence emerges is not just the capability of our biggest single neural network but all of the new science we are discovering, all of the new things we’re creating.”

In response to a question about what he learned from various mentors, Altman cited Elon Musk. “Certainly learning from Elon about what is just, like, possible to do and that you don’t need to accept that, like, hard R&D and hard technology is not something you ignore, that’s been super valuable,” he said.

He also fielded a question about whether he thought A.I. could help human settlement of Mars. “Look, I have no desire to go live on Mars, it sounds horrible,” he said. “But I’m happy other people do.” He said robots should be sent to Mars first to help terraform the planet to make it more hospitable for human habitation.

Outside the auditorium, the protesters kept up their chants against the OpenAI CEO. But they also paused to chat thoughtfully with curious attendees who stopped by to ask them about their protest.

“What we’re trying to do is raise awareness that A.I. does pose these threats and risks to humanity right now in terms of jobs and the economy, bias, misinformation, societal polarization, and ossification, but also slightly longer term, but not really long term, more existential threats,” said Alistair Stewart, a 27-year-old graduate student in political science and ethics at UCL who helped organize the protests.

Stewart cited a recent survey of A.I. experts that found 48% of them thought there was a 10% or greater chance of human extinction or other grave threats from advanced A.I. systems. He said that he and others protesting Altman’s appearance were calling for a pause in the development of A.I. systems more powerful than OpenAI’s GPT-4 large language model until researchers had “solved alignment”—a phrase that basically means figuring out a way to prevent a future superintelligent A.I. system from taking actions that would cause harm to human civilization.

That call for a pause echoes the one made by thousands of signatories of an open letter, including Musk and a number of well-known A.I. researchers and entrepreneurs, that was published by the Future of Life Institute in late March.

Stewart said his group wanted to raise public awareness of the threat posed by A.I. so that they could pressure politicians to take action and regulate the technology. Earlier this week, protesters from a group calling itself Pause AI also began picketing the London offices of Google DeepMind, another advanced A.I. research lab. Stewart said his group was not affiliated with Pause AI, although the two groups shared many of the same goals and objectives.
