
Once upon a time—meaning, um, as recently as earlier this year—Silicon Valley couldn’t stop talking about AGI.
OpenAI CEO Sam Altman wrote in January: “We are now confident we know how to build AGI.” That came after he told a Y Combinator podcast in late 2024 that AGI might be achieved in 2025, and tweeted in 2024 that OpenAI had “AGI achieved internally.” OpenAI was so AGI-entranced that its head of sales dubbed her team “AGI Sherpas” and its former chief scientist Ilya Sutskever led his fellow researchers in campfire chants of “Feel the AGI!”
OpenAI’s partner and major financial backer Microsoft put out a paper in 2024 claiming OpenAI’s GPT-4 AI model exhibited “sparks of AGI.” Meanwhile, Elon Musk founded xAI in March 2023 with a mission to build AGI, a development he said might occur as soon as 2025 or 2026. Demis Hassabis, the Nobel-laureate cofounder of Google DeepMind, told reporters that the world was “on the cusp” of AGI. Meta CEO Mark Zuckerberg said his company was committed to “building full general intelligence” to power the next generation of its products and services. Dario Amodei, cofounder and CEO of Anthropic, while noting he disliked the term “AGI,” said “powerful AI” could arrive by 2027 and usher in a new age of health and abundance—if it didn’t wind up killing us all. Eric Schmidt, the former Google CEO turned prominent tech investor, said in a talk in April that we would have AGI “within three to five years.”
Now the AGI fever is breaking—in what amounts to a wholesale vibe shift toward pragmatism as opposed to chasing utopian visions. For example, at a CNBC appearance this summer, Altman called AGI “not a super-useful term.” In the New York Times, Schmidt—yes, the same Schmidt who was talking up AGI in April—urged Silicon Valley to stop fixating on superhuman AI, warning that the obsession distracts from building useful technology. Both AI pioneer Andrew Ng and U.S. AI czar David Sacks called AGI “overhyped.”
AGI: Under-defined and overhyped
What happened? Well, first, a little background. Everyone agrees that AGI stands for “artificial general intelligence.” And that’s pretty much the only thing everyone agrees upon. People define the term in subtly, but importantly, different ways. Among the first to use the term was physicist Mark Avrum Gubrud who in a 1997 research article wrote that “by advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate, and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed.”
The term was later picked up and popularized by AI researcher Shane Legg—who would go on to cofound Google DeepMind with Hassabis—and fellow computer scientists Ben Goertzel and Peter Voss in the early 2000s. They defined AGI, according to Voss, as an AI system that could learn to “reliably perform any cognitive task that a competent human can.” That definition had some problems—for instance, who decides who qualifies as a competent human? And, since then, other AI researchers have developed different definitions that see AGI as AI that is as capable as any human expert at all tasks, as opposed to merely a “competent” person. OpenAI was founded in late 2015 with the explicit mission of developing AGI “for the benefit of all,” and it added its own twist to the AGI definition debate. The company’s charter says AGI is an autonomous system that can “outperform humans at most economically valuable work.”
But whatever AGI is, the important thing these days, it seems, is not to talk about it. And the reason why has to do with growing concerns that progress in AI development may not be galloping ahead as fast as industry insiders touted just a few months ago—and growing indications that all the AGI talk was stoking inflated expectations that the tech itself couldn’t live up to.
Among the biggest factors in AGI’s sudden fall from grace seems to have been the rollout of OpenAI’s GPT-5 model in early August. Just over two years after Microsoft’s claim that GPT-4 showed “sparks” of AGI, the new model landed with a thud: incremental improvements wrapped in a routing architecture, not the breakthrough many expected. Goertzel, who helped coin the term AGI, reminded the public that while GPT-5 is impressive, it remains nowhere near true AGI—lacking real understanding, continuous learning, or grounded experience.
Altman’s retreat from AGI language is especially striking given his prior position. OpenAI was built on AGI hype: AGI is in the company’s founding mission, it helped raise billions in capital, and it underpins the partnership with Microsoft. A clause in their agreement even states that if OpenAI’s nonprofit board declares it has achieved AGI, Microsoft’s access to future technology would be restricted. Microsoft—after investing more than $13 billion—is reportedly pushing to remove that clause, and has even considered walking away from the deal. Wired also reported on an internal OpenAI debate over whether publishing a paper on measuring AI progress could complicate the company’s ability to declare it had achieved AGI.
A ‘very healthy’ vibe shift
But whether observers think the vibe shift is a marketing move or a market response, many, particularly on the corporate side, say it is a good thing. Shay Boloor, chief market strategist at Futurum Equities, called the move “very healthy,” noting that markets reward execution, not vague “someday superintelligence” narratives.
Others stress that the real shift is away from a monolithic AGI fantasy, toward domain-specific “superintelligences.” Daniel Saks, CEO of agentic AI company Landbase, argued that “the hype cycle around AGI has always rested on the idea of a single, centralized AI that becomes all-knowing,” but said that is not what he sees happening. “The future lies in decentralized, domain-specific models that achieve superhuman performance in particular fields,” he told Fortune.
Christopher Symons, chief AI scientist at digital health platform Lirio, said that the term AGI was never useful: Those promoting AGI, he explained, “draw resources away from more concrete applications where AI advancements can most immediately benefit society.”
Still, the retreat from AGI rhetoric doesn’t mean the mission—or the phrase—has vanished. Anthropic and DeepMind executives continue to call themselves “AGI-pilled,” which is a bit of insider slang. Even that phrase is disputed, though; for some it refers to the belief that AGI is imminent, while others say it’s simply the belief that AI models will continue to improve. But there is no doubt that there is more hedging and downplaying than doubling down.
Some still call out urgent risks
And for some, that hedging is exactly what makes the risks more urgent. Former OpenAI researcher Steven Adler told Fortune: “We shouldn’t lose sight that some AI companies are explicitly aiming to build systems smarter than any human. AI isn’t there yet, but whatever you call this, it’s dangerous and demands real seriousness.”
Others accuse AI leaders of changing their tune on AGI to muddy the waters in a bid to avoid regulation. Max Tegmark, president of the Future of Life Institute, says Altman calling AGI “not a useful term” isn’t scientific humility, but a way for the company to steer clear of regulation while continuing to build toward more and more powerful models.
“It’s smarter for them to just talk about AGI in private with their investors,” he told Fortune, adding that “it’s like a cocaine salesman saying that it’s unclear whether cocaine is really a drug,” because it’s just so complex and difficult to decipher.
Call it AGI or call it something else—the hype may fade and the vibe may shift, but with so much on the line, from money and jobs to security and safety, the real questions about where this race leads are only just beginning.