
The AI backlash is vindicating one researcher's years of warnings

Nick Lichtenberg
2025-08-29

GPT-5's disappointing debut was a pivotal moment, but not the only warning sign.

Gary Marcus. Image source: Ramsey Cardy/Web Summit via Sportsfile via Getty Images


Translator: 刘进龙 (Liu Jinlong)

Proofreader: 汪皓 (Wang Hao)


First it was the release of GPT-5 that OpenAI “totally screwed up,” according to Sam Altman. Then Altman followed that up by saying the B-word at a dinner with reporters. “When bubbles happen, smart people get overexcited about a kernel of truth,” The Verge reported on comments by the OpenAI CEO. Then it was the sweeping MIT survey that put a number on what so many people seem to be feeling: a whopping 95% of generative AI pilots at companies are failing.

A tech sell-off ensued, as rattled investors sent the value of the S&P 500 down by $1 trillion. Given the increasing dominance of that index by tech stocks that have largely transformed into AI stocks, it was a sign of nerves that the AI boom was turning into dotcom bubble 2.0. To be sure, fears about the AI trade aren’t the only factor moving markets, as evidenced by the S&P 500 snapping a five-day losing streak on Friday after Jerome Powell’s quasi-dovish comments at Jackson Hole, Wyoming, as even the hint of openness from the Fed chair toward a September rate cut set markets on a tear.

Gary Marcus has been warning of the limits of large language models (LLMs) since 2019, and of a potential bubble and problematic economics since 2023. His words carry particular weight because of his history in the field. The cognitive scientist turned longtime AI researcher has been active in the machine learning space since 2015, when he founded Geometric Intelligence. That company was acquired by Uber in 2016, and Marcus left shortly afterward, working at other AI startups while vocally criticizing what he sees as dead ends in the AI space.

Still, Marcus doesn’t see himself as a “Cassandra,” and he’s not trying to be, he told Fortune in an interview. Cassandra, a figure from Greek tragedy, was a character who uttered accurate prophecies but wasn’t believed until it was too late. “I see myself as a realist and as someone who foresaw the problems and was correct about them.”

Marcus attributes the wobble in markets to GPT-5 above all. It’s not a failure, he said, but it’s “underwhelming,” a “disappointment,” and that’s “really woken a lot of people up. You know, GPT-5 was sold, basically, as AGI, and it just isn’t,” he added, referencing artificial general intelligence, a hypothetical AI with human-like reasoning abilities. “It’s not a terrible model, it’s not like it’s bad,” he said, but “it’s not the quantum leap that a lot of people were led to expect.”

Marcus said this shouldn't be news to anyone paying attention, as he argued in 2022 that "deep learning is hitting a wall." To be sure, Marcus has been wondering openly on his Substack about when the generative AI bubble will deflate. He told Fortune that "crowd psychology" is definitely at work, and that he thinks every day about the John Maynard Keynes quote, "The market can stay irrational longer than you can stay solvent," and about Looney Tunes' Wile E. Coyote chasing the Road Runner off the edge of a cliff and hanging in midair before falling to Earth.

“That’s what I feel like,” Marcus says. “We are off the cliff. This does not make sense. And we get some signs from the last few days that people are finally noticing.”

Building warning signs

The bubble talk began heating up in July, when Apollo Global Management’s chief economist, Torsten Slok, widely read and influential on Wall Street, issued a striking calculation while falling short of declaring a bubble. “The difference between the IT bubble in the 1990s and the AI bubble today is that the top 10 companies in the S&P 500 today are more overvalued than they were in the 1990s,” he wrote, warning that the forward P/E ratios and staggering market capitalizations of companies such as Nvidia, Microsoft, Apple, and Meta had “become detached from their earnings.”

In the weeks since, the disappointment of GPT-5 was an important development, but not the only one. Another warning sign is the massive amount of spending on data centers to support all the theoretical future demand for AI use. Slok has tackled this subject as well, finding that data center investments' contribution to GDP growth has been the same as consumer spending over the first half of 2025, which is notable since consumer spending makes up 70% of GDP. (The Wall Street Journal's Christopher Mims had offered the calculation weeks earlier.) Finally, on August 19, former Google CEO Eric Schmidt co-authored a widely discussed New York Times op-ed arguing that "it is uncertain how soon artificial general intelligence can be achieved."

This is a significant about-face, according to political scientist Henry Farrell, who argued in the Financial Times in January that Schmidt was a key voice shaping the "New Washington Consensus," predicated in part on AGI being "right around the corner." On his Substack, Farrell said Schmidt's op-ed shows that his prior set of assumptions is "visibly crumbling away," while caveating that he had been relying on informal conversations with people he knew at the intersection of D.C. foreign policy and tech policy. Farrell's title for that post: "The twilight of tech unilateralism." He concluded: "If the AGI bet is a bad one, then much of the rationale for this consensus falls apart. And that is the conclusion that Eric Schmidt seems to be coming to."

Finally, the vibe is shifting in the summer of 2025 into a mounting AI backlash. Darrell West warned in Brookings in May that the tide of both public and scientific opinion would soon turn against AI’s masters of the universe. Soon after, Fast Company predicted the summer would be full of “AI slop.” By early August, Axios had identified the slang “clunker” being applied widely to AI mishaps, particularly in customer service gone awry.

History says: short-term pain, long-term gain

John Thornhill of the Financial Times offered some perspective on the bubble question, advising readers to brace themselves for a crash, but to prepare for a future "golden age" of AI nonetheless. He highlights the data center buildout: a staggering $750 billion investment from Big Tech over 2024 and 2025, part of a global rollout projected to hit $3 trillion by 2029. Thornhill turns to financial historians for comfort and perspective: over and over, history shows that this type of frenzied investment typically triggers bubbles, dramatic crashes, and creative destruction, but that durable value is eventually realized.

He notes that Carlota Perez documented this pattern in Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages. She identified AI as the fifth technological revolution to follow the pattern begun in the late 18th century, as a result of which the modern economy now has railroad infrastructure and personal computers, among other things. Each one had a bubble and a crash at some point. Thornhill didn’t cite him in this particular column, but Edward Chancellor documented similar patterns in his classic Devil Take The Hindmost, a book notable not just for its discussions of bubbles but for predicting the dotcom bubble before it happened.

Owen Lamont of Acadian Asset Management cited Chancellor in November 2024, when he argued that a key bubble moment had been passed: an unusually large number of market participants saying that prices are too high, but insisting that they’re likely to rise further.

Wall Street is cautious, but not calling a bubble

Wall Street banks are largely not calling it a bubble. Morgan Stanley released a note recently seeing huge efficiencies ahead for companies as a result of AI: $920 billion per year for the S&P 500. UBS, for its part, concurred with the caution flagged in the news-making MIT research. It warned investors to expect a period of "capex indigestion" accompanying the data center buildout, but it also maintained that AI adoption is expanding far beyond expectations, citing growing monetization from OpenAI's ChatGPT, Alphabet's Gemini, and AI-powered CRM systems.

Bank of America Research wrote a note in early August, before the launch of GPT-5, seeing AI as part of a worker productivity “sea change” that will drive an ongoing “innovation premium” for S&P 500 firms. Head of U.S. Equity Strategy Savita Subramanian essentially argued that the inflation wave of the 2020s taught companies to do more with less, to turn people into processes, and that AI will turbo-charge this. “I don’t think it’s necessarily a bubble in the S&P 500,” she told Fortune in an interview, before adding, “I think there are other areas where it’s becoming a little bit bubble-like.”

Subramanian mentioned smaller companies and potentially private lending as areas "that potentially have re-rated too aggressively." She's also concerned about the risk of companies diving into data centers to too great an extent, noting that this represents a shift back toward an asset-heavier approach, instead of the asset-light approach that increasingly distinguishes top performance in the U.S. economy.

“I mean, this is new,” she said. “Tech used to be very asset-light and just spent money on R&D and innovation, and now they’re spending money to build out these data centers,” adding that she sees it as potentially marking the end of their asset-light, high-margin existence and basically transforming them into “very asset-intensive and more manufacturing-like than they used to be.” From her perspective, that warrants a lower multiple in the stock market. When asked if that is tantamount to a bubble, if not a correction, she said “it’s starting to happen in places,” and she agrees with the comparison to the railroad boom.

The math and the ghost in the machine

Gary Marcus also cited the fundamentals of math as a reason that he’s concerned, with nearly 500 AI unicorns being valued at $2.7 trillion. “That just doesn’t make sense relative to how much revenue is coming [in],” he said. Marcus cited OpenAI reporting $1 billion in revenue in July, but still not being profitable. Speculating, he extrapolated that to OpenAI having roughly half the AI market, and offered a rough calculation that it means about $25 billion a year of revenue for the sector, “which is not nothing, but it costs a lot of money to do this, and there’s trillions of dollars [invested].”
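Marcus's back-of-envelope arithmetic can be made explicit. The sketch below simply restates the quoted figures (his own self-described speculation, not audited financials): annualizing OpenAI's reported $1 billion of July revenue and doubling it under his guess of a roughly 50% market share lands near his ~$25 billion sector estimate.

```python
# Rough reconstruction of Marcus's sector-revenue estimate.
# All inputs are the article's quoted figures, not audited data.

openai_july_revenue = 1e9                     # ~$1B reported for July
openai_annualized = openai_july_revenue * 12  # ~$12B/year run rate

openai_market_share = 0.5                     # Marcus's rough guess
sector_annual_revenue = openai_annualized / openai_market_share  # ~$24B/year

unicorn_valuations = 2.7e12                   # ~500 AI unicorns combined

# Dollars of valuation per dollar of estimated annual sector revenue
multiple = unicorn_valuations / sector_annual_revenue

print(f"Estimated sector revenue: ${sector_annual_revenue / 1e9:.0f}B per year")
print(f"Valuation-to-revenue multiple: {multiple:.0f}x")
```

At roughly $24 billion a year set against $2.7 trillion of combined valuations, the implied multiple exceeds 100x revenue, which is the gap Marcus is pointing at.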

So if Marcus is correct, why haven’t people been listening to him for years? He said he’s been warning people about this for years, too, calling it the “gullibility gap” in his 2019 book Rebooting AI and arguing in The New Yorker in 2012 that deep learning was a ladder that wouldn’t reach the moon. For the first 25 years of his career, Marcus trained and practiced as a cognitive scientist, and learned about the “anthropomorphization people do. … [they] look at these machines and make the mistake of attributing to them an intelligence that is not really there, a humanness that is not really there, and they wind up using them as a companion, and they wind up thinking that they’re closer to solving these problems than they actually are.” He said he thinks the bubble inflating to its current extent is in large part because of the human impulse to project ourselves onto things, something a cognitive scientist is trained not to do.

These machines might seem like they’re human, but “they don’t actually work like you,” Marcus said, adding, “this entire market has been based on people not understanding that, imagining that scaling was going to solve all of this, because they don’t really understand the problem. I mean, it’s almost tragic.”

Subramanian, for her part, said she thinks “people love this AI technology because it feels like sorcery. It feels a little magical and mystical … the truth is it hasn’t really changed the world that much yet, but I don’t think it’s something to be dismissed.” She’s also become really taken with it herself. “I’m already using ChatGPT more than my kids are. I mean, it’s kind of interesting to see this. I use ChatGPT for everything now.”
