
Turmoil keeps engulfing OpenAI. Will it affect the company's IPO prospects?

Jeremy Kahn
2026-04-26

Meanwhile, Anthropic is grappling with the cybersecurity risks created by the accelerating progress of its own models.


OpenAI CEO Sam Altman. Image credit: Anna Moneymaker—Getty Images



OpenAI dominated the news over the past few days. In fact, so much has happened related to the company that it’s hard to know where to start. It’s also hard to discern which OpenAI development will prove, with the benefit of hindsight, to be the most significant. I’ll cover the OpenAI news in a sec.

But first, I want to highlight three pieces of news from Anthropic because I think, in the long-run, they might matter more than any of the OpenAI stuff.

Anthropic unveiled today what it is calling Project Glasswing, a coalition of major technology companies and cybersecurity players dedicated to trying to secure the world’s most critical software before AI-enabled hackers wreak absolute havoc around the globe. The coalition partners have been given access to a special cybersecurity-focused preview version of Anthropic’s yet-to-be-released Mythos model, in the hope that Mythos can discover zero-day attacks and other vulnerabilities, and that those holes can be patched, before a production version of Mythos, and similar AI models with superpowerful cyber capabilities from OpenAI and Google, debut. My colleague Beatrice Nolan, who broke the news of Mythos’ existence a few weeks ago, has the news on Project Glasswing here.

Project Glasswing is further evidence of the growing concern among AI labs, cybersecurity companies, and government officials that we are entering an era of unprecedented and potentially catastrophic cybersecurity threats, owing to the increased coding capabilities of recent AI models. The New York Times has more on that evolving risk in this story here.

Anthropic also announced that it would no longer allow people to use their monthly Claude subscriptions to power third-party agentic harnesses, such as the virally popular OpenClaw and its progeny. Now, in order to use Claude to power these tools, people will need to subscribe to Anthropic’s API and pay per-token usage fees, as opposed to drawing on all-you-can-consume monthly subscriptions. Anthropic has shown in recent weeks that it does not have the computing capacity to handle the skyrocketing adoption rates it has experienced, especially with agentic tools like OpenClaw. (Anthropic also imposed strict usage caps during peak hours that have annoyed many users.) In part to address this compute crunch, Anthropic announced an expanded partnership with Google and Broadcom to access data centers running Google’s TPU chips that are due to come online by 2027. (More on that below.) But in the meantime, Anthropic’s decision may have a big impact on how AI agents get used, perhaps slowing adoption, or perhaps driving many more people to start using open-source models as the brains behind these agents.
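
To make the billing change concrete, here is a minimal back-of-envelope sketch of what moving an agent workload from a flat subscription to metered API pricing can look like. The prices and token volumes below are hypothetical placeholders, not Anthropic’s actual rates:

```python
# Hypothetical comparison of flat-subscription vs. per-token API billing
# for an agentic workload. All figures are illustrative assumptions.

MONTHLY_SUBSCRIPTION_USD = 100.00   # assumed flat plan price
PRICE_PER_MTOK_IN_USD = 3.00        # assumed $ per million input tokens
PRICE_PER_MTOK_OUT_USD = 15.00      # assumed $ per million output tokens

def api_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Metered cost: every token is billed at the per-million-token rate."""
    return (input_tokens / 1e6) * PRICE_PER_MTOK_IN_USD + \
           (output_tokens / 1e6) * PRICE_PER_MTOK_OUT_USD

# Agent harnesses are chatty: long contexts in, long tool outputs back.
monthly_in, monthly_out = 500_000_000, 50_000_000   # assumed monthly volume
metered = api_cost_usd(monthly_in, monthly_out)

print(f"flat subscription: ${MONTHLY_SUBSCRIPTION_USD:,.2f}/month")
print(f"metered API:       ${metered:,.2f}/month")  # $2,250.00 at these rates
```

At volumes like these, a heavy agent user goes from a fixed bill to thousands of dollars a month, which is why the change could slow adoption or push users toward open-weight models.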

Anthropic also said it has achieved an annual revenue “run rate” of $30 billion, a figure that implies a 58% revenue surge in March alone. The number is also higher than the $25 billion annual revenue run rate OpenAI reported in February. (Anthropic and OpenAI don’t use the same method to calculate their run rates, though, so it is a bit of an apples-to-oranges comparison.) But it clearly shows that Anthropic is on a tear, and that matters, especially in light of the other news coming out of OpenAI.
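
For readers who want to see where a claim like “a $30 billion run rate implies a 58% surge in March” comes from, here is the standard arithmetic, assuming the run rate is simply the latest month’s revenue annualized (neither company’s exact methodology is public, so treat this as a sketch):

```python
# Back-of-envelope: annualized run rate = latest monthly revenue * 12.
# The 58% figure is the month-over-month growth this arithmetic implies.

march_run_rate = 30e9                  # $30B annualized, per Anthropic
march_revenue = march_run_rate / 12    # => $2.5B booked in March

growth = 0.58                          # reported monthly surge
february_revenue = march_revenue / (1 + growth)   # ~= $1.58B
february_run_rate = february_revenue * 12         # ~= $19B annualized

print(f"March revenue:        ${march_revenue/1e9:.2f}B")
print(f"Implied Feb revenue:  ${february_revenue/1e9:.2f}B")
print(f"Implied Feb run rate: ${february_run_rate/1e9:.1f}B")
```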

OK, so without further ado, the OpenAI stuff:

OpenAI likes ‘constructive’ media coverage, so it’s buying some

The OpenAI development that probably matters least, but which nonetheless had everyone in the media talking, is OpenAI’s decision to buy the year-old vodcaster TBPN (Technology Business Programming Network) for an amount that sources told the Financial Times was in “the low hundreds of millions.” OpenAI, in announcing the deal, said that it’s “become clear the standard communications playbook just doesn’t apply to us,” and that the company needed “to help create a space for a real, constructive conversation about the changes AI creates—with builders and people using the technology at the center.”

The word “constructive” here is doing a lot of work. While OpenAI insisted that TBPN would retain its editorial independence, many are skeptical, noting, among other things, that the video broadcast operation will report to Chris Lehane, the bare-knuckled political operator who serves as OpenAI’s policy communications chief. This seems like just the latest, and perhaps most extreme, case of a tech company trying to control the narrative by “going direct”: using social media and in-house produced content to reach audiences and bypass traditional journalistic outlets, which are often more critical and tend to ask the kinds of questions that executives don’t want to answer.

Altman’s honesty questioned

If it weren’t already clear why OpenAI wants to own the messenger and dislikes traditional journalism, then the New Yorker underscored the rationale by publishing a lengthy profile of OpenAI CEO Sam Altman that was the result of a year-and-a-half of investigative reporting by Ronan Farrow and Andrew Marantz. The piece was headlined “Sam Altman may control our future—can he be trusted?” Reading the piece, it is hard to come away with an answer other than: no.

While there are a few new tidbits in the story (the reporters, for instance, obtained hundreds of pages of notes that Dario Amodei, now Anthropic’s CEO, made on his interactions with Altman while Amodei was a top OpenAI researcher), many of the facts it contains have already been reported elsewhere. Nonetheless, there is impact in seeing them all assembled in one place. The overriding impression of Altman that emerges from Farrow and Marantz’s story is of a borderline sociopath: an executive with no compunction about lying to get ahead. The piece raises questions about how sincere Altman is in his commitment to anything other than his own pursuit of power. In particular, it asks whether Altman actually cares about AI safety, or whether his rhetoric on the subject is simply a convenient pose, used first to win early funding for OpenAI from Elon Musk, and later to recruit and retain talented AI researchers and keep regulators at bay.

Certainly potential IPO investors don’t generally love companies run by pathological liars. They also don’t like companies where the top executive ranks are constantly being reshuffled. But OpenAI last week announced another executive shakeup. It said Fidji Simo, who has the title “CEO of AGI Deployment” and is in charge of all the company’s commercial products and operations, will be taking several weeks of medical leave to deal with a chronic health condition. In her absence, Greg Brockman, who had been largely focused on the company’s AI infrastructure build out, is going to be put in charge of product.

But then OpenAI also announced a more permanent management shuffle. The company said that Brad Lightcap, its long-serving chief operating officer, is moving to a new role coordinating “special projects,” including a joint venture with private equity firms that will look to use AI to push efficiencies into older, non-tech companies. Denise Dresser, the former Slack CEO recently hired by OpenAI to serve as chief revenue officer, is taking on most of Lightcap’s previous duties, with oversight of the other business and operations units being split between Jason Kwon, OpenAI’s chief strategy officer, and CFO Sarah Friar.

Reported divisions over spending and IPO plans

Meanwhile, a story surfaced suggesting that Friar may not be secure in her role either. The Information reported that Friar has privately disagreed with Altman’s timeline for an IPO and voiced concerns about the company’s $600 billion in spending commitments over the next five years. Citing a person who had spoken to Friar about her views, the publication said Friar is unsure whether that huge amount of spending is necessary, or whether OpenAI will be able to grow revenue fast enough to support it.
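
A rough sanity check shows why a CFO might worry. The sketch below takes only the two figures in the story ($600 billion over five years, and OpenAI’s $25 billion February run rate mentioned earlier) and asks what revenue growth would be needed just to match the average annual outlay; everything beyond those two inputs is an illustrative assumption:

```python
# Back-of-envelope: what growth rate turns a $25B/yr business into one
# that covers $120B/yr of average spending within five years?

commitments = 600e9          # $600B over five years, per The Information
years = 5
avg_annual_spend = commitments / years   # $120B per year on average

current_revenue = 25e9       # OpenAI's February run rate

# Solve current_revenue * (1 + g)**years >= avg_annual_spend for g.
required_cagr = (avg_annual_spend / current_revenue) ** (1 / years) - 1
print(f"average annual spend: ${avg_annual_spend/1e9:.0f}B")
print(f"required revenue CAGR to match it by year 5: {required_cagr:.0%}")
# ~= 37% per year, compounded, and that only covers the average year's bill.
```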

The publication said that Friar had voiced these concerns prior to OpenAI’s $122 billion fundraise—which was announced last week and valued OpenAI at $852 billion post-money. It said it was unable to determine whether her position had changed in light of that new money. But it cited another unnamed source as saying Friar had been left out of a meeting with an OpenAI investor in which major AI infrastructure spending plans were discussed. OpenAI gave the publication a statement saying Friar and Altman “are fully aligned that durable access to compute is at the core of OpenAI’s strategy and a key differentiator as we scale.”

Looking at all the developments together, one could be forgiven for wondering if the wheels are in danger of coming off the world’s best-known AI company. At the very least, there are serious questions looming over OpenAI’s ability to go for an IPO this year. And, in the absence of an IPO, it’s unclear how much longer the company can continue to tap the private market. If OpenAI implodes, or even if it merely has a down round, that could threaten the entire AI ecosystem. Of course, other key players in that ecosystem, such as Nvidia, know this too. That’s why they are likely to continue trying to prop OpenAI up.

In the midst of all of this, OpenAI published a white paper calling for a sweeping new industrial strategy for the U.S. in the age of artificial superintelligence, which it says is now looming into view. (You can read more on that from my colleague Sharon Goldman here.) Many perceived the document as, at least in part, an attempt by OpenAI to get ahead of a looming anti-AI industry backlash that is mounting across the country and is gaining bipartisan support. We’ll cover that in the news section below.

AI IN THE NEWS

Anthropic expands partnership with Google, Broadcom for data center capacity. The AI company will gain access to about 3.5 gigawatts of computing capacity starting in 2027 as part of the deal, which is contingent on Anthropic meeting certain commercial milestones. The partnership will also see Broadcom supplying custom AI chips, known as TPUs, and infrastructure to Google through 2031. Read more from the Wall Street Journal here.

Google adds mental health safeguards to Gemini. The company has put in place systems to screen users’ interactions with Gemini for signs of mental health crises, which will result in the chatbot referring the users to crisis hotlines. The company said it would donate $30 million to support these crisis intervention services globally. The company has also added additional safeguards designed to discourage self-harm and said that it was training Gemini to avoid reinforcing users’ false beliefs. You can read more from Bloomberg here.

Google releases Gemma 4 open weight model. Google has released the latest generation of its open weight Gemma AI models, Gemma 4. The models were released under an Apache 2.0 license, aiming to attract enterprise users by giving them greater flexibility over how they can use the models and more control over data, according to a story in tech publication The Register. Developed by Google DeepMind, the four new versions of the Gemma 4 models emphasize coding, agentic AI, and improved reasoning, while supporting multimodal inputs and running across devices from smartphones to data centers. The launch comes as competition intensifies from Chinese open-weight models and reflects Google’s push to offer a credible, enterprise-friendly alternative to systems from OpenAI and Anthropic.

Microsoft launches ‘mid-class’ AI models amid AI chief’s complaints about lack of compute. Microsoft launched a trio of new midsized AI models that it claimed were state-of-the-art at speech transcription, voice generation, and image generation. But AI chief Mustafa Suleyman told the Financial Times the company still lacks the computing power to build frontier-scale systems. Microsoft is focusing on “mid-class” models for now, balancing cost and performance, while investing heavily in infrastructure and talent to catch up with leaders like Google and Anthropic, Suleyman told the newspaper.

Meta plans to open source its next AI model. Reportedly there had been debate within Meta about whether to release its next generation AI models—the first developed under its new Superintelligence Labs headed by former Scale AI CEO Alexandr Wang—as open weight models, which is what Meta has done with its past AI models, or make them available only through a paid API or subscription. Now Axios reports, citing unnamed sources, that this debate has been resolved in favor of open weight. There’s high pressure on this next model release since it is the first new model since Meta spent billions of dollars hiring Wang and new AI talent to work under him and since the company’s last AI model, Llama 4, was widely viewed as a dud that badly lagged competing models from the likes of OpenAI, Anthropic, and Google DeepMind.

EYE ON AI RESEARCH

AI has emotions? Sort of, new research from Anthropic suggests. The AI lab says that it has discovered that the artificial neural networks that power its Claude AI models contain internal representations of “emotion concepts” (such as happiness or fear) that functionally influence how the model behaves. These are not real feelings, Anthropic’s researchers emphasized, but patterns in the model’s neural activations that guide its responses, shaping decisions, preferences, and outputs in ways loosely analogous to human emotions. For example, when the model is choosing between tasks, it tends to prefer options associated with “positive” emotional representations, showing these patterns play a causal role in behavior. The findings suggest that understanding and potentially steering these internal emotion-like states could be important for improving how AI models perform. The research also has safety implications, since the model’s internal emotional representations may determine the extent to which it follows users’ intentions.
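
The technique described above is interpretability work on a model’s internal activations. As a loose illustration of the general approach, not Anthropic’s actual code or model, here is a toy sketch of finding an “emotion concept” direction in activation space with a simple linear probe and nudging activations along it; all names and numbers are invented stand-ins:

```python
import numpy as np

# Toy sketch of concept-direction analysis, loosely in the spirit of the
# research described above. The "activations" are random stand-ins for a
# real model's hidden states; nothing here is Anthropic's method.

rng = np.random.default_rng(0)
dim = 64

# Pretend hidden states for prompts labeled "happy" vs. "fearful".
happy = rng.normal(0.5, 1.0, size=(200, dim))
fear = rng.normal(-0.5, 1.0, size=(200, dim))

# A "concept direction" via difference of class means (a simple linear probe).
direction = happy.mean(axis=0) - fear.mean(axis=0)
direction /= np.linalg.norm(direction)

def emotion_score(activation: np.ndarray) -> float:
    """Project an activation onto the concept direction."""
    return float(activation @ direction)

def steer(activation: np.ndarray, strength: float) -> np.ndarray:
    """Nudge an activation along the concept direction (activation steering)."""
    return activation + strength * direction

h = rng.normal(size=dim)
print("before:", round(emotion_score(h), 3))
print("after: ", round(emotion_score(steer(h, 2.0)), 3))  # score rises by 2.0
```

The safety point in the write-up follows from the same picture: if directions like these causally shape outputs, steering (or failing to notice) them could change how faithfully a model follows user intent.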

BRAIN FOOD

Google hails success of its AlphaEvolve system in a real-world enterprise use case. Last year, Google DeepMind debuted AlphaEvolve, an agentic coding assistant that employs several of Google’s different Gemini models to first program an algorithm for a task and then iteratively optimize it through a series of small, controlled experiments. At the time, Google had used the system to solve math problems and to optimize how it used its own computing resources. Now the company has announced the results of a real-world external use case.

France-based global logistics firm FM Logistic used AlphaEvolve to optimize how workers moved about one of its massive warehouses to pick and pack items. Rather than relying on fixed rules, the system iteratively rewrote and tested new routing algorithms against real operational data, trying to minimize overall travel distance while respecting constraints like forklift capacity and order priorities.
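
The loop the preceding paragraph describes (propose a routing algorithm, score it against real data, keep what improves, repeat) is a generic evolutionary search. Here is a heavily simplified sketch of that pattern, with a random route mutation standing in for the Gemini model that, in AlphaEvolve, rewrites the routing algorithm itself; everything here is illustrative, not AlphaEvolve:

```python
import random

# Toy propose-test-keep loop over pick-path orderings in a warehouse,
# standing in for AlphaEvolve's evolutionary cycle. In the real system an
# LLM mutates the *algorithm*; here a random swap mutates a single route.

random.seed(42)
ITEMS = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(30)]

def travel_distance(order: list[int]) -> float:
    """Total walking distance for visiting items in the given order."""
    return sum(((ITEMS[a][0] - ITEMS[b][0]) ** 2 +
                (ITEMS[a][1] - ITEMS[b][1]) ** 2) ** 0.5
               for a, b in zip(order, order[1:]))

def mutate(order: list[int]) -> list[int]:
    """Propose a candidate variant: swap two stops."""
    child = order[:]
    i, j = random.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

best = list(range(len(ITEMS)))
for _ in range(5000):                        # small controlled "experiments"
    candidate = mutate(best)
    if travel_distance(candidate) < travel_distance(best):
        best = candidate                     # keep only improvements

print(f"optimized route length: {travel_distance(best):.1f}")
```

A real deployment would also encode constraints such as forklift capacity and order priorities in the scoring function, which is exactly the part the article says AlphaEvolve handled against FM Logistic’s operational data.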

The resulting algorithm introduced several key innovations, including starting routes from dense clusters of items and flexibly abandoning inefficient routes to improve overall system performance. Overall, the changes delivered a 10.4% boost in routing efficiency and cut more than 15,000 kilometers of annual travel, enabling faster fulfillment and greater capacity without additional staff or equipment, Google wrote. This is an example of why AI coding agents are so potentially powerful, even in areas outside of software development.
