
OpenAI founder: "I haven't had a good night's sleep since ChatGPT launched"

Eva Roytburg
2025-09-15

The OpenAI CEO described the weight of overseeing a technology that hundreds of millions of people now use daily.


In his interview with Tucker Carlson, Sam Altman at times came across as a Frankenstein-esque figure, haunted by the sheer scale of his own creation. Image credit: Win McNamee—Getty Images


Tucker Carlson wanted to see the “angst-filled” Sam Altman: He wanted to hear him admit he was tormented by the power he holds. After about half an hour of couching his fears in technical language and cautious caveats, the OpenAI CEO finally did. “I haven’t had a good night’s sleep since ChatGPT launched,” Altman told Carlson. He laughed wryly.

In his wide-ranging interview with Tucker Carlson, the OpenAI CEO described the weight of overseeing a technology that hundreds of millions of people now use daily. It’s less about the Terminator-esque scenarios or rogue robots. Rather, for Altman, it’s the ordinary, almost invisible tweaks and tradeoffs his team makes every day. It’s when the model refuses a question, how it frames an answer, when it decides to push back, and when it lets something pass.

Those small design choices, Altman explained, are replicated billions of times across the globe, shaping how people think and act in ways he can’t fully track.

“What I lose sleep over is that very small decisions we make about how a model may behave slightly differently are probably touching hundreds of millions of people,” he said. “That impact is so big.”

One example that weighs heavily: suicide. Altman noted roughly 15,000 people take their lives each week worldwide, and if 10% of them are ChatGPT users, roughly 1,500 people with suicidal thoughts may have spoken to the system—and then killed themselves anyway. (World Health Organization data confirms about 720,000 people per year worldwide take their own lives.)
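For scale, a quick back-of-envelope check shows the figures hang together; note that the 10% ChatGPT-user share is Altman's hypothetical, not a measured statistic:

```python
# Back-of-envelope check of the figures Altman cites above.
who_annual = 720_000               # WHO estimate of suicides per year, worldwide
per_week = who_annual / 52         # ~13,846, i.e. "roughly 15,000" per week

chatgpt_share = 0.10               # Altman's hypothetical share of ChatGPT users
affected = 15_000 * chatgpt_share  # ~1,500 people per week, as he states

print(f"weekly suicides (from WHO annual figure): ~{per_week:,.0f}")
print(f"hypothetical ChatGPT users among them: ~{affected:,.0f}")
```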

“We probably didn’t save their lives,” he admitted. “Maybe we could have said something better. Maybe we could have been more proactive.”

OpenAI was recently sued by parents who claim ChatGPT encouraged their 16-year-old son, Adam Raine, to kill himself. Altman told Carlson that case was a "tragedy," and said the platform is now exploring options under which, if a minor talks seriously to ChatGPT about suicide and the system cannot get in touch with their parents, it would call the authorities.

Altman added it wasn’t a “final position” of OpenAI’s, and that it would come into tension with user privacy.

In countries where assisted suicide is legal, such as Canada or Germany, Altman said he could imagine ChatGPT telling terminally ill, suffering adults that suicide was "in their option space." But ChatGPT shouldn't be for or against anything at all, he added.

That tradeoff between freedom and safety runs through all of Altman’s thinking. Broadly, he said, adult users should be treated “like adults,” with wide latitude to explore ideas. But there are red lines.

“It’s not in society’s interest for ChatGPT to help people build bioweapons,” he said flatly. For him, the hardest questions are the ones in the gray areas, when curiosity blurs into risk.

Carlson pressed him on what moral framework governs those decisions. Altman said the base model reflects “the collective of humanity, good and bad.”

OpenAI then layers on a behavioral code—what he called the “model spec”—informed by philosophers and ethicists, but ultimately decided by him and the board.

“The person you should hold accountable is me,” Altman said. He stressed his aim isn’t to impose his own beliefs but to reflect a “weighted average of humanity’s moral view.”

That, he conceded, is an impossible balance to get perfectly right.

The interview also touched on questions of power. Altman said he once worried AI would concentrate influence in the hands of a few corporations, but now believes widespread adoption has “up-leveled” billions of people, making them more productive and creative. Still, he acknowledged the trajectory could shift, and that vigilance is necessary.

Yet for all the focus now on the technology’s effects on jobs or geopolitics, what unsettles Altman most are the subtle, almost imperceptible cultural shifts that spread when millions of people interact with the same system every day. He pointed to something as trivial as ChatGPT’s cadence or overuse of em dashes, which has already seeped into human writing styles. If such quirks can ripple through society, what else might follow?

Altman, gray-haired and often looking down, came across as a Frankenstein-esque character, haunted by the scale of what he has unleashed.

“I have to hold these two simultaneous ideas in my head,” Altman said. “One is, all of this stuff is happening because a big computer, very quickly, is multiplying large numbers in these big, huge matrices together, and those are correlated with words that are being put out one after the other.

“On the other hand, the subjective experience of using that feels like it’s beyond just a really fancy calculator, and it is surprising to me in ways that are beyond what that mathematical reality would seem.”
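Altman's "fancy calculator" description can be made concrete with a toy sketch. The following is illustrative only, with arbitrary sizes and values; a real model stacks many such multiplications across billions of parameters:

```python
import numpy as np

# Toy version of the "big matrices" view Altman describes: a hidden
# activation vector is multiplied against a weight matrix to score every
# word in a vocabulary, and the scores become the next-word distribution.
rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "mat"]

hidden = rng.standard_normal(8)              # internal activation
proj = rng.standard_normal((8, len(vocab)))  # projection to the vocabulary

logits = hidden @ proj                            # one large matrix multiply
probs = np.exp(logits) / np.exp(logits).sum()     # softmax -> word probabilities
print(vocab[int(np.argmax(probs))])               # the word that gets "put out"
```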

OpenAI didn’t immediately respond to Fortune’s request for comment.
