
OpenAI says the world needs to rethink everything from the tax system to the length of the workday in order to prepare for the wrenching changes of superintelligence technology—the point at which AI systems are capable of outperforming the smartest humans.
On Monday, in a 13-page paper titled “Industrial Policy for the Intelligence Age,” OpenAI said it wanted to “kick-start” the conversation with a “slate of people-first policy ideas.” How much faith to put in OpenAI’s words and motives, however, seems to be a key question for many of the people reading the paper. It was released on the same day that The New Yorker published the results of a lengthy one-and-a-half-year investigation into OpenAI that raised questions about CEO Sam Altman’s trustworthiness on various issues, including AI safety.
Written by the OpenAI global affairs team, the paper outlines many of the expected economic impacts of superintelligence and floats various approaches for addressing them. “We offer them not as a comprehensive or final set of recommendations, but as a starting point for discussion that we invite others to build on, refine, challenge, or choose among through the democratic process,” said the introductory blog post.
The self-described “slate of ideas” in the document—spanning everything from public wealth funds to shorter workweeks—may not do much to reassure a public increasingly nervous about and disenchanted with the pace and consequences of AI-driven change. And OpenAI, of course, is one of the least neutral parties in this ongoing discussion, which is the core tension of the document, said Lucia Velasco, a senior economist and AI policy leader at D.C.-based Inter-American Development Bank and former head of AI policy at the United Nations Office for Digital and Emerging Technologies.
“OpenAI is the most interested party in how this conversation turns out, and the proposals it advances shape an environment in which OpenAI operates with significant freedom under constraints it has largely helped define,” she said, adding that this wasn’t a reason to dismiss the document, but “it is a reason to ensure that the conversation it is trying to start does not end with the same company that started it.”
Still, she emphasized that OpenAI is correct in saying that governments are behind in advancing policy solutions. “Most are still treating AI as a technology problem when it’s actually a structural economic shift that needs proper industrial policy,” she said. “That’s a useful contribution, and the document deserves to be taken seriously as an agenda-setting exercise, even if it is only a starting point.”
Soribel Feliz, an independent AI policy advisor who previously served as a senior AI and tech policy advisor for the U.S. Senate, agreed that OpenAI deserves credit for “putting this on paper.” The acknowledgment that both U.S. institutions and safety nets are falling behind AI development and deployment is correct, she said, “and the conversation needs to happen at this level at this moment.”
However, she emphasized that most of what is being proposed is not new: “Some of these pillars—‘share prosperity broadly, mitigate risks, democratize access’—have been the framework for every major AI governance conversation since ChatGPT came out in November 2022.
“I worked in the U.S. Senate in 2023–24, and we had nine AI policy forum sessions where all of this was said. I have it in my handwritten notes! All of this was already said, all of it,” she wrote to Fortune in a direct message. “The language around public-private partnerships, AI literacy, and worker voice reads like it came out of a UNESCO or OECD AI policy framework report. The ideas are not wrong. The problem is the gap between naming the solutions and building real mechanisms to achieve them.”
Clearly, the paper’s target audience is not OpenAI’s hundreds of millions of weekly ChatGPT users. Instead, it is the Beltway policymakers who have been pushing for AI regulation (or kicking the can down the road) in various forms ever since ChatGPT was released in November 2022. In that sense, some said it represents an improvement over earlier efforts.
“I found this document to genuinely be a real improvement from previous documents that were even more floaty and high-level,” said Nathan Calvin, vice president of state affairs and general counsel of Encode AI. “I think some of the concrete suggestions around things like auditing or incident reporting and government restrictions on certain uses of AI are good ideas.”
But he also pointed to lobbying efforts led by OpenAI executives with the Leading the Future PAC, which lobbies for AI-industry-friendly policies. Global affairs head Chris Lehane is considered a force behind these efforts, while OpenAI president Greg Brockman has been the PAC’s biggest donor.
“I hope this document signals a move toward more constructive engagement, instead of attacking politicians pushing the very policies OpenAI is now endorsing,” said Calvin, pointing specifically to Leading the Future’s lobbying against New York congressional candidate Alex Bores, author and primary sponsor of the RAISE Act, the New York AI safety and transparency law recently signed by Gov. Kathy Hochul.
Calvin has also accused OpenAI of using intimidation tactics to undermine California’s SB 53, the California Transparency in Frontier Artificial Intelligence Act, while it was still being debated. He alleged as well that OpenAI used its ongoing legal battle with Elon Musk as a pretext to target and intimidate critics, including Encode, which the company implied was secretly funded by Musk.
Still, while OpenAI CEO Sam Altman compared Monday’s slate of policy ideas to the New Deal in an interview with Axios, some say it reads less like FDR-era legislation and more like a Silicon Valley thought experiment that won’t magically turn into action.
For example, Anton Leicht, a visiting scholar with the Carnegie Endowment’s technology and international affairs team, wrote on X that in reality, the ideas are fundamental societal changes and heavy political lifts. “They’re not just going to emerge as an organic alternative,” he wrote. “On that read, this is comms work to provide cover for regulatory nihilism.”
A better version of this, he said, would be to redirect the AI industry’s political funding and lobbying skills to make progress on this kind of policy agenda. However, he said that the “vague nature and timing” of the document “doesn’t make me too optimistic.”