The five schools of thought in the A.I. debate

Jeffrey Sonnenfeld, Paul Romer, Dirk Bergemann, Steven Tian 2023-07-13
The messaging of these five tribes reveals more about the experts' own preconceptions and biases than about the underlying A.I. technology itself.

In May alone, The Wall Street Journal and the New York Times each published over 200 breathless articles pronouncing either a gloomy, catastrophic end for humanity or its salvation, depending on the biases and experience of the experts cited.

We know firsthand just how sensationalist the public discourse surrounding A.I. can be. Much of the ample media coverage of our 134th CEO Summit in late June, which brought together over 200 major CEOs, seized on these alarmist concerns, focusing on how 42% of CEOs said A.I. could potentially destroy humanity within a decade, even though the CEOs had expressed a wide variety of nuanced viewpoints, as we captured previously.

Amidst the deafening cacophony of views in this summer of A.I., across the worlds of business, government, academia, media, technology, and civil society, these experts are often talking right past one another.

Most A.I. expert voices tend to fall into five distinct categories: euphoric true believers, commercial profiteers, curious creators, alarmist activists, and global governistas.

Euphoric true believers: Salvation through systems

The long-forecast moment when machines teach themselves stands in stark contrast to the reality of seven decades of incremental A.I. progress. Amidst such hype, it can be hard to know just how far the opportunity now extends, and where excessively rosy forecasts devolve into fantasy.

Often the most euphoric voices are those who have worked at the frontiers of A.I. the longest, having dedicated their lives to new discoveries at the edge of human knowledge. These A.I. pioneers can hardly be blamed for being "true believers" in the disruptive potential of their technology: they embraced the promise of an emerging field when few others did, and long before it entered the mainstream.

For some of these voices, such as "Godfather of A.I." and Meta chief A.I. scientist Yann LeCun, there is "no question that machines will eventually outsmart people." At the same time, LeCun and others wave away the idea that A.I. might pose a grave threat to humanity as "preposterously ridiculous." Similarly, venture capitalist Marc Andreessen breezily swatted away the "wall of fear-mongering and doomerism" around A.I., arguing that people should stop worrying and "build, build, build."

But single-minded euphoria risks leading these experts to overestimate the impact of their own technology (perhaps intentionally so, but more on that later) and to dismiss its potential downsides and operational challenges.

Indeed, when we surveyed CEOs on whether generative A.I. "will be more transformative than previous seminal technological advancements such as the creation of the internet, the invention of the automobile and the airplane, refrigeration, etc.," a majority answered "No," suggesting there is still broad-based uncertainty over whether A.I. will truly disrupt society as much as some eternal optimists would have us believe.

After all, for every technological advance that truly transforms society, there are plenty more that fizzle after the initial hype. Merely 18 months ago, many enthusiasts were certain that cryptocurrencies were going to change life as we know it, before the blowup of FTX, the ignominious arrest of crypto tycoon Sam Bankman-Fried, and the onset of the "crypto winter."

Commercial profiteers: Selling unanchored hype

In the last six months, it has become nearly impossible to attend a trade show, join a professional association, or take a new product pitch without getting drenched in chatbot sales talk. As the frenzy around A.I. picked up, spurred by the release of ChatGPT, opportunistic yet practical entrepreneurs eager to make a buck have poured into the space.

Amazingly, more capital was invested in generative A.I. startups in the first five months of this year than in all previous years combined; over half of all generative A.I. startups were founded in the last five months alone; and median generative A.I. valuations have doubled this year compared with last.

Perhaps reminiscent of the dot-com bubble, when companies looking for an instant boost in stock price added ".com" to their names, college students are now spinning up overlapping A.I.-focused startups overnight, with some entrepreneurial students raising millions of dollars over spring break for side projects backed by nothing more than concept sheets.

Some of these new A.I. startups barely have coherent products or plans, or are led by founders with little genuine understanding of the underlying technology who are merely selling unanchored hype, though that is apparently no obstacle to raising millions of dollars. While some of these startups may eventually become the bedrock of next-generation A.I. development, many, if not most, will not make it.

These excesses are not confined to the startup space. Many publicly listed A.I. companies, such as Tom Siebel's C3.ai, have seen their stock prices quadruple since the start of the year despite little change in underlying business performance or financial projections, leading some analysts to warn of a "bubble waiting to pop."

A key driver of this year's A.I. commercial craze has been ChatGPT, whose maker, OpenAI, won a $10 billion investment from Microsoft several months back. Microsoft and OpenAI's ties run long and deep, dating back to a partnership between Microsoft's GitHub division and OpenAI that yielded a GitHub coding assistant in 2021. The coding assistant, based on a then-little-noticed OpenAI model called Codex, was likely trained on the huge amount of code available on GitHub. Despite its glitches, perhaps this early prototype helped convince these savvy business leaders to bet early and big on A.I., given what many see as a "once in a lifetime chance" to make huge profits.

None of this is to suggest that all A.I. investment is overwrought. In fact, 71% of the CEOs we surveyed thought their businesses are underinvesting in A.I. But we must ask whether commercial profiteers selling unanchored hype may be crowding out genuinely innovative enterprises in a possibly oversaturated space.

Curious creators: Innovation at the frontiers of knowledge

A.I. innovation is not only taking place across many startups; it is also rife within larger Fortune 500 companies. Many business leaders are enthusiastically but realistically integrating specific applications of A.I. into their companies, as we have extensively documented.

There is no question that this is a uniquely promising time for A.I. development, given recent technological advancements. Much of the recent leap forward for A.I., and for large language models in particular, can be attributed to advances in the scale and capabilities of their underpinnings: the scale of the data available for models and algorithms to train on, the capabilities of the models and algorithms themselves, and the capabilities of the computing hardware those models and algorithms depend on.

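To make the scale point concrete: empirical "scaling law" studies (notably Kaplan et al., 2020) found that a language model's test loss falls roughly as a power law in parameters, data, and compute. The sketch below is our illustration of that shape, not anything from the authors; the two constants are the rough published fits for the parameter-count term alone, used here purely for demonstration.

```python
# Toy sketch of a power-law scaling curve in the spirit of Kaplan et al.
# (2020). Constants are that paper's rough fit for the parameter term only;
# treat them as illustrative assumptions, not a calibrated model.
N_C = 8.8e13      # reference scale from the published fit
ALPHA_N = 0.076   # fitted exponent for parameter count

def loss(n_params: float) -> float:
    """Approximate test loss (nats/token) as a function of model size."""
    return (N_C / n_params) ** ALPHA_N

for n in (1e8, 1e9, 1e10, 1e11):
    # Each 10x jump in parameters trims the loss by a constant factor:
    # scale keeps helping, but with steadily diminishing returns.
    print(f"{n:.0e} params -> loss ~ {loss(n):.3f}")
```

The same power-law form is usually fitted separately to dataset size and compute, which is why the paragraph above lists all three ingredients.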
However, the exponential pace of advances in underlying A.I. technology is unlikely to continue forever. Many point to autonomous vehicles, the first big A.I. bet, as a harbinger of what to expect: astonishingly rapid early progress from harvesting the low-hanging fruit, which creates a frenzy, followed by dramatically slower progress once the toughest challenges must be confronted, such as fine-tuning autopilot glitches to avoid fatal crashes. It is the revenge of Zeno's paradox, as the last mile is often the hardest. With autonomous vehicles, even though we seem perennially halfway toward the goal of cars that drive themselves safely, it is anyone's guess if and when the technology will actually get there.

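The Zeno reference can be written down exactly. If each engineering push closes half of the remaining gap to fully safe operation (a stylized rate we assume purely for illustration), then after n pushes the fraction of the problem solved is

\[
\sum_{k=1}^{n} \frac{1}{2^k} = 1 - \frac{1}{2^n},
\]

which comes arbitrarily close to 1 yet reaches it in no finite number of steps. That is the authors' point about the last mile: steady, visible progress is fully compatible with never quite arriving.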
Furthermore, it is still important to note the technical limits on what A.I. can and cannot do. Because large language models are trained on huge datasets, they can efficiently summarize and disseminate factual knowledge and enable highly efficient search-and-discover. But when it comes to the bold inferential leaps that are the domain of scientists, entrepreneurs, creatives, and other exemplars of human originality, A.I.'s use may be more confined, as it is intrinsically unable to replicate the human emotion, empathy, and inspiration that drive so much of human creativity.

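As a minimal sketch of the "search-and-discover" pattern the authors credit to large language models: a common design embeds every document as a vector and ranks candidates by cosine similarity to an embedded query. Everything below is our simplified assumption (random vectors stand in for a real model's embeddings); it is not a description of any particular product.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_vec: np.ndarray, doc_vecs: list, top_k: int = 3) -> list:
    """Indices of the top_k documents most similar to the query."""
    scores = [cosine_similarity(query_vec, d) for d in doc_vecs]
    return sorted(range(len(doc_vecs)), key=lambda i: -scores[i])[:top_k]

# Random vectors as stand-ins; a real system would obtain these from a
# model's embedding layer (a hypothetical embed() call).
rng = np.random.default_rng(0)
docs = [rng.normal(size=64) for _ in range(10)]
query = rng.normal(size=64)
print(search(query, docs))  # indices of the three nearest documents
```

The inferential-leap limitation in the paragraph above is precisely what this pattern does not address: similarity search retrieves what is already written down; it does not originate new ideas.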
While these curious creators are focused on finding positive applications of A.I., they risk being as naïve as a pre-atomic-bomb Robert Oppenheimer in their narrow focus on problem-solving.

"When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb," the father of the atomic bomb warned in 1954. Wracked by guilt over the horrors his creation unleashed, he became an anti-bomb activist.

Alarmist activists: Advocating unilateral rules

Some alarmist activists, especially highly experienced, even pioneering, disenchanted technologists with strong pragmatic anchoring, loudly warn of the dangers of A.I., from its societal implications and the threat to humanity to non-viable business models and inflated valuations, and many advocate strong restrictions on A.I. to contain those dangers.

For example, A.I. pioneer Geoffrey Hinton has warned of the "existential threat" of A.I., saying ominously that "it is hard to see how you can prevent the bad actors from using it for bad things." Another technologist, early Facebook financial backer Roger McNamee, warned at our CEO Summit that the unit economics of generative A.I. are terrible and that no cash-burning A.I. company has a sustainable business model.

"The harms are really obvious," said McNamee. "There are privacy issues. There are copyright issues. There are disinformation issues... An arms race is underway to get to a monopoly position, where they have control over people and businesses."

Perhaps most prominently, OpenAI CEO Sam Altman and other A.I. technologists from Google, Microsoft, and other A.I. leaders recently issued an open letter warning that A.I. poses an extinction risk to humanity on par with nuclear war, contending that "mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

However, it can be difficult to discern whether these industry alarmists are driven by genuine anticipation of threats to humanity or by other motives. It is perhaps coincidental that speculation about how A.I. poses an existential threat is an extremely effective way to drive attention. In our own experience, media coverage trumpeting CEO alarmism on A.I. at our recent CEO Summit far overshadowed our more nuanced primer on how CEOs are actually integrating A.I. into their businesses. Trumpeting alarmism over A.I. also happens to be an effective way to generate hype over what A.I. is potentially capable of, and thus greater investment and investor interest.

Altman has already been very effective at generating public interest in what OpenAI is doing, most obviously by initially giving the public free, unfettered access to ChatGPT at a massive financial loss. Meanwhile, his nonchalant explanation of the dangerous security breach in the software OpenAI used to connect people to ChatGPT raised questions about whether industry alarmists' actions match their words.

Global governistas: Balance through guidelines

Less strident on A.I. than the alarmist activists (but no less wary) are the global governistas, who view unilateral restraints on A.I. as inadequate and as harmful to national security. Instead, they call for a balanced international playing field, aware that hostile nations can continue exploiting A.I. along dangerous paths unless there are agreements akin to the global nuclear non-proliferation pacts.

These voices advocate guidelines, if not regulation, around the responsible use of A.I. At our event, Senator Richard Blumenthal, Speaker Emerita Nancy Pelosi, Silicon Valley Congressman Ro Khanna, and other legislative leaders emphasized the importance of legislative guardrails and safeguards that encourage innovation while avoiding large-scale societal harms. Some point to aviation regulation as an example to follow, with two different agencies overseeing flight safety: the FAA writes the rules, but the NTSB establishes the facts, two very different jobs. While rule writers have to make tradeoffs and compromises, fact-finders must be relentless and uncompromising in pursuit of truth. Given how A.I. may exacerbate the proliferation of unreliable information across complex systems, regulatory fact-finding could be just as important as rule-setting, if not more so.

Similarly, global governistas such as renowned economist Lawrence Summers and biographer and media titan Walter Isaacson have each told us that their major concern is society's lack of preparedness for the changes A.I. will drive. They anticipate a historic workforce disruption among the formerly most vocal and powerful elite workers in society.

Walter Isaacson argues that A.I. will have its greatest displacement effect on professional "knowledge workers," whose monopoly on esoteric knowledge will now be challenged by generative A.I. capable of regurgitating even the most obscure factoids far beyond the rote memory and recall capacity of any human being, though at the same time, Isaacson notes, previous technological innovations have enhanced rather than reduced human employment. Similarly, famed MIT economist Daron Acemoglu worries that A.I. could depress workers' wages and exacerbate inequality. For these governistas, the notion that A.I. will enslave humans or drive them to extinction is absurd, an unwelcome distraction from the real social costs A.I. could impose.

Even some governistas who are skeptical of direct government regulation would prefer to see guardrails put in place, albeit by the private sector. Eric Schmidt, for example, has argued that governments currently lack the expertise to regulate A.I. and should let the technology companies self-regulate. Such self-regulation, however, harkens back to the industry-captured regulation of the Gilded Age, when the Interstate Commerce Commission, the Federal Communications Commission, and the Civil Aeronautics Board often tilted regulation intended to serve the public interest toward industry giants, blocking new rival entrants and protecting established players from what AT&T founder Theodore Vail labeled "destructive competition."

Other governistas point out that some of the problems A.I. may create cannot be solved through regulation alone. For example, they note that A.I. systems can fool people into believing they reliably offer up facts, to the point where many may abdicate their individual responsibility for checking what is trustworthy and rely totally on A.I. systems, even though versions of A.I. already kill people, as in autopilot-driven car crashes and careless medical malpractice.

The messaging of these five tribes reveals more about the experts' own preconceptions and biases than about the underlying A.I. technology itself. Nevertheless, these five schools of thought are worth investigating for nuggets of genuine intelligence and insight amidst the artificial-intelligence cacophony. (Fortune China)

Jeffrey Sonnenfeld is the Lester Crown Professor in Management Practice and Senior Associate Dean at the Yale School of Management. He was named "Management Professor of the Year" by Poets & Quants magazine.

Paul Romer, University Professor at Boston College, was a co-recipient of the 2018 Nobel Prize in Economic Sciences.

Dirk Bergemann is the Campbell Professor of Economics at Yale University, with secondary appointments as Professor of Computer Science and Professor of Finance. He is the founding director of the Yale Center for Algorithm, Data, and Market Design.

Steven Tian is the director of research at the Yale Chief Executive Leadership Institute and a former quantitative investment analyst with the Rockefeller Family Office.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

Translator: Wang Fang (Zhonghuiyan)
