These 13 A.I. innovators are deciding how the technology will change your life

ANDREA GUZMAN 2023-06-16
Artificial intelligence is changing business and society faster than anyone expected.

From left: Rumman Chowdhury, Ali Ghodsi, and Fei-Fei Li. COURTESY OF RUMMAN CHOWDHURY; DATABRICKS; MATT WINKELMEYER—GETTY IMAGES FOR WIRED25

Like a spaceship full of aliens landing on Earth, artificial intelligence technology seems to have come out of nowhere and instantly changed everything.

From A.I.-generated music that expertly mimics your favorite singer to virtual romantic partners, artificial intelligence technology is mesmerizing, scary, and increasingly accessible.

Businesses aren’t wasting any time pumping money into the technology. In addition to Microsoft’s $13 billion bet on ChatGPT-maker OpenAI, startups like Anthropic, Cohere, Adept AI, Character.AI, and Runway have raised hundreds of millions of dollars apiece in recent months.

As with much of the tech business, the people responsible for the innovation in A.I. are as central to the story as the technology itself. The names of today’s A.I. innovators aren’t as familiar as the established members of the tech industry pantheon, but the influence of these computer scientists and technologists is quickly spreading through their work.

Given how profound and potentially risky their work’s impact on society could be, many of these A.I. innovators have strongly held—and often conflicting—opinions about the technology’s future, its power, and its dangers.

Fortune took a look at some of the key figures setting the A.I. agenda through their work and their viewpoints. Some work at big companies, some at startups, and some in academia; some have been toiling for years in specialized branches of A.I., while others are more recent converts. If they have one thing in common, it’s their unique ability to influence how this powerful technology affects the world. Here, listed in no particular order, are 13 of today’s most important A.I. innovators.

****

Daniela Amodei

Cofounder, Anthropic

“It kind of blows my mind that A.I., given the potential reach that it could have, is still such a largely unregulated area.”

Daniela Amodei and her brother Dario quit their jobs at OpenAI to cofound Anthropic at the end of 2020, reportedly because of concerns that OpenAI’s deal with Microsoft would increase pressure to release products quickly at the expense of safety protocols.

The company’s chatbot, called Claude, is similar to OpenAI’s ChatGPT but is trained with a technique referred to as “constitutional AI,” which sets principles like choosing responses that are, according to the company, the “least racist and sexist” and encouraging of life and liberty. The approach is based on what Amodei, 35, refers to as Anthropic’s triple H framework for A.I. research: helpful, honest, and harmless.
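
Anthropic hasn’t published every detail of its training recipe, but the core of the published constitutional-AI idea is a critique-and-revision loop: the model drafts an answer, critiques the draft against written principles, revises it, and the revised answers become fine-tuning data. Below is a minimal sketch of that loop; the `generate` stub stands in for any chat model, and the two-line constitution is illustrative rather than Anthropic’s actual list.

```python
# Minimal sketch of a constitutional-AI-style critique-and-revision loop.
# `generate` is a placeholder for any chat model; Anthropic's real pipeline
# and constitution are far more elaborate.

CONSTITUTION = [
    "Choose the response that is least racist and sexist.",
    "Choose the response most encouraging of life and liberty.",
]

def generate(prompt: str) -> str:
    # Stub so the sketch runs end to end; swap in a real model call here.
    return f"[model output for: {prompt[:40]}...]"

def constitutional_revision(user_prompt: str) -> str:
    draft = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Principle: {principle}\nResponse: {draft}\n"
            "Point out any way the response violates the principle."
        )
        draft = generate(
            f"Response: {draft}\nCritique: {critique}\n"
            "Rewrite the response to address the critique."
        )
    return draft  # revised (prompt, response) pairs become fine-tuning data

print(constitutional_revision("Which group makes better engineers?"))
```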

“It kind of blows my mind that A.I., given the potential reach that it could have, is still such a largely unregulated area,” Amodei said in an interview last year, expressing hope that standard setting organizations, industry groups, and trade associations will step into the breach and provide guidance on what a safe model looks like. “We need all those actors working together to get to the positive outcomes we’re all hoping for.”

In addition to developing a “next-gen algorithm” for its Claude chatbot, Anthropic has been hard at work raising capital. It recently raised $450 million from backers including Google, Salesforce, and Zoom Ventures (less glamorously, an earlier $580 million round was led by disgraced crypto entrepreneur Sam Bankman-Fried’s Alameda Research Ventures; Anthropic has not said whether it will return the money).

****

Yann LeCun

Chief A.I. scientist, Meta

“The upcoming AI systems are going to be an amplification of human intelligence in the way that mechanical machines have been an amplification of physical strength. They’re not going to be a replacement.”

“Prophecies of AI-fueled doom are nothing more than a new form of obscurantism,” says the French-born LeCun in a preview for an upcoming debate in which he’ll square off against an MIT researcher about whether A.I. poses an existential threat to humanity.

Outspoken in his view that A.I. will amplify human intelligence, LeCun, 62, is widely respected as one of the leading experts in the field of neural networks, which have allowed for breakthroughs in computer vision and speech recognition. His work on a foundational neural network design known as the convolutional neural network, and on broadening the uses of such networks, earned him the 2018 Turing Award, considered the Nobel Prize of computing, alongside fellow deep learning pioneers Geoff Hinton and Yoshua Bengio.
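
For readers new to the term, a convolutional layer slides a set of small learned filters across an image, reusing the same weights at every position; that weight sharing is a big part of why the design works so well for vision. A toy illustration of the basic building block in PyTorch (not LeCun’s original LeNet):

```python
import torch
import torch.nn as nn

# One convolutional layer: 16 learned 3x3 filters slide across an RGB image,
# applying the same weights at every spatial position.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)

image = torch.randn(1, 3, 32, 32)   # a batch of one 32x32 RGB image
features = conv(image)              # -> torch.Size([1, 16, 32, 32])
print(features.shape)
```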

Needless to say, LeCun was not among the more than 200 signatories of the recent warning that A.I. poses an extinction level risk to humanity.

A longtime computer science professor at New York University, LeCun joined Facebook (now Meta) in 2013 and now oversees the $700 billion company’s various artificial intelligence efforts. That hasn’t diminished his appetite for engaging in the major debates about A.I., such as the concern that the technology will take people’s jobs. In a Q&A for Martin Ford’s 2018 book Architects of Intelligence: The Truth About AI from the People Building It, LeCun took issue with a famous prediction of Hinton’s that radiologists, for example, would be out of a job thanks to A.I.; rather, he explained, it would free up radiologists’ time to spend with patients. He went on to say that he imagines some activities will become more expensive, like eating at a restaurant where a waiter serves food that a human cook prepared. “The value of things is going to change, with more value placed on human experience and less to things that are automated,” he told Ford.

****

David Luan

CEO and cofounder, Adept

“The pace of progress in AI is astounding. First text generation, then image generation, now computer use.”

Before cofounding Adept in 2022, Luan worked at some of the most important A.I. companies, including OpenAI and Google (he also did a brief stint as the director of A.I. at Axom, the maker of the Taser gun and police body cameras). He says the current moment in A.I. is the one he’s most excited about. “We’ve entered the industrialization age of AI. It’s now time to build factories,” Luan said at the Cerebral Valley A.I. Summit earlier this year.

The idea behind Adept is to provide people with an “AI teammate” that can perform computer-based tasks—for example, building a financial model on a spreadsheet—with a few simple text commands. In March, the company raised $350 million in funding at a valuation pegged at more than $1 billion by Forbes.

Luan, 31, said that he spends a lot of time thinking about the concern that A.I. could replace people’s jobs, but that for the “knowledge workers” that generative A.I. tools like Adept are focused on, the fears are overblown. “Instead of spending like 30 hours of your week updating Salesforce, you spend 1% of your week asking Adept to just do that for you and you spend 99% of the time talking to customers,” Luan said at the Cerebral Valley A.I. Summit.

****

Emad Mostaque

CEO, Stability AI

“If we have agents that are more capable than us that we cannot control that are going across the internet and [are] hooked up and they achieve a level of automation, what does that mean?”

Mostaque was born in Jordan but grew up in Bangladesh and the UK, where he earned his bachelor’s degree in computer science at Oxford University in 2005. Before founding Stability AI in 2020, he spent more than a decade working in hedge funds, according to the New York Times. The stint in finance seems to have provided a nice cushion to start Stability AI, which he reportedly funded himself at first, later raising money from investors including Coatue and Lightspeed Venture Partners.

The company helped to create the text-to-image model Stable Diffusion, which has been used to generate images that pay little heed to intellectual property rights or to concerns about depicting violence (the product, like some other A.I. tools, has also been criticized for amplifying racial and gender bias). For Mostaque, the priority is to keep the model open source and without guardrails that restrict what content the model can generate, although, in an effort to make Stability’s A.I. more commercially attractive, he did later train a version of Stable Diffusion on a dataset that had been filtered to remove pornographic images. “We trust people, and we trust the community,” he told the Times.

That attitude (as well as allegations that Mostaque has exaggerated some of his accomplishments, as recently detailed by Forbes) has drawn backlash from others in the A.I. community, public officials, and firms like Getty Images, which sued Stability AI for copyright infringement in February, alleging that the company copied 12 million images to train its A.I. model without a legal basis for using them.

Yet Stability AI’s tools have emerged as among the most popular and well-known representatives of the field of generative A.I. And Mostaque, aged 40 and based in London, defies easy categorization. In March, he was among a group who signed an open letter calling for a pause in A.I. development for anything more advanced than GPT-4, the A.I. chatbot from OpenAI. His perspective on A.I. advancement seems to swing between two extremes: he recently commented that in the worst case the technology could control humanity, while on another occasion he said that A.I. will be disinterested in people.

“Because we can’t conceive of something more capable than us, but we all know people more capable than us. So, my personal belief is it will be like that movie Her with Scarlett Johansson and Joaquin Phoenix: Humans are a bit boring, and it’ll be like, ‘Goodbye’ and ‘You’re kind of boring.’”

****

Fei-Fei Li

Co-director, Stanford’s Institute for Human-Centered AI

“It still feels surreal to be born into this time of history and be in the middle of this technology.”

When Li immigrated from China to the U.S. with her family at 16, she says she had to learn English from scratch while working to get good grades. Today, the co-director of Stanford’s Institute for Human-Centered AI is considered one of the leading lights on the ethical use of A.I., through writings like “How to make A.I. that’s good for people,” as well as an advocate for diversity in the A.I. field.

Early in her career, Li built ImageNet, a large-scale dataset that has contributed to developments in deep learning and A.I. Now, at Stanford, she’s been researching “ambient intelligence,” which uses A.I. to monitor activity at homes and hospitals. She discussed her work and how bias is critical to consider during Fortune’s Brainstorm AI conference in December.

“I work a lot in health care. It’s very obvious that if our data comes from certain populations or socio-economic classes, it will have a pretty profound downstream impact,” she said.

According to Li, 47, Stanford now conducts an ethics and society review process for A.I. research projects. “It gets us thinking about how to design fairness, design privacy awareness, and design human well being and dignity into our technology.”

To boost inclusion in the A.I. field, Li co-founded a non-profit known as AI4ALL, which promotes diversity in A.I. education.

One note of controversy in Li’s career occurred during her stint as chief scientist of AI/ML at Google Cloud, when a Google contract to provide A.I. tech to the Pentagon caused an uproar among some employees in 2018. While the contract was not Li’s doing, critics felt her association with it—particularly some of her comments in leaked emails about how to portray the contract to the public—was at odds with her work as an advocate of ethical A.I.

****

Ali Ghodsi

CEO, Databricks

“We should embrace it, because it is here to stay. And I do think it’s going to change everything, and I think it’s going to be mostly positive.”

Ali Ghodsi straddles academia and business, with a foot in each world as an adjunct professor at UC Berkeley and the cofounder and CEO of Databricks. One principle that’s central to the Swedish-Iranian tech exec is his commitment to open source development.

Ghodsi’s work on the open source data processing tool Apache Spark provided the foundation for Databricks, which is valued at $38 billion. In April, Databricks released Dolly 2.0, an open source rival to ChatGPT that uses a question-and-answer instruction set created entirely from interactions among Databricks’ 5,000 employees. This means that any company can weave Dolly 2.0 into its own commercial products and services without any cap on usage.

Dolly is more proof of concept than viable product—the model is prone to errors, hallucinations and churning out toxic content. Dolly’s importance, however, is that it showed that A.I. models can be much smaller and cheaper to train and run than the massive proprietary large language models that underpin OpenAI’s ChatGPT or Anthropic’s Claude. And Ghodsi defends making Dolly so freely and easily accessible. “We’re committed to developing AI safely and responsibly and believe as an industry, we’re moving in the right direction by opening up models, like Dolly, for the community to collaborate on,” Ghodsi told TechCrunch in April.
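
Because the weights and instruction data are open, trying Dolly locally takes only a few lines with the Hugging Face transformers library. The sketch below follows the public release’s documented usage; the exact model id (databricks/dolly-v2-3b, the smallest released checkpoint) and the custom-pipeline flag are details to verify against the model card.

```python
import torch
from transformers import pipeline

# Load Dolly v2 with its custom instruction-following pipeline
# (trust_remote_code pulls in code shipped alongside the weights).
generate_text = pipeline(
    model="databricks/dolly-v2-3b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

print(generate_text("Explain what an open source language model is."))
```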

While generative A.I. is getting a lot of the attention right now, Ghodsi, 45, believes that other types of artificial intelligence, particularly A.I. for data analysis, will have a profound effect across industries. “I think this is just the very beginning, and we are just scratching the surface on what A.I. and data analytics can do,” he told Fortune in March.

****

Sam Altman

CEO, OpenAI

“If someone does crack the code and builds a superintelligence, however you want to define that, probably some global rules on that are appropriate.”

Altman founded OpenAI with Elon Musk, Ilya Sutskever, and Greg Brockman in 2015, out of a fear that Google would become too powerful and control A.I.

Since then, OpenAI has turned into one of the most influential companies in the A.I. arena and emerged as the standard bearer for “generative A.I.”: Its ChatGPT tool is the fastest growing app of all time, having garnered 100 million monthly active users just two months after its launch. DALL-E 2, another OpenAI product, is one of the most popular text-to-image generators, capable of producing high-resolution images that have depth-of-field effects with shadows, shading, and reflections.
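
For developers, both tools are reached through OpenAI’s API rather than open weights. A sketch of a DALL-E 2 request using the 2023-era (pre-1.0) openai Python package; the key is a placeholder, and the response shape is as documented at the time.

```python
import openai  # the pre-1.0 (2023-era) openai package

openai.api_key = "sk-..."  # placeholder; set your own API key

# Ask the DALL-E endpoint for one 1024x1024 image.
response = openai.Image.create(
    prompt="a spaceship full of aliens landing on Earth, photorealistic",
    n=1,
    size="1024x1024",
)

print(response["data"][0]["url"])  # temporary URL of the generated image
```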

While he’s not an A.I. researcher or a computer scientist, Altman, 38, sees the tools as a stepping stone on a mission he shares with others in the field: developing a computer superintelligence known as artificial general intelligence, or AGI. He believes that “AGI is probably necessary for humanity to survive,” but has suggested he’ll be cautious as he works toward it.

Altman’s quest for AGI has not blinded him to the risks: he was among the most prominent names to sign the Center for AI Safety’s warning about A.I.’s threat to humanity. At a hearing before U.S. senators in mid-May, Altman called for A.I. regulation, saying rules should be created to incentivize safety “while ensuring that people are able to access the technology’s benefits.” (Some critics speculated that the regulation he called for could also create hurdles for a growing crop of open source competitors to OpenAI.)

A former president of startup incubator Y Combinator, Altman is skilled at raising money, according to a profile by Fortune’s Jeremy Kahn. That knack appears to have paid off big time with OpenAI’s $13 billion alliance with Microsoft.

While Musk is no longer affiliated with OpenAI and is reportedly launching a rival A.I. lab, Altman still cites Musk as a mentor who taught him to push the limits on “hard R&D and hard technology.” He has no plans to follow Musk on a mission to Mars, however: “I have no desire to go live on Mars, it sounds horrible. But I’m happy other people do.”

****

Margaret Mitchell

Chief ethics scientist, Hugging Face

“People say or think, ‘You don’t program, you don’t know about statistics, you are not as important,’ and it’s often not until I start talking about things technically that people take me seriously which is unfortunate. There is a massive cultural barrier in ML.”

Margaret Mitchell’s interest in A.I. bias began after a couple of troubling instances while working at Microsoft. The data she worked with for the company’s Seeing AI assistance technology, for example, included odd descriptions of people’s race, she recalled in an interview last year. Another time, she fed a system images of an explosion, and the output described the wreckage as beautiful.

She realized it wouldn’t satisfy her to simply make A.I. systems perform better on benchmarks. “I wanted to fundamentally shift how we were looking at these problems, how we were approaching data and analysis of data, how we were evaluating and all of the factors we were leaving out with these straightforward pipelines,” she said.

That mission has come at a personal cost. Mitchell made headlines in 2021 when Google fired her and Timnit Gebru from their jobs as co-heads of the company’s A.I. ethics unit. The pair had published a paper detailing risks of large language models, including the environmental cost and racist and sexist language being funneled into training data. They were also outspoken about insufficient diversity and inclusion efforts at Google and clashed with management over company policies.

Mitchell and Gebru had already achieved significant breakthroughs in the A.I. ethics field, like publishing a paper with multiple other researchers on so-called “model cards,” which encourage more transparency on models by providing a way to document performance and identify limitations and biases.
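
The paper’s proposal is essentially a standard set of sections that ship with a model. A rough sketch of those sections as a Python structure, with headings paraphrased from the paper and contents that are purely illustrative:

```python
# Section headings paraphrased from "Model Cards for Model Reporting"
# (Mitchell, Gebru, et al.); the field contents are illustrative only.
model_card = {
    "model_details": {"name": "example-classifier", "version": "1.0"},
    "intended_use": "Flag possibly toxic comments for human review; "
                    "not for fully automated moderation.",
    "factors": ["language variety", "demographic groups referenced in text"],
    "metrics": {"f1": None, "false_positive_rate_by_group": None},
    "training_data": "Source, collection process, and known gaps.",
    "ethical_considerations": "Known biases, misuse risks, and mitigations.",
    "caveats_and_recommendations": "Out-of-scope uses and monitoring advice.",
}

for section, content in model_card.items():
    print(f"{section}: {content}")
```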

At Hugging Face, an open-source platform provider of machine learning tech she joined after Google, Mitchell has worked intensely on assistive tech and deep learning, and focuses on coding to help build protocols for matters like ethical A.I. research and inclusive hiring.

Despite her background as a researcher and scientist, Mitchell says her focus on ethics leads people to assume she doesn’t know how to program. “It’s often not until I start talking about things technically that people take me seriously which is unfortunate,” Mitchell said on a Hugging Face blog last year.

****

Mustafa Suleyman

Cofounder and CEO, Inflection AI

“Unquestionably, many of the tasks in white-collar land will look very different in the next five to 10 years.”

Known to friends and colleagues as “Moose,” Suleyman previously worked at Google as VP of AI Products and AI Policy, and co-founded DeepMind, a research lab that was bought by Google in 2014. Since his time at Google, Suleyman has worked for VC firm Greylock and launched a machine learning startup known as Inflection AI.

Earlier this month, Inflection released its first product, a chatbot named Pi for “personal intelligence.” The current version of the bot can remember conversations with users and offer empathetic responses. Eventually, Suleyman says it will be capable of serving as a personal “Chief of Staff” that can book restaurant reservations and handle other daily tasks.

Suleyman, 38, is enthusiastic about what language we’ll start using to engage with computers. For Wired, he wrote that we’ll someday have “truly fluent, conversational interactions with all our devices,” which will redefine human-machine interaction.

Suleyman envisions a future where A.I. will cause white collar work to look very different, but also sees potential for it to handle big challenges. On the latter, he thinks the technology can lower the cost of materials for housing and infrastructure and help allocate resources like clean water. Still, he’s a proponent of avoiding harms on the way, writing a warning in the Economist in 2018:

“From the spread of facial recognition in drones to biased predictive policing, the risk is that individual and collective rights are left by the wayside in the race for technological advantage.”

****

Sara Hooker

Director, Cohere For AI

“Part of what I think is going to be really important, especially when you think about things like misinformation or the ability to generate texts that might be used in nefarious ways, is we need better traceability.”

A former researcher at Google Brain, Sara Hooker reunited with her ex-colleagues last year when she joined Cohere, a Toronto startup dedicated to large language models and founded by Google Brain alums. It’s an arms-length reunion, though: Hooker is heading up a non-profit research lab called Cohere for AI that’s funded by Cohere but operates independently.

Cohere for AI describes its mission as “solving complex machine learning problems.” In practice that means everything from research papers aimed at making LLMs safer and more efficient to the Scholars Program, which seeks to broaden the pool of people involved in A.I. by recruiting talent from all over the world.

One of the criteria to be eligible for the Scholars Program is that a person has not previously published a research paper on machine learning.

“When I talk about improving geographic representation, people assume this is a cost we are taking on. They think we are sacrificing progress,” Hooker says. “It is completely the opposite.” Hooker would know. She grew up in Africa, and helped establish Google’s research lab in Ghana.

Hooker also pushes for ML models and algorithms that are accurate and explainable. Speaking to Global News recently, Hooker shared her thoughts on “model traceability,” or the ability to trace when a text is generated by a model instead of a human, and how improvements should be made to it. “Part of what I think is going to be really important, especially when you think about things like misinformation or the ability to generate texts that might be used in nefarious ways, is we need better traceability,” she said.

And with Cohere having recently raised $270 million in funding from Nvidia, Oracle, and Salesforce Ventures, Hooker’s non-profit lab is tied to a startup with some marquee backers.

****

Rumman Chowdhury

Scientist at Parity Consulting and Responsible AI Fellow, Harvard University’s Berkman Klein Center

“There’s rarely the fundamental question asked: should this thing even exist?”

Chowdhury’s career in A.I. kicked off as a leader for responsible AI at Accenture, where she oversaw design of an algorithmic tool to identify and mitigate bias in AI systems. She left to found an algorithmic auditing company known as Parity AI, and it was later acquired by Twitter. There, she directed the ML Ethics, Transparency, and Accountability team, which was a group of researchers and engineers that worked to mitigate algorithmic harms on the social platform, something she says became challenging after Twitter was acquired by Elon Musk.

She played a leading role among a group of top A.I. developers who got support from the White House to put on a generative A.I. “red teaming” event that aims to improve security by evaluating models from Anthropic, Google, Hugging Face, OpenAI, and others for quirks and limitations during the DEF CON 31 cybersecurity conference in August.

As another A.I. expert on the regulation train, Chowdhury, 43, wrote in Wired recently that there ought to be a generative A.I. global governance body. She pointed to Facebook’s Oversight Board, which is an interdisciplinary global group focused on accountability, as an example of what the body could look like.

“An organization like this should be a consolidated ongoing effort with expert advisory and collaborations, like the IAEA, rather than a secondary project for people with other full-time jobs,” Chowdhury wrote. “Like the Facebook Oversight Board, it should receive advisory input and guidance from industry, but have the capacity to make independent binding decisions that companies must comply with.”

She’s also pushed for what she calls integrated bias assessments and audits in the product development process, which would allow inspection of something that’s already been built but would also put mechanisms in place from the early stages to decide whether something should make it past the idea phase.

“There’s rarely the fundamental question asked: should this thing even exist?” she said during a panel discussion on responsible A.I.

****

Cristóbal Valenzuela

Cofounder and CEO, Runway ML

“The history of generative art is not new. The idea of involving an autonomous system in the art-making process has been around for decades outside of the recent AI boom. What’s different is that now we are entering a synthetic age.”

Valenzuela got into A.I. after learning about neural networks through the work of artist and programmer Gene Kogan. He became so fascinated that he left his home in Chile to become a researcher at NYU Tisch’s Interactive Telecommunications Program.

It was there that the idea for Runway came to him as he worked to make machine learning models accessible to artists. “I started brainstorming ideas around that and then I realized that ‘a platform for models’ already has a name: a runway,” he told cloud computing company Paperspace.

While many artists have embraced A.I., using tools like Runway’s for visual effects in movies or creating photographs, the 33-year-old Valenzuela wants even more artists to embrace A.I.

So, the company helped develop the text-to-image model Stable Diffusion. It has also notched feats of its own with Gen-1, an A.I. video-editing model that can transform existing video fed in by users. Gen-2 followed this spring, giving users the chance to generate videos from text. Entertainers have taken notice: Weezer used Runway’s models to make a tour promo video for the rock band, and another artist used Gen-2 to make a short film. Tools like Runway’s have gotten buzz for their potential to change how Hollywood approaches filmmaking.
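
Runway’s role in Stable Diffusion is also why one of the best-known public checkpoints carried its name. A sketch of text-to-image generation with Hugging Face’s diffusers library, assuming the runwayml/stable-diffusion-v1-5 checkpoint as it was published and a CUDA-capable GPU:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Stable Diffusion 1.5 checkpoint Runway published on
# Hugging Face, in half precision to fit consumer GPUs.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("film still of a spaceship landing in a desert at dusk").images[0]
image.save("still.png")
```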

In a talk with MIT, he said the company is working on helping artists find the use cases for their work and on reassuring them that their jobs won’t be taken. He also argues that in many cases we’re already using A.I. for artwork even if we don’t realize it, since a photo taken on an iPhone can involve multiple neural networks to optimize the image.

“It’s just another technology that will help you do things in a better way and express you better,” he said.

****

Demis Hassabis

CEO, Google DeepMind

“At DeepMind, we’re quite different from other teams in that we’re pretty focused around this one moonshot goal of AGI. We’re organized around a long-term roadmap, which is our neuroscience based thesis, which talks about what intelligence is and what’s required to get there.”

With a PhD in cognitive neuroscience from University College London, Hassabis made waves by cofounding the neural networking startup DeepMind more than a decade ago. The company, which was acquired by Google in 2014, aims to build powerful computer networks that mimic the way the human brain works. In April, Hassabis took command of Google’s overall A.I. efforts, after a reorg that merged the internet giant’s various A.I. teams.

Hassabis says he got into programming through his love of chess. The former child chess prodigy even bought his first computer with his winnings from chess tournaments. Now, he uses the problem solving and planning required by the game, plus his neuroscience background, in his work on A.I., which he believes is going to be “the most beneficial thing to humanity ever.”

He thinks AGI could happen within a decade, and describes DeepMind as neuroscience-inspired A.I. and one of the best ways to address complex questions about the brain. “We could start shedding light on some of the profound mysteries of the mind like the nature of consciousness, creativity, and dreaming,” he told Ford. And when it comes to whether machine consciousness is possible, he says he’s open minded about that, but thinks “it could well turn out that there’s something special about biological systems” that machines couldn’t match.

In 2016, DeepMind’s A.I. system AlphaGo beat Lee Sedol, the world’s top human player of the strategy game Go, in which players place stones on a 19-by-19 grid, in a best-of-five match viewed by more than 200 million people online. Lee’s loss to the system was especially shocking since experts had said such an outcome wasn’t expected for another decade.

Moments like that have made DeepMind the leading face of AGI. But it’s not all games. DeepMind is behind AlphaFold 2, an A.I. system that has predicted the 3-D structures of almost every known protein, and it has made these predictions available in a public database. It’s a breakthrough that could accelerate drug development, and it earned Hassabis and senior staff research scientist John Jumper a $3 million Breakthrough Prize in Life Sciences. Hassabis also co-founded and runs a new Alphabet-owned company, Isomorphic Labs, dedicated to using A.I. to help in drug discovery.
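
The database is public in a very practical sense: individual predicted structures can be fetched by UniProt accession. A sketch, with the URL pattern and “model_v4” suffix being assumptions based on the database’s file naming at the time of writing:

```python
import urllib.request

# Fetch one AlphaFold-predicted structure from alphafold.ebi.ac.uk.
# The URL pattern and version suffix are assumptions; check the site.
uniprot_id = "P69905"  # human hemoglobin subunit alpha
url = f"https://alphafold.ebi.ac.uk/files/AF-{uniprot_id}-F1-model_v4.pdb"
urllib.request.urlretrieve(url, f"AF-{uniprot_id}.pdb")
print(f"saved AF-{uniprot_id}.pdb")
```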
