Which nerve at Google did a departing senior A.I. researcher touch?

JEREMY KAHN 2020-12-10
The answer may already be emerging: Google has a great deal invested in the success of this particular technology.

The recent departure of a respected Google artificial intelligence researcher has raised questions about whether the company was trying to conceal ethical concerns over a key piece of A.I. technology.

The departure of the researcher, Timnit Gebru, came after Google had asked her to withdraw a research paper she had coauthored about the ethics of large language models. These models, created by sifting through huge libraries of text, help create search engines and digital assistants that can better understand and respond to users.

Google has declined to comment about Gebru’s departure, but it has referred reporters to an email to staff written by Jeff Dean, the senior vice president in charge of Google’s A.I. research division, that was leaked to the tech newsletter Platformer. In the email Dean said that the study in question, which Gebru had coauthored with four other Google scientists and a University of Washington researcher, didn’t meet the company’s standards.

That position, however, has been disputed by both Gebru and members of the A.I. ethics team she formerly co-led.

More than 5,300 people, including over 2,200 Google employees, have now signed an open letter protesting Google’s treatment of Gebru and demanding that the company explain itself.

On Wednesday, Sundar Pichai, Google’s chief executive officer, told staff he would investigate the circumstances under which Gebru left the company and would work to restore trust, according to a report from news service Axios, which obtained Pichai’s memo to Google employees.

But why might Google have been particularly upset with Gebru and her coauthors questioning the ethics of large language models? Well, as it turns out, Google has quite a lot invested in the success of this particular technology.

Beneath the hood of all large language models is a special kind of neural network, A.I. software loosely based on the human brain, that was pioneered by Google researchers in 2017. Called a Transformer, it has since been adopted industrywide for a variety of different uses in both language and vision tasks.
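
To make the idea concrete, here is a minimal sketch of that building block using PyTorch's built-in Transformer encoder. The framework and the layer sizes are illustrative assumptions; the article describes the architecture itself, not any particular implementation.

```python
# A toy Transformer encoder: self-attention plus a feed-forward network,
# stacked several layers deep. All dimensions here are arbitrary examples.
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)  # one attention block
encoder = nn.TransformerEncoder(layer, num_layers=6)      # a stack of six

# A batch of two 10-token sequences, each token a 512-dimensional vector.
tokens = torch.rand(10, 2, 512)  # (sequence length, batch size, embedding dim)
print(encoder(tokens).shape)     # torch.Size([10, 2, 512])
```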

The statistical models that these large language algorithms build are enormous, taking in hundreds of millions, or even hundreds of billions, of variables. In this way, they get very good at being able to accurately predict a missing word in a sentence. But it turns out that along the way, they pick up other skills too, like being able to answer questions about a text, summarize key facts about a document, or figure out which pronoun refers to which person in a passage. These things sound simple, but previous language software had to be trained specifically for each one of these skills, and even then it often wasn’t that good.
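
As a hedged illustration of that "predict the missing word" objective, the sketch below uses the open-source Hugging Face transformers library; the library and the model checkpoint are assumptions chosen for demonstration, since the article names no toolkit.

```python
# Ask a pretrained BERT model to fill in a masked word. Predicting masked
# words is the pretraining task these large language models learn from text.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for prediction in fill_mask("The capital of France is [MASK]."):
    # Each prediction carries a candidate word and the model's confidence.
    print(prediction["token_str"], round(prediction["score"], 3))
```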

The biggest of these large language models can do some other nifty things as well: GPT-3, a large language model created by San Francisco A.I. company OpenAI, encompasses some 175 billion variables and can write long passages of coherent text from a simple human prompt. So imagine writing just a headline and a first sentence for a blog post, with GPT-3 then composing the rest. OpenAI has licensed GPT-3 to a number of technology startups, plus Microsoft, to power their own services, which include one company using the software to enable users to generate full emails from just a few bullet points.
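
As a rough sketch of that prompt-completion workflow, the call below uses OpenAI's Python client as it existed around the time of this article; the engine name, parameters, and placeholder key are assumptions based on the public API of that era, not details drawn from the article.

```python
# Give GPT-3 a headline and an opening line; it drafts the continuation.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

prompt = (
    "Why language models matter\n\n"
    "Over the past three years, language models have quietly become"
)

response = openai.Completion.create(
    engine="davinci",   # the largest GPT-3 engine offered at the time
    prompt=prompt,      # the headline and first words of a blog post
    max_tokens=200,     # let the model write the next few paragraphs
    temperature=0.7,    # moderate randomness for readable prose
)
print(response.choices[0].text)
```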

Google has its own large language model, called BERT, that it has used to help power search results in several languages including English. Other companies are also using BERT to build their own language processing software.

BERT is optimized to run on Google’s own specialized A.I. computer processors, available exclusively to customers of its cloud computing service. So Google has a clear commercial incentive to encourage companies to use BERT. And, in general, all of the cloud computing providers are happy with the current trend toward large language models, because if a company wants to train and run one of its own, it must rent a lot of cloud computing time.

For instance, one study last year estimated that training BERT on Google’s cloud costs about $7,000. Sam Altman, the CEO of OpenAI, meanwhile, has implied that it cost many millions to train GPT-3.

And while the market for these large so-called Transformer language models is relatively small at the moment, it is poised to explode, according to Kjell Carlsson, an analyst at technology research firm Forrester. “Of all the recent A.I. developments, these large Transformer networks are the ones that are most important to the future of A.I. at the moment,” he says.

One reason is that the large language models make it far easier to build language processing tools, almost right out of the box. “With just a little bit of fine-tuning, you can have customized chatbots for everything and anything,” Carlsson says. More than that, the pretrained large language models can help write software, summarize text, or create frequently asked questions with their answers, he notes.
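
The "out of the box" point is easy to demonstrate. In the sketch below, a pretrained summarization checkpoint (the model name is an assumed example from the Hugging Face hub) condenses a paragraph with no task-specific training at all.

```python
# Summarize text with a pretrained model; no fine-tuning required.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "Large language models are trained on huge text corpora to predict "
    "missing words. Along the way they acquire other skills, such as "
    "answering questions about a passage, resolving pronouns, and "
    "summarizing documents."
)
print(summarizer(article, max_length=30, min_length=10)[0]["summary_text"])
```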

A widely cited 2017 report from market research firm Tractica forecast that NLP (natural language processing) software of all kinds would be a $22.3 billion annual market by 2025. And that analysis was made before large language models such as BERT and GPT-3 arrived on the scene. So this is the market opportunity that Gebru’s research criticized.

What exactly did Gebru and her colleagues say was wrong with large language models? Well, lots. For one thing, because they are trained on huge corpora of existing text, the systems tend to bake in a lot of existing human bias, particularly about gender and race. What’s more, the paper’s coauthors said, the models are so large and take in so much data, they are extremely difficult to audit and test, so some of this bias may go undetected.

The paper also pointed to the adverse environmental impact, in terms of carbon footprint, that training and running such large language models on electricity-hungry servers can have. It noted that BERT, Google’s own language model, produced, by one estimate, about 1,438 pounds of carbon dioxide, or about the amount of a roundtrip flight from New York to San Francisco.

The research also looked at the fact that money and effort spent on building ever larger language models took away from efforts to build systems that might actually “understand” language and learn more efficiently, in the way humans do.

Many of the criticisms of large language models made in the paper have been made previously. The Allen Institute for AI had published a paper looking at racist and biased language produced by GPT-2, the forerunner system to GPT-3.

In fact, the paper from OpenAI itself on GPT-3, which won an award for “best paper” at this year’s Neural Information Processing Systems Conference (NeurIPS), one of the A.I. research field’s most prestigious conferences, contained a meaty section outlining some of the same potential problems with bias and environmental harm that Gebru and her coauthors highlighted.

OpenAI, arguably, has as much—if not more—financial incentive to sugarcoat any faults in GPT-3. After all, GPT-3 is literally OpenAI’s only commercial product at the moment. Google was making hundreds of billions of dollars just fine before BERT came along.

But then again, OpenAI still functions more like a tech startup than the megacorporation that Google’s become. It may simply be that large corporations are, by their very nature, allergic to paying big salaries to people to publicly criticize their own technology and potentially jeopardize billion-dollar market opportunities.
