Fallout grows over Google A.I. ethics researcher's departure

Jeremy Kahn 2020-12-04
Prominent A.I. researcher Timnit Gebru says she was fired for raising potential ethical concerns about some of Google's technology.

A prominent A.I. researcher has left Google, saying she was fired for criticizing the company’s lack of commitment to diversity, renewing concerns about the company’s attempts to silence criticism and debate.

Timnit Gebru, who was technical co-lead of a Google team that focused on A.I. ethics and algorithmic bias, wrote on Twitter that she had been pushed out of the company for writing an email to “women and allies” at Google Brain, the company’s division devoted to fundamental A.I. research, that drew the ire of senior managers.

Gebru is well-known among A.I. researchers for helping to promote diversity and inclusion within the field. She cofounded the Fairness, Accountability, and Transparency (FAccT) conference, which is dedicated to issues around A.I. bias, safety, and ethics. She also cofounded the group Black in AI, which highlights the work of Black machine learning experts and offers mentorship. The group has sought to raise awareness of bias and discrimination against Black computer scientists and engineers.

The researcher told Bloomberg News on December 3 that her firing came after a week in which she had wrangled with her managers over a research paper she had co-written with six other authors, including four from Google, that was about to be submitted for consideration at an academic conference next year. She said Google had asked her to either retract the paper or at least remove her name and those of the other Google employees.

She also posted an email to the internal employee group complaining about her treatment and accusing Google of being disingenuous in its commitment to racial and gender diversity, equity and inclusivity.

Gebru told the news service that she had told Megan Kacholia, Google Research's vice president and one of her supervisors, that she wanted more discussion about the way the review process for the paper had been handled, as well as clear guarantees that the same thing wouldn't happen again.

“We are a team called Ethical AI, of course we are going to be writing about problems in AI,” Gebru said.

She told Bloomberg that she had told Kacholia that if the company was unwilling to address her concerns, she would resign and leave following a transition period. The company then told her it would not agree to her conditions and that it was accepting her resignation effective immediately. It said Gebru's decision to email the internal listserv reflected "behavior that is inconsistent with the expectations of a Google manager," according to a Tweet Gebru posted.

Fellow A.I. researchers took to Twitter to express support for Gebru and outrage at her apparent firing. “Google’s retaliation against Timnit—one of the brightest and most principled AI justice researchers in the field—is *alarming*,” Meredith Whittaker, faculty director at the AI Now Institute at New York University, wrote on Twitter.

“Speaking out against censorship is now ‘inconsistent with the expectations of a Google manager.’ She did that because she cares more and will risk everything to protect those she has hired to work under her—a team that happens to be more diverse than any other at Google,” Deb Raji, another researcher who specializes in A.I. fairness, ethics, and accountability and who works at Mozilla, wrote in a Twitter post.

Many noted that Gebru’s departure came on the same day the National Labor Relations Board accused Google of illegally dismissing workers who helped organize two companywide protests: one in 2019 against the company’s work with the U.S. Customs and Border Protection agency, and a 2018 walkout protesting the company’s handling of sexual harassment cases.

The NLRB accused Google of using “terminations and intimidation in order to quell workplace activism.”

Google has not issued a public comment on the NLRB case. In response to questions about Gebru's departure, it referred Fortune to an email from Jeff Dean, Google's senior vice president for research, that was obtained by the technology news site Platformer.

In that email, Dean said Gebru and her co-authors had submitted the paper for internal review within the two-week period the company requires, and that an internal "cross-functional team" of reviewers had found "it didn't meet our bar for publication" and "ignored too much relevant research" showing that some of the ethical issues the authors raised had at least been partially mitigated.

Dean said Gebru's departure was "a difficult moment, especially given the important research topics she was involved in, and how deeply we care about responsible AI research as an org and as a company."

The incident is likely to renew concerns both inside and outside the company about the ethics of its technology and how it deals with employee dissent. Once known for its freewheeling and liberal corporate culture, Google has increasingly sought to limit employee speech, particularly when it touches on issues likely to embarrass the company or potentially impact its ability to secure lucrative work for various government agencies.

Platformer also obtained and published what it said was the email Gebru had sent to colleagues. In it, she criticizes the company’s commitment to diversity, saying that “this org seems to have hired only 14% or so women this year.” (She does not make it clear if that figure is for all of Google Research or some other entity.) She also accuses the company of paying lip service to diversity and inclusion efforts and advises those who want the company to change to seek ways to bring external pressure to bear on Google.

Gebru says in the email that she had informed Google's public relations and policy team of her intent to write the paper at least two months before the submission deadline and that she had already circulated it to more than 30 other researchers for feedback.

Gebru implied in several tweets that she had raised ethical concerns about some of the company’s A.I. software, including its large language models. This kind of A.I. software is responsible for many breakthroughs in natural language processing, including Google’s improved translation and search results, but has been shown to incorporate gender and racial bias from the large amounts of Internet pages and books that are used to train it.

In tweets on December 3, she singled out Dean, a storied figure among many computer engineers and researchers as one of the original coders of Google’s search engine, and implied that she had been planning to look at bias in Google’s large language models. “@JeffDean I realize how much large language models are worth to you now. I wouldn’t want to see what happens to the next person who attempts this,” she wrote.

Earlier in the week, Gebru had also implied Google managers were attempting to censor her work or bury her concerns about ethical issues in the company’s A.I. systems. “Is there anyone working on regulation protecting Ethical AI researchers, similar to whistleblower protection? Because with the amount of censorship & intimidation that goes on towards people in specific groups, how does anyone trust any real research in this area can take place?” she wrote in a Twitter post on Dec. 1.
