Sam Altman: A.I. poses an "extinction risk" on par with pandemics and nuclear war

TRISTAN BOVE 2023-06-03
OpenAI CEO Sam Altman has joined a group of technologists in warning that A.I. poses a threat to humanity's survival on par with nuclear war and global pandemics.

OpenAI CEO Sam Altman warns that artificial intelligence (A.I.) could pose an extinction risk. Image: WIN MCNAMEE—GETTY IMAGES

Technologists and computer science experts are warning that artificial intelligence poses threats to humanity’s survival on par with nuclear warfare and global pandemics, and even business leaders who are fronting the charge for A.I. are cautioning about the technology’s existential risks.

Sam Altman, CEO of ChatGPT creator OpenAI, is one of over 300 signatories behind a public “statement of A.I. risk” published Tuesday by the Center for A.I. Safety, a nonprofit research organization. The letter is a single short statement meant to capture the risks associated with A.I.:

“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The letter’s preamble said the statement is intended to “open up discussion” on how to prepare for the technology’s potentially world-ending capabilities. Other signatories include former Google engineer Geoffrey Hinton and University of Montreal computer scientist Yoshua Bengio, who are known as two of the Godfathers of A.I. due to their contributions to modern computer science. Both Bengio and Hinton have issued several warnings in recent weeks about what dangerous capabilities the technology is likely to develop in the near future. Hinton recently left Google so that he could more openly discuss A.I.’s risks.

It isn’t the first letter calling for more attention to be paid to the possible disastrous outcomes of advanced A.I. research without stricter government oversight. Elon Musk was one of over 1,000 technologists and experts to call for a six-month pause on advanced A.I. research in March, citing the technology’s destructive potential.

And Altman warned Congress this month that sufficient regulation is already lacking as the technology develops at a breakneck pace.

Unlike the earlier letter, the more recent note signed by Altman did not outline any specific goals beyond fostering discussion. Hinton said in an interview with CNN earlier this month that he did not sign the March letter, saying that a pause on A.I. research would be unrealistic given that the technology has become a competitive sphere between the U.S. and China.

“I don’t think we can stop the progress,” he said. “I didn’t sign the petition saying we should stop working on A.I. because if people in America stop, people in China wouldn’t.”

But while executives from leading A.I. developers including OpenAI and even Google have called on governments to move faster on regulating A.I., some experts warn that it is counterproductive to discuss the technology’s future existential risks when its current problems, including misinformation and potential biases, are already wreaking havoc. Others have even argued that by publicly discussing A.I.’s existential risks, CEOs like Altman have been trying to distract from the technology’s current issues, which are already creating problems, including facilitating the spread of fake news just in time for a pivotal election year.

But A.I.’s doomsayers have also warned that the technology is developing fast enough that existential risks could become a problem faster than humans can keep up with. Fears are growing in the community that superintelligent A.I., which would be able to think and reason for itself, is closer than many believe, and some experts warn that the technology is not currently aligned with human interests and well-being.

Hinton said in an interview with the Washington Post this month that the horizon for superintelligent A.I. is moving up fast and could now be only 20 years away, and that now is the time to have conversations about advanced A.I.’s risks.

“This is not science fiction,” he said.
