
OpenAI is looking for a new employee to help address the growing dangers of artificial intelligence (AI), and the tech company is willing to spend more than half a million dollars a year to fill the role.
OpenAI is hiring a “head of preparedness” to reduce harms associated with the technology, such as risks to user mental health and cybersecurity, CEO Sam Altman wrote in a post on X on Saturday. The position pays $555,000 per year, plus equity, according to the job listing.
“This will be a stressful job and you’ll jump into the deep end pretty much immediately,” Altman said.
OpenAI’s push to hire a safety executive comes amid companies’ growing concerns about the operational and reputational risks of AI. A November analysis of annual Securities and Exchange Commission (SEC) filings by financial data and analytics company AlphaSense found that in the first 11 months of the year, 418 companies worth at least $1 billion cited reputational harm associated with AI risk factors. These reputation-threatening risks include AI datasets that contain biased information or jeopardize security. Reports of AI-related reputational harm increased 46% from 2024, according to the analysis.
“Models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges,” Altman said in the post.
“If you want to help the world figure out how to enable cybersecurity defenders with cutting-edge capabilities while ensuring attackers can’t use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying,” he added.
OpenAI’s previous head of preparedness, Aleksander Madry, was reassigned last year to a role related to AI reasoning, though AI safety remains part of his responsibilities.
Founded in 2015 as a nonprofit with the intention of using AI to improve and benefit humanity, OpenAI has, in the eyes of some of its former leaders, struggled to prioritize its commitment to safe technology development. The company’s former vice president of research, Dario Amodei, along with his sister Daniela Amodei and several other researchers, left OpenAI in 2020, in part over concerns that the company was prioritizing commercial success over safety. Amodei founded Anthropic the following year.
OpenAI has faced multiple wrongful-death lawsuits this year alleging that ChatGPT encouraged users’ delusions and claiming that conversations with the bot were linked to some users’ suicides. A New York Times investigation published in November found nearly 50 cases of ChatGPT users experiencing mental health crises while in conversation with the bot.
OpenAI said in August that its safety features could “degrade” during long conversations between users and ChatGPT, but the company has since made changes to improve how its models interact with users. Earlier this year it created an eight-person council to advise the company on guardrails to support users’ wellbeing, and it has updated ChatGPT to respond better in sensitive conversations and increase access to crisis hotlines. At the beginning of the month, the company announced grants to fund research at the intersection of AI and mental health.
The company has also conceded that it needs stronger safety measures, saying in a blog post this month that some of its upcoming models could present a “high” cybersecurity risk as AI rapidly advances. It is taking steps to mitigate those risks, such as training models not to respond to requests that compromise cybersecurity and refining its monitoring systems.
“We have a strong foundation of measuring growing capabilities,” Altman wrote on Saturday. “But we are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides both in our products and in the world, in a way that lets us all enjoy the tremendous benefits.” (Fortune China)
Translator: Liu Jinlong
Reviewer: Wang Hao