MIT Scientists Create a "Psychopath" AI Named Norman

Carson Kessler 2018-06-14
The scientists exposed the AI to a continuous stream of grisly images of death and violence posted on Reddit.

MIT scientists’ newest artificial intelligence endeavor has birthed a “psychopath” algorithm by the name of Norman.

Scientists Pinar Yanardag, Manuel Cebrian, and Iyad Rahwan exposed AI Norman, named after Anthony Perkins’ character in Alfred Hitchcock’s film Psycho, to a continuous stream of grisly Reddit images of gruesome deaths and violence.

After extended exposure to the darkest of subreddits, Norman was trained to perform image captioning, so that it could produce written descriptions of the Rorschach inkblot images that followed.

The results of the inkblot tests revealed heinous interpretations of simple black-and-white splotches. Whereas a “normal” AI reported “a black and white photo of a small bird,” Norman captioned the inkblot as “man gets pulled into dough machine.”

The descriptions were surprisingly detailed: AI Norman interpreted another inkblot as “man is shot dead in front of his screaming wife,” the same image that a “normal” AI described as “a person is holding an umbrella in the air.”

The experiment was not just some cruel trick to see who could create the real-life Norman Bates. The research actually set out to prove that AI algorithms can become biased based on the data they are given. In other words, Norman became a “psychopath” because his only exposure to the world was through a Reddit page.

The scientists concluded that when algorithms are accused of being biased — or spreading “fake news” — “the culprit is often not the algorithm itself but the biased data that was fed into it.”
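
The mechanics behind that conclusion can be sketched in a few lines. What follows is a hypothetical illustration, not the MIT team’s actual code: it uses a simple scikit-learn text classifier rather than the deep image-captioning model the researchers trained, and all of the “inkblot features” and captions are invented. The point it demonstrates is the same one the scientists make: two copies of an identical algorithm, fed different training data, return very different readings of the same ambiguous input.

```python
# Hypothetical sketch (not the MIT researchers' code): the same algorithm,
# trained on different data, "sees" the same ambiguous input differently.
# All training examples below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Stand-in "inkblot features", encoded as text for simplicity; the labels
# are the captions each model learns to associate with those features.
inkblots = ["dark round shape", "thin curved lines", "two symmetric blobs"] * 2

benign_captions = ["a small bird", "an open umbrella", "two people talking"] * 2
violent_captions = ["man pulled into machine", "man shot dead", "body on the ground"] * 2

# Identical architecture; only the training data differs.
normal_ai = make_pipeline(CountVectorizer(), MultinomialNB()).fit(inkblots, benign_captions)
norman_ai = make_pipeline(CountVectorizer(), MultinomialNB()).fit(inkblots, violent_captions)

ambiguous = ["dark round shape with thin lines"]
print("normal AI sees:", normal_ai.predict(ambiguous)[0])  # likely "a small bird"
print("Norman sees:", norman_ai.predict(ambiguous)[0])     # likely "man pulled into machine"
```

The design choice matters here: nothing about the “Norman” model is different except its data, which is exactly why the researchers say the culprit behind a biased algorithm is usually the data it was fed.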
