
Instead of worrying about AI wiping out humanity, worry about these four things

Jonathan Vanian, March 6, 2018
Researchers aren't worried that advances in artificial intelligence will bring about the end of humanity; they're worried about far more practical problems.

Advances in artificial intelligence have the potential to greatly accelerate medical research and improve disease detection, but they could also amplify the harm done by bad actors of every kind.

That is the conclusion of a report released recently by researchers from Oxford University, Cambridge University, Stanford University, the Electronic Frontier Foundation, the AI research group OpenAI, and other institutions.

The researchers aren't worried that AI will bring about a science-fiction doomsday in which robots take over the world, as in the Terminator films; they're worried about more practical problems. Criminals, for example, could use machine learning to further automate their hacking attempts, piling even more pressure on already overstretched corporate security staff charged with keeping company computer systems safe.

The goal of the report is not to persuade companies, academia, or the public to halt AI research, but to highlight realistic risks so that people can better prepare for, and possibly prevent, more sophisticated future hacks and other AI-related problems. The authors recommend that policymakers work with researchers to address potential AI risks and to establish ethical standards for the field, among other safeguards.

Here are some of the report's more interesting takeaways:

1. Phishing could get even worse

As AI advances, phishing, in which criminals hide malicious links inside seemingly legitimate emails, could become even more widespread and effective. Using people's online information and behavior patterns harvested from Twitter and Facebook, for example, criminals might be able to automatically generate customized scam emails that entice users to click. These malicious emails, websites, or links could be sent from fake accounts that mimic the writing style of a user's friends and family, making the phishing messages all the more convincing.

2. Hackers could start using AI just like financial institutions

If banks and credit card companies can adopt machine learning to improve their services, so can hackers. Criminals could, for instance, use AI to automate tasks such as payment processing, allowing them to collect ransoms more quickly.

Criminals could also build automated chatbots to communicate with the victims of ransomware attacks, holding the victims' computer systems hostage until a ransom is paid. By handing the extortion conversation over to a bot, attackers free up time to go after more potential victims.

3. Fake news and propaganda will only get worse

If you think the fake news flooding Facebook is bad enough already, the problem could get much worse. Thanks to advances in AI, researchers can now produce audio and video in which fabricated political figures are indistinguishable from the real thing. AI researchers at the University of Washington, for instance, recently created a video of former President Barack Obama giving a speech that looks remarkably realistic but is entirely fabricated.

Chilling, isn't it? The report's authors note that it will only become easier to manufacture "fake news" with fabricated audio and video. One day, people may even see videos of "state leaders seeming to make inflammatory comments they never actually made."

The authors also point out that bad actors could use AI to run "automated, hyper-personalized disinformation campaigns," in which people in different places receive customized messages designed to sway their votes.

4. AI could make weapons more destructive

As AI technology advances, even a single ordinary person could gain the ability to inflict violence on a wide scale. The spread of open-source technologies such as facial recognition and drone navigation, for example, makes it possible for criminals to put these tools to malicious use. Imagine the scene if a self-flying drone equipped with facial recognition were able to single out a target and launch a precise attack.

Another worrying trend is that weaponizable robots remain largely unregulated and research into countermeasures falls far short, making the "global proliferation of weaponizable robots" a real risk. (财富中文网)

In the report's own words:

"While defenses against attacks via robots (especially aerial drones) are being developed, there are few obstacles at present to a moderately talented attacker taking advantage of the rapid proliferation of hardware, software, and skills to cause large amounts of physical harm through the direct use of AI or the subversion of AI-enabled systems."

Translated by 朴成奎

 

Advances in artificial intelligence have the potential to supercharge medical research and better detect diseases, but they could also amplify the actions of bad actors.

That’s according to a report released this week by a team of academics and researchers from Oxford University, Cambridge University, Stanford University, the Electronic Frontier Foundation, artificial intelligence research group OpenAI, and other institutions.

The report’s authors aren’t concerned with sci-fi doomsday scenarios like robots taking over the world, such as in Terminator, but more practical concerns. Criminals, for instance, could use machine learning technologies to further automate hacking attempts, putting more pressure on already beleaguered corporate security officers to ensure their computer systems are safe.

The goal of the report is not to dissuade companies, researchers, or the public from AI, but to highlight the most realistic concerns so people can better prepare and possibly prevent future cyber attacks or other problems related to AI. The authors urge policymakers to work with researchers on addressing possible AI issues, and for technologists involved in AI to consider a code of ethics, among other recommendations.

Here are some interesting takeaways:

1. Phishing scams could get even worse

Phishing scams, in which criminals send seemingly legitimate emails bundled with malicious links, could become even more prevalent and effective thanks to AI. The report outlines a scenario in which people’s online information and behaviors, presumably scraped from social networks like Twitter and Facebook, could be used to automatically create custom emails that entice them to click. These emails, bad websites, or links could be sent from fake accounts that are able to mimic the writing style of people’s friends so they look real.
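
The flip side is that the same kind of automation also helps the defenders the report is concerned about. Purely as an illustrative aside (this sketch is not from the report; the class name and addresses below are hypothetical), here is a minimal Python version of one long-standing heuristic mail filters use against exactly this trick: flag a message when the visible link text names one domain but the underlying address points somewhere else.

    # Minimal, illustrative sketch: flag links whose visible text shows one
    # domain while the underlying href points to a different one.
    import re
    from html.parser import HTMLParser
    from urllib.parse import urlparse

    # Matches visible link text that looks like a URL or bare domain, e.g. "mybank.com".
    DOMAIN_RE = re.compile(r"(?:https?://)?([A-Za-z0-9.-]+\.[A-Za-z]{2,})")


    class LinkMismatchChecker(HTMLParser):
        """Collects <a> tags whose displayed domain differs from the real target."""

        def __init__(self):
            super().__init__()
            self._href = None
            self._text = []
            self.suspicious = []  # list of (visible_text, actual_href) pairs

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self._href = dict(attrs).get("href")
                self._text = []

        def handle_data(self, data):
            if self._href is not None:
                self._text.append(data)

        def handle_endtag(self, tag):
            if tag == "a" and self._href:
                text = "".join(self._text).strip()
                match = DOMAIN_RE.match(text)
                shown = match.group(1).lower() if match else None
                actual = (urlparse(self._href).hostname or "").lower()
                # Visible text looks like a domain but differs from the real target.
                if shown and actual and shown != actual:
                    self.suspicious.append((text, self._href))
                self._href = None


    if __name__ == "__main__":
        body = '<p>Please verify: <a href="http://login.evil-example.net">https://mybank.com</a></p>'
        checker = LinkMismatchChecker()
        checker.feed(body)
        print(checker.suspicious)  # [('https://mybank.com', 'http://login.evil-example.net')]

A simple check like this catches only the crudest lures; the report's point is that AI-personalized messages make such static heuristics far less reliable.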

2. Hackers could start using AI like financial firms

If banks and credit card firms adopt machine learning to improve their services, so too will hackers. For instance, the report said that criminals could use AI techniques to automate tasks like payment processing, presumably helping them collect ransoms more quickly.

Criminals could also create chatbots that would communicate with the victims of ransomware attacks, in which criminals hold people’s computers hostage until they receive payment. By using software that can talk or chat with people, hackers could conceivably target more victims at once without having to personally communicate with each of them to demand payment.

3. Fake news and propaganda are only going to get worse

If you thought the spread of misleading news on social networks like Facebook was bad now, get ready for the future. Advances in AI have enabled researchers to create realistic audio and video of political figures, designed to look and talk like their real-life counterparts. For instance, AI researchers at the University of Washington recently created a video of former President Barack Obama giving a speech that looks incredibly realistic, but was actually fake.

You can see where this is going. The report’s authors suggest that people could create “fake news reports” with fabricated video and audio. These fake news reports could show “state leaders seeming to make inflammatory comments they never actually made.”

The authors also suggest that bad actors could use AI to create “automated, hyper-personalized disinformation campaigns,” in which “Individuals are targeted in swing districts with personalized messages in order to affect their voting behavior.”

4. AI could make weapons more destructive

Advances in AI could enable people, even a “single person,” to cause widespread violence, the report said. With the widespread availability of open-source technologies like algorithms that can detect faces or help drones navigate, the authors are concerned that criminals could use them for nefarious purposes. Think self-flying drones with the ability to detect a person’s face below them, and then carry out an attack.

What’s also concerning is that there’s been little regulation or technical research about defense techniques to combat the “global proliferation of weaponizable robots.”

From the report:

While defenses against attacks via robots (especially aerial drones) are being developed, there are few obstacles at present to a moderately talented attacker taking advantage of the rapid proliferation of hardware, software, and skills to cause large amounts of physical harm through the direct use of AI or the subversion of AI-enabled systems.
