A.I. has gone to war, and killer robots may come to dominate future warfare

Jeremy Kahn 2022-04-04
The war between Russia and Ukraine offers a glimpse of the role technology will play in future wars.

The video feed from STM's Kargu 2 drone during a test.

Image: Courtesy of STM

The explosive-packed drone lay belly-up, like a dead fish, on a Kyiv street, its nose crushed and its rear propeller twisted. It had crashed without its deadly payload detonating, perhaps owing to a malfunction or because Ukrainian forces had shot it down.

Photos of the drone were quickly uploaded to social media, where weapons experts identified it as a KUB-BLA “loitering munition” made by Zala Aero, the dronemaking arm of Russian weapons maker Kalashnikov. Colloquially referred to as a “kamikaze drone,” it can fly autonomously to a specific area and then circle for up to 30 minutes.

The drone’s operator, remotely monitoring a video feed from the craft, can wait for enemy soldiers or a tank to appear below. In some cases, the drones are equipped with A.I. software that lets them hunt for particular kinds of targets based on images that have been fed into their onboard systems. In either case, once the enemy has been spotted and the operator has chosen to attack it, the drone nose-dives into its quarry and explodes.
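
How those pieces fit together can be shown with a deliberately simple sketch. The Python below is a hypothetical illustration of the loop just described, in which an onboard detector flags candidates while a human keeps the strike decision; the stub detector, confidence threshold, and frame counts are all invented for the example and reflect no real system's software.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "tank" or "soldier"
    confidence: float  # detector score in [0, 1]

def onboard_detector(frame: int) -> list[Detection]:
    """Stand-in for the onboard image-recognition model."""
    return [Detection("tank", 0.93)] if frame == 7 else []

def operator_approves(det: Detection) -> bool:
    """The human decision point: a person reviewing the video feed."""
    return det.confidence >= 0.9  # stand-in for the operator's judgment

def loiter(max_frames: int = 30) -> str:
    """Circle the area; dive only after the A.I. flags AND a human approves."""
    for frame in range(max_frames):
        for det in onboard_detector(frame):
            if operator_approves(det):
                return f"dive onto {det.label} spotted at frame {frame}"
    return "loiter window expired with no strike"

print(loiter())  # -> "dive onto tank spotted at frame 7"
```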

The war in Ukraine has become a critical proving ground for increasingly sophisticated loitering munitions. That’s raised alarm bells among human rights campaigners and technologists who fear they represent the leading edge of a trend toward “killer robots” on the battlefield—weapons controlled by artificial intelligence that autonomously kill people without a human making the decision.

Militaries worldwide are keeping a close eye on the technology as it rapidly improves and its cost declines. The selling point is that small, semiautonomous drones are a fraction of the price of, say, a much larger Predator drone, which can cost tens of millions of dollars, and don’t require an experienced pilot to fly them by remote control. Infantry soldiers can, with just a little bit of training, easily deploy these new autonomous weapons.

“Predator drones are superexpensive, so countries are thinking, ‘Can I accomplish 98% of what I need with a much smaller, much less expensive drone?’ ” says Brandon Tseng, a former Navy SEAL who is cofounder and chief growth officer of U.S.-based Shield AI, a maker of small reconnaissance drones that use A.I. for navigation and image analysis.

But human rights groups and some computer scientists fear the technology could represent a grave new threat to civilians in conflict zones, or maybe even the entire human race.

“Right now, with loitering munitions, there is still a human operator making the targeting decision, but it is easy to remove that. And the big danger is that without clear regulation, there is no clarity on where the red lines are,” says Verity Coyle, senior adviser to Amnesty International, a participant in the Stop Killer Robots campaign.

The global market for A.I.-enabled lethal weapons of all kinds is growing quickly, from nearly $12 billion this year to an expected $30 billion by the end of the decade, according to Allied Market Research. In the U.S. alone, annual spending on loitering munitions, totaling about $580 million today, will rise to $1 billion by the end of the decade, Grand View Research said.

Dagan Lev Ari, the international sales and marketing director for UVision, an Israeli defense company that makes loitering munitions, says demand had been inching up until 2020, when war broke out between Armenia and Azerbaijan. In that conflict, Azerbaijan used advanced drones and loitering munitions to decimate Armenia’s larger arsenal of tanks and artillery, helping it achieve a decisive victory.

That got many countries interested, Lev Ari says. It also helps that the U.S. has begun major purchases, including UVision’s Hero family of kamikaze drones, as well as the Switchblade, made by rival U.S. firm AeroVironment. The Ukraine war has further accelerated demand, Lev Ari adds. “Suddenly, people see that a war in Europe is possible, and defense budgets are increasing,” he says.

Although less expensive than certain weapons, loitering munitions are not cheap. For example, each Switchblade costs as much as $70,000, after the launch and control systems plus munitions are factored in, according to some reports.

The U.S. is said to be sending 100 Switchblades to Ukraine. They would supplement that country’s existing fleet of Turkish-made Bayraktar TB2 drones, which can take off, land, and cruise autonomously, but need a human operator to find targets and give the order to drop the missiles or bombs they carry.

Loitering munitions aren’t entirely new. More primitive versions have been around since the 1960s, starting with a winged missile designed to fly to a specific area and search for the radar signature of an enemy antiaircraft system. What’s different today is that the technology is far more sophisticated and accurate.

In theory, A.I.-enabled weapons systems may be able to reduce civilian war casualties. Computers can process information faster than humans, and they are not affected by the physiological and emotional stress of combat. They might also be better at determining, in the heat of battle, whether the shape suddenly appearing from behind a house is an enemy soldier or a child.

But in practice, human rights campaigners and many A.I. researchers warn, today’s machine-learning–based algorithms can’t be trusted with the most consequential decision anyone will ever face: whether to take a human life. Image recognition software, while equaling human abilities in some tests, falls far short in many real-world situations—such as rainy or snowy conditions, or dealing with stark contrasts between light and shadow.

It can often make strange mistakes that humans never would. In one experiment, researchers managed to trick an A.I. system into thinking that a turtle was actually a rifle by subtly altering the pattern of pixels in the image.
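
The mechanics behind that turtle-to-rifle result are easiest to see in the simplest attack of this family, the one-step "fast gradient sign method" (FGSM). The original demonstration used a stronger, related technique; the sketch below, which uses an off-the-shelf classifier and an illustrative epsilon, shows only the core idea of nudging every pixel slightly along the loss gradient.

```python
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
MEAN = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)  # ImageNet stats
STD = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

def fgsm(image: torch.Tensor, true_label: int, eps: float = 4 / 255) -> torch.Tensor:
    """One-step attack: push every pixel slightly uphill on the loss.

    `image` is a float tensor in [0, 1] with shape (3, 224, 224).
    """
    image = image.clone().requires_grad_(True)
    logits = model(((image - MEAN) / STD).unsqueeze(0))
    loss = torch.nn.functional.cross_entropy(logits, torch.tensor([true_label]))
    loss.backward()
    # The change is a few shades per pixel, invisible to a person,
    # yet it can be enough to flip the model's predicted label.
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()

# Usage: adv = fgsm(photo, true_label=label_index); the model's prediction
# on `adv` may differ wildly from its prediction on `photo`.
```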

Even if target identification systems were completely accurate, an autonomous weapon would still pose a serious danger unless it were coupled with a nuanced understanding of the entire battlefield. For instance, the A.I. system may accurately identify an enemy tank, but not understand that it’s parked next to a kindergarten, and so should not be attacked for fear of killing civilians.

Some supporters of a ban on autonomous weapons have evoked the danger of “slaughterbots,” swarms of small, relatively inexpensive drones, configured either to drop an antipersonnel grenade or as loitering munitions. Such swarms could, in theory, be used to kill everyone in a certain area, or to commit genocide, killing everyone with certain ethnic features, or even use facial recognition to assassinate specific individuals.

Max Tegmark, a physics professor at MIT and cofounder of the Future of Life Institute, which seeks to address “existential risks” to humanity, says swarms of slaughterbots would be a kind of “poor man’s weapon of mass destruction.” Because such autonomous weapons could destabilize the existing world order, he hopes that powerful nations—such as the U.S. and Russia—that have been pursuing other kinds of A.I.-enabled weapons, from robotic submarines to autonomous fighter jets, may at least agree to ban these slaughterbot drones and loitering munitions.

But so far, efforts at the United Nations to enact a restriction on the development and sale of lethal autonomous weapons have foundered. A UN committee has spent more than eight years debating what, if anything, to do about such weapons and has yet to reach any agreement.

Although as many as 66 countries now favor a ban, the committee operates by consensus, and the U.S., the U.K., Russia, Israel, and India oppose any restrictions. China, which is also developing A.I.-enabled weapons, has said it supports a ban, but absent a treaty, will not unilaterally forgo them.

As for the dystopian future of slaughterbots, companies building loitering munitions and other A.I.-enabled weapons say they are meant to enhance human capabilities on the battlefield, not replace them. “We don’t want the munition to attack by itself,” Lev Ari says, although he acknowledges that his company is adding A.I.-based target recognition to its weapons that would increase their autonomy. “That is to assist you in making the necessary decision,” he says.

Lev Ari points out that even if the munition is able to find a target, say, an enemy tank, it doesn’t mean that it is the best target to strike. “That particular tank might be inoperable, while another nearby may be more of a threat,” he says.

Noel Sharkey, emeritus professor of computer science at the University of Sheffield in the U.K., who is also a spokesperson for the Stop Killer Robots campaign, says automation is speeding the pace of battle to the point that humans can’t respond effectively without A.I. helping them identify targets. And inevitably one A.I. innovation is driving the demand for more, in a sort of arms race. Says Sharkey, “Therein lies the path to a massive humanitarian disaster.”

*****

Deadly tech

A.I.-enabled weapons are already here—but most are unable to select and attack targets without a human’s approval. These are some examples.

Orca

The U.S. Navy is working with Boeing on a 51-foot-long submersible called the Orca. The goal is for it to navigate autonomously for up to 6,500 nautical miles, using sonar to detect enemy vessels and underwater mines. While the initial version will be unarmed, the Navy has suggested that a later one will be able to fire torpedoes.

Kargu 2

This small quadcopter by Turkish company STM made headlines after the United Nations concluded that one had autonomously attacked forces affiliated with a Libyan warlord in 2020. It was said to be the first time a kamikaze drone had selected a target on its own. But STM denies its drone is capable of doing so.

SGR-A1

An autonomous machine gun developed by South Korea’s Hanwha Aerospace and Korea University that is designed to help South Korea defend its border with North Korea. According to a news account, it uses thermal and infrared imaging to detect people near the border. If the target doesn’t speak a predesignated password, the gun can sound an alarm or fire either rubber or lethal bullets.

KUB-BLA

Russia is using this small loitering munition in Ukraine. According to its manufacturer, Zala Aero, the drone’s operator can upload a target image to the system before launch. The aircraft can then autonomously locate similar targets on the battlefield (one way such matching can work is sketched after these examples).

Loyal Wingman

Produced by Boeing, this 38-foot-long drone autonomously accompanies crewed fighter jets and other aircraft to provide intelligence and surveillance, as well as warn of incoming missiles and other threats.
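
Zala Aero has not published how the KUB-BLA's image matching works. One plausible, purely illustrative approach is to reduce the uploaded reference photo to a feature vector with an off-the-shelf vision model and flag any detected object whose features land close to it; in the Python sketch below, the backbone, preprocessing, and similarity threshold are all assumptions made for the example.

```python
import torch
import torchvision.models as models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classifier; keep 512-d features
backbone.eval()

@torch.no_grad()
def embed(image: torch.Tensor) -> torch.Tensor:
    """Map a preprocessed (1, 3, 224, 224) image to a unit feature vector."""
    features = backbone(image).squeeze(0)
    return features / features.norm()

@torch.no_grad()
def matches_reference(candidate: torch.Tensor, reference: torch.Tensor,
                      threshold: float = 0.8) -> bool:
    """Cosine similarity between a detected object and the uploaded target.

    In a real pipeline the candidate crops would come from a separate
    object detector; here they are assumed to be preprocessed already.
    """
    return float(embed(candidate) @ embed(reference)) >= threshold
```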

A version of this article appears in the April/May 2022 issue of Fortune with the headline, “A.I. goes to war.”
