Robots must not be given the power to kill

Imagine this futuristic scenario: a US-led coalition is closing in on Raqqa, determined to eradicate Isis. The international forces unleash a deadly swarm of autonomous, flying robots that buzz around the city tracking down the enemy.

Using face recognition technology, the robots identify and kill top Isis commanders, decapitating the organisation. Dazed and demoralised, the Isis forces collapse with minimal loss of life to allied troops and civilians.

Who would not think that a good use of technology?

As it happens, quite a lot of people, including many experts in the field of artificial intelligence, who know most about the technology needed to develop such weapons.

In an open letter published last July, a group of AI researchers warned that technology had reached such a point that the deployment of Lethal Autonomous Weapons Systems (or Laws, as they are incongruously known) was feasible within years, not decades. Unlike nuclear weapons, such systems could be mass-produced on the cheap, becoming the “Kalashnikovs of tomorrow”.

“It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing,” they said. “Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

Already, the US has broadly forsworn the use of offensive autonomous weapons. Earlier this month, the United Nations held a further round of talks in Geneva between 94 military powers aiming to draw up an international agreement restricting their use.

The chief argument is a moral one: giving robots the agency to kill humans would trample over a red line that should never be crossed.

Jody Williams, who won a Nobel Peace Prize for campaigning against landmines and is a spokesperson for the Campaign To Stop Killer Robots, describes autonomous weapons as more terrifying than nuclear arms. “Where is humanity going if some people think it’s OK to cede the power of life and death of humans over to a machine?”

There are other concerns beyond the purely moral. Would the use of killer robots lower the human costs of war, thereby increasing the likelihood of conflict? How could proliferation of such systems be stopped? Who would be accountable when they went wrong?

The moral case against killer robots is clear enough in a philosophy seminar. The trouble is that the closer you look at their likely use in the fog of war, the harder it is to discern the moral boundaries. Robots (with limited autonomy) are already deployed on the battlefield in areas such as bomb disposal, mine clearance and antimissile systems. Their use is set to expand dramatically.

The Center for a New American Security (CNAS) estimates that global spending on military robots will reach $7.5bn a year by 2018, compared with the $43bn forecast to be spent on commercial and industrial robots. The Washington-based think-tank supports the further deployment of such systems, arguing they can significantly enhance “the ability of warfighters to gain a decisive advantage over their adversaries”.

In the antiseptic prose it so loves, the arms industry draws a distinction between different levels of autonomy. The first, described as humans-in-the-loop, includes Predator drones, widely used by US and other forces. Even though a drone may identify a target, it still requires a human to press the button to attack. As vividly shown in the film Eye in the Sky, such decisions can be morally agonising, balancing the importance of hitting vital targets against the risks of civilian casualties.

The second level of autonomy involves humans-on-the-loop systems, in which people supervise roboticised weapons systems, including anti-aircraft batteries. But the speed and intensity of modern warfare make it doubtful whether such human oversight amounts to effective control.

The third type, humans-out-of-the-loop systems such as fully autonomous drones, is potentially the deadliest but probably the easiest to proscribe.

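
To make the taxonomy concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration rather than any real weapons API: the `Target` type, the `would_engage` function and the 0.95 confidence threshold are invented for exposition. The only thing that changes across the three levels is where, if anywhere, a human sits in the engagement decision.

```python
# Illustrative sketch only: hypothetical types and thresholds, not a real system.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, Optional


class AutonomyLevel(Enum):
    HUMAN_IN_THE_LOOP = auto()      # a person must actively authorise each engagement
    HUMAN_ON_THE_LOOP = auto()      # the machine acts; a supervising person may veto
    HUMAN_OUT_OF_THE_LOOP = auto()  # the machine's decision is final


@dataclass
class Target:
    identifier: str
    confidence: float  # classifier confidence that this is a valid military target


def would_engage(target: Target,
                 level: AutonomyLevel,
                 human: Optional[Callable[[Target], bool]] = None) -> bool:
    """Return True if this simulated system would fire on the target."""
    machine_says_yes = target.confidence > 0.95  # arbitrary illustrative threshold
    if level is AutonomyLevel.HUMAN_IN_THE_LOOP:
        # Nothing happens unless a person presses the button.
        return machine_says_yes and human is not None and human(target)
    if level is AutonomyLevel.HUMAN_ON_THE_LOOP:
        # The machine decides; the human can only veto in time.
        vetoed = human is not None and not human(target)
        return machine_says_yes and not vetoed
    # HUMAN_OUT_OF_THE_LOOP: no human judgment enters the decision at all.
    return machine_says_yes


if __name__ == "__main__":
    t = Target("vehicle-042", confidence=0.97)
    print(would_engage(t, AutonomyLevel.HUMAN_IN_THE_LOOP, human=lambda _: False))  # False
    print(would_engage(t, AutonomyLevel.HUMAN_OUT_OF_THE_LOOP))                     # True
```

The speed problem raised above shows up in the on-the-loop branch: the veto only restrains the machine if a human has time to evaluate the target before the engagement window closes.
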
AI researchers should certainly be applauded for highlighting this debate. Arms control experts are also playing a useful, but frustratingly slow, part in helping define and respond to this challenge. “This is a valuable conversation,” says Paul Scharre, a senior fellow at CNAS. “But it is a glacial process.”

As in so many other areas, our societies are scrambling to make sense of fast-changing technological realities, let alone control them.
