In praise of the tech sector's effort to guard against the "Terminator"
为科技业试图防范“终结者”点赞

Source: 可可英语 (Kekenet)   Editor: alice

“A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Isaac Asimov’s precept formed the moral underpinning of his futuristic fiction; but 75 years after he first articulated his three laws of robotics, the first and crucial principle is being overtaken by reality.

“机器人不得伤害人类,或目睹人类个体将遭受危险而袖手不管,”艾萨克·阿西莫夫(Isaac Asimov)的戒律奠定了其未来主义小说的道德基础;但在他首次明确表述“机器人三定律”的75年后,这条至关重要的第一原则正在被现实压倒。
True, there are as yet no killer androids rampaging across the battlefield. But there are already defensive systems in place that can be programmed to detect and fire at threats — whether incoming missiles or approaching humans. The Pentagon has tested a swarm of miniature drones — raising the possibility that commanders could in future send clouds of skybots into enemy territory equipped to gather intelligence, block radar or — aided by face recognition technology — carry out assassinations. From China to Israel, Russia to Britain, many governments are keen to put rapid advances in artificial intelligence to military use.
没错,迄今为止还没有“杀手机器人”驰骋在战场上。但现在已经出现了可用来探查威胁并向目标——无论是飞来的导弹还是靠近的人类——开火的防御系统。五角大楼(Pentagon)测试了一批迷你无人机——它们带来了一种可能性,即未来指挥官可派出一群群的skybot(空中机器人)进入敌人领土,收集情报、阻断雷达、或在人脸识别技术的辅助下完成刺杀任务。从中国到以色列、从俄罗斯到英国,很多政府都急于把人工智能方面取得的快速进展应用于军事用途。
This is a source of alarm to researchers and tech industry executives. Already under fire for the impact that disruptive technologies will have on society, they have no wish to see their commercial innovations adapted to devastating effect. Hence this week’s call from the founders of robotics and AI companies for the UN to take action to prevent an arms race in lethal autonomous weapons systems. In an open letter, they underline their concern that such technology could permit conflict “at a scale greater than ever”, could help repressive regimes quell dissent, or that weapons could be hacked “to behave in undesirable ways”.
对于研究人员和科技业高管来说,这种情况值得担忧。他们已经因颠覆性技术将对社会产生的影响而饱受抨击,他们不希望看到自己的商业创新被改造后用于制造毁灭。因此,百余家机器人和人工智能企业的创始人日前联合呼吁联合国采取行动,阻止各国在致命性自主武器系统方面展开军备竞赛。他们在公开信中强调了他们的担忧,称此类技术可能使冲突达到“前所未有的规模”、可能帮助专制政权压制异见者,这些武器还可能因受到黑客攻击而做出有害的行为。
Their concerns are well-founded, but attempts to regulate these weapons are fraught with ethical and practical difficulties. Those who support the increasing use of AI in warfare argue that it has the potential to lessen suffering, not only because fewer front line troops would be needed, but because intelligent weapon systems would be better able to minimise civilian casualties. Targeted strikes against militants would obviate the need for indiscriminate bombing of the kind seen in Falluja or, more recently, Mosul. And there would be many less contentious uses for AI — say, driverless convoys on roads vulnerable to ambush.
他们的顾虑是有根据的,但试图控制这类武器在伦理和实践方面都存在困难。那些支持在战争中更多使用人工智能的人认为,此类技术有可能减少伤害,不只因为所需部署的前线部队减少,也因为智能武器系统可以更好地减少平民伤亡。如果可以针对作战人员展开目标明确的打击行动,也就不必进行无差别的狂轰滥炸,从而可以避免费卢杰(Falluja)或最近摩苏尔(Mosul)发生的那种惨剧。人工智能还将开发出很多没那么具有争议的用途——比如说,在易受埋伏路段使用无人驾驶车队。
At present, there is a broad consensus among governments against deploying fully autonomous weapons — systems that can select and engage targets with no meaningful human control. For the US military, this is a moral red line: there must always be a human operator responsible for a decision to kill. For others in the debate, it is a practical consideration — autonomous systems could behave unpredictably or be vulnerable to hacking.
目前,各国政府在反对部署全自主武器——这类武器可在没有实际人为控制的情况下选择目标并向其进攻——方面存在广泛共识。对于美国军方而言,有一条道德红线:杀人的决定必须由人类操作者做出。对于争论中的其他各方而言,存在一个现实的考量,即自主系统可能做出难以预测的举动、或容易受到黑客攻击。


It becomes far harder to draw boundaries between systems with a human “in the loop” — in full control of a single drone, for example — and those where humans are “on the loop”, supervising and setting parameters for a broadly autonomous system. In the latter case — which might apply to anti-aircraft systems now, or to future drone swarms — it is arguable whether human oversight would amount to effective control in the heat of battle.

如今在“人在决策圈内”的系统(例如由人完全控制一架无人机)和“人在决策圈之上”的系统(人类监督一个大体上自主的系统并为之设定参数)之间更难划分界限了。后一种技术可能适用于如今的防空系统或未来的无人机群,但一个疑问是,当战斗进入白热化阶段,人类监督是否能形成有效的控制。
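To make the "in the loop" versus "on the loop" distinction concrete, here is a minimal, purely illustrative Python sketch. It is not drawn from any real system or from the article itself; every name in it (Track, human_in_the_loop, human_on_the_loop, operator_approves, confidence_threshold, veto) is hypothetical. The point it shows is where the human decision sits: in the first function a human must approve each engagement, while in the second the human only sets parameters in advance and keeps a veto as the system acts on its own.

```python
from dataclasses import dataclass
from typing import Callable, List

# Purely hypothetical illustration of "human in the loop" vs "human on the loop".
# It does not model any real system; it only shows where the human decision sits.

@dataclass
class Track:
    """A detected object the system is considering engaging."""
    ident: str
    confidence: float  # how confident the sensors are that this is a valid target


def human_in_the_loop(tracks: List[Track],
                      operator_approves: Callable[[Track], bool]) -> List[str]:
    """Every individual engagement requires explicit human approval."""
    engaged = []
    for track in tracks:
        if operator_approves(track):  # the human makes each engage/hold decision
            engaged.append(track.ident)
    return engaged


def human_on_the_loop(tracks: List[Track],
                      confidence_threshold: float,
                      veto: Callable[[Track], bool]) -> List[str]:
    """The human sets parameters beforehand and may veto; otherwise the system
    selects and engages targets on its own within those parameters."""
    engaged = []
    for track in tracks:
        if track.confidence >= confidence_threshold and not veto(track):
            engaged.append(track.ident)  # no per-target human approval here
    return engaged


if __name__ == "__main__":
    tracks = [Track("contact-1", 0.97), Track("contact-2", 0.62)]
    # In the loop: the simulated operator declines every engagement.
    print(human_in_the_loop(tracks, operator_approves=lambda t: False))   # []
    # On the loop: the human only set a threshold in advance and never vetoes.
    print(human_on_the_loop(tracks, 0.9, veto=lambda t: False))           # ['contact-1']
```

In the second function the human's influence is largely spent before anything happens, which is exactly why the editorial doubts that "on the loop" oversight would amount to effective control in the heat of battle.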
Existing humanitarian law helps to an extent. The obligations to distinguish between combatants and civilians, avoid indiscriminate attacks and weapons that cause unnecessary suffering still apply; and commanders must take responsibility when they deploy robots just as they do for the actions of servicemen and women.
现有的人道主义法则有一定的作用。人们有责任区分作战人员和平民、避免无差别攻击以及会造成不必要伤害的武器;当指挥官像派遣士兵一样部署机器人去执行任务时,他们必须承担相应的责任。
But the AI industry is right to call for clearer rules, no matter how hard it may be to frame and enforce them. Killer robots may remain the stuff of science fiction, but self-operating weapons are a fast-approaching reality.
但是人工智能行业呼吁制定更明确规则的做法是正确的,无论这类规则多难制定和执行。“杀手机器人”可能仍然只存在于科幻小说中,但自主操作的武器即将成为现实。

重点单词 (Key vocabulary)

control [kən'trəul]  n. 克制,控制,管制,操作装置  vt. 控制
effective [i'fektiv]  adj. 有效的,有影响的
prevent [pri'vent]  v. 预防,防止
deploy [di'plɔi]  v. 展开,配置,部署
source [sɔ:s]  n. 发源地,来源,原始资料
intelligent [in'telidʒənt]  adj. 聪明的,智能的
dissent [di'sent]  n. 异议  v. 持异议
avoid [ə'vɔid]  vt. 避免,逃避
commercial [kə'mə:ʃəl]  adj. 商业的  n. 商业广告
debate [di'beit]  n. 辩论,讨论  vt. 争论,思考
