

Beware the risk of artificial intelligence running out of control


As an experiment, Tunde Olanrewaju messed around one day with the Wikipedia entry of his employer, McKinsey. He edited the page to say that he had founded the consultancy firm. A friend took a screenshot to preserve the revised record.

Within minutes, Mr Olanrewaju received an email from Wikipedia saying that his edit had been rejected and that the true founder’s name had been restored. Almost certainly, one of Wikipedia’s computer bots that police the site’s 40m articles had spotted, checked and corrected his entry.
It is reassuring to know that an army of such clever algorithms is patrolling the frontline of truthfulness — and can outsmart a senior partner in McKinsey’s digital practice. In 2014, bots were responsible for about 15 per cent of all edits made on Wikipedia.
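How might such a patrol bot work? The sketch below is a minimal, purely illustrative Python example, not Wikipedia's actual bot code: a hypothetical patrol() function compares the edited value of a watched field against a trusted record (TRUSTED_FACTS, invented here) and restores the trusted value whenever an edit contradicts it. Real anti-vandalism bots rely on far richer signals than a lookup table.

```python
# A minimal, purely illustrative sketch (not Wikipedia's actual bot code).
# TRUSTED_FACTS and patrol() are hypothetical names invented for this example.

TRUSTED_FACTS = {
    ("McKinsey & Company", "founder"): "James O. McKinsey",
}

def patrol(article: str, field: str, new_value: str) -> str:
    """Return the value the page should show after reviewing an edit."""
    trusted = TRUSTED_FACTS.get((article, field))
    if trusted is not None and new_value != trusted:
        print(f"Edit rejected: restoring {field!r} of {article!r} to {trusted!r}")
        return trusted      # revert to the trusted value
    return new_value        # no objection; keep the edit

if __name__ == "__main__":
    kept = patrol("McKinsey & Company", "founder", "Tunde Olanrewaju")
    assert kept == "James O. McKinsey"
```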
But, as is the way of the world, algos can be used for offence as well as defence. And sometimes they can interact with each other in unintended and unpredictable ways. The need to understand such interactions is becoming ever more urgent as algorithms become so central in areas as varied as social media, financial markets, cyber security, autonomous weapons systems and networks of self-driving cars.
A study published last month in the research journal Plos One, analysing the use of bots on Wikipedia over a decade, found that even those designed for wholly benign purposes could spend years duelling with each other.
In one such battle, Xqbot and Darknessbot disputed 3,629 entries, undoing and correcting the other’s edits on subjects ranging from Alexander the Great to Aston Villa football club.
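The mechanics of such a duel are easy to reproduce. Below is a toy Python simulation, not the actual Xqbot or Darknessbot code, and the naming preference is invented for illustration: two benign bots each "correct" the same page toward their own preferred form, so every edit by one triggers a counter-edit by the other.

```python
# Toy simulation of a bot-on-bot edit war: two benign bots each "correct"
# the page toward their own preferred form, undoing the other's work.
# The style preferences below are invented purely for illustration.

def make_bot(name: str, preferred: str):
    def edit(page: str) -> str:
        if page != preferred:
            print(f"{name}: changing {page!r} -> {preferred!r}")
            return preferred
        return page
    return edit

bot_a = make_bot("BotA", "Aston Villa F.C.")  # insists on the full club name
bot_b = make_bot("BotB", "Aston Villa")       # insists on the short form

page = "Aston Villa"
for _ in range(3):  # on the real site such loops ran for years
    page = bot_a(page)
    page = bot_b(page)
```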
The authors, from the Oxford Internet Institute and the Alan Turing Institute, were surprised by the findings, concluding that we need to pay far more attention to these bot-on-bot interactions. “We know very little about the life and evolution of our digital minions.”
Wikipedia’s bot ecosystem is gated and monitored. But that is not the case in many other reaches of the internet where malevolent bots, often working in collaborative botnets, can run wild.
The authors highlighted the dangers of such bots mimicking humans on social media to “spread political propaganda or influence public discourse”. Such is the threat of digital manipulation that a group of European experts has even questioned whether democracy can survive the era of Big Data and Artificial Intelligence.
It may not be too much of an exaggeration to say we are reaching a critical juncture. Is truth, in some senses, being electronically determined? Are we, as the European academics fear, becoming the “digital slaves” of our one-time “digital minions”? The scale, speed and efficiency of some of these algorithmic interactions are reaching a level of complexity beyond human comprehension.
If you really want to scare yourself on a dark winter’s night you should read Susan Blackmore on the subject. The psychologist has argued that, by creating such computer algorithms, we may have inadvertently unleashed a “third replicator”, which she originally called a teme, later modified to treme.
The first replicators were genes that determined our biological evolution. The second were human memes, such as language, writing and money, that accelerated cultural evolution. But now, she believes, our memes are being superseded by non-human tremes, which fit her definition of a replicator as being “information that can be copied with variation and selection”.
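Her definition of a replicator, "information that can be copied with variation and selection", can be illustrated quite literally in a few lines of Python. The sketch below is not Prof Blackmore's model; the target string, alphabet and fitness rule are invented solely to show copying, variation and selection operating together.

```python
# Illustrative sketch of "information that can be copied with variation
# and selection": strings are copied with random mutation (variation) and
# the copies closest to a target survive (selection). The target string
# and fitness rule are invented for this demonstration only.
import random

TARGET = "TREME"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(s: str) -> int:
    return sum(a == b for a, b in zip(s, TARGET))

def copy_with_variation(s: str, rate: float = 0.2) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(50)]
for generation in range(500):
    population.sort(key=fitness, reverse=True)                 # selection
    if fitness(population[0]) == len(TARGET):
        print(f"generation {generation}: {population[0]}")
        break
    parents = population[:10]
    population = [copy_with_variation(random.choice(parents))  # copying with
                  for _ in range(50)]                          # variation
```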
“We humans are being transformed by new technologies,” she said in a recent lecture. “We have let loose the most phenomenal power.”
For the moment, Prof Blackmore’s theory remains on the fringes of academic debate. Tremes may be an interesting concept, says Stephen Roberts, professor of machine learning at the University of Oxford, but he does not think we have lost control.


“There would be a lot of negative consequences of AI algos getting out of hand,” he says. “But we are a long way from that right now.”

The more immediate concern is that political and commercial interests have learnt to “hack society”, as he puts it. “Falsehoods can be replicated as easily as truth. We can be manipulated as individuals and groups.”
His solution? To establish the knowledge equivalent of the Millennium Seed Bank, which aims to preserve plant life at risk from extinction.
“As we de-speciate the world we are trying to preserve these species’ DNA. As truth becomes endangered we have the same obligation to record facts.”
But, as we have seen with Wikipedia, that is not always such a simple task.

