
Hawking and others urge vigilance over the side effects of artificial intelligence

Source: Kekenet (可可英语)   Editor: shaun

Dozens of scientists, entrepreneurs and investors involved in the field of artificial intelligence, including Stephen Hawking and Elon Musk, have signed an open letter warning that greater focus is needed on its safety and social benefits.

The letter and an accompanying paper from the Future of Life Institute, which suggests research priorities for “robust and beneficial” artificial intelligence, come amid growing nervousness about the impact on jobs or even humanity’s long-term survival from machines whose intelligence and capabilities could exceed those of the people who created them.

“Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls,” the FLI’s letter says. “Our AI systems must do what we want them to do.”

The FLI was founded last year by volunteers including Jaan Tallinn, a co-founder of Skype, to stimulate research into “optimistic visions of the future” and to “mitigate existential risks facing humanity”, with a focus on those arising from the development of human-level artificial intelligence.

Mr Musk, the co-founder of SpaceX and Tesla, who sits on the FLI’s scientific advisory board alongside actor Morgan Freeman and cosmologist Stephen Hawking, has said that he believes uncontrolled artificial intelligence is “potentially more dangerous than nukes”.

Other signatories to the FLI’s letter include Luke Muehlhauser, executive director of Machine Intelligence Research Institute, Frank Wilczek, professor of physics at the Massachusetts Institute of Technology and a Nobel laureate, and the entrepreneurs behind artificial intelligence companies DeepMind and Vicarious, as well as several employees at Google, IBM and Microsoft.

Rather than fear-mongering, the letter is careful to highlight both the positive and negative effects of artificial intelligence.

“There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase,” the letter reads. “The potential benefits are huge, since everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable.”

Benefits from artificial intelligence research that are already coming into use include speech and image recognition, and self-driving vehicles. Some in Silicon Valley have estimated that more than 150 start-ups are working on artificial intelligence today.

As the field draws in more investment and entrepreneurs and companies such as Google eye huge rewards from creating computers that can think for themselves, the FLI warns that greater focus on the social ramifications would be “timely”, drawing not only on computer science but economics, law and IT security.

Key vocabulary

achieve [ə'tʃi:v]   v. to accomplish, attain, achieve
vicarious [vi'kɛəriəs]   adj. delegated; done or undergone on behalf of another; experienced indirectly through another person
negative ['negətiv]   adj. expressing denial; minus; pessimistic   n. photographic negative; a negative quantity
predict [pri'dikt]   v. to foretell, prophesy, forecast, predict
timely ['taimli]   adj. timely, opportune   adv. in good time
highlight ['hailait]   n. a highlighted area; the best or most striking part; the key detail or event; a high point
intelligence [in'telidʒəns]   n. understanding, intellect   n. (secret) information; intelligence work
mitigate ['miti.geit]   vt. to calm, ease, lessen
social ['səuʃəl]   adj. of society; relating to social interaction   n. a social gathering
potentially [pə'tenʃəli]   adv. potentially
