
如何让人工智能赋予我们力量 而非受控于它

来源:可可英语 编辑:max

After 13.8 billion years of cosmic history, our universe has woken up and become aware of itself.

在138亿年的历史之后,我们的宇宙终于觉醒了,并开始有了自我意识。
From a small blue planet, tiny, conscious parts of our universe
从一颗蓝色的小星球,宇宙中那些有了微小意识的部分,
have begun gazing out into the cosmos with telescopes, discovering something humbling.
开始用它们的望远镜,窥视整个宇宙,从而有了谦卑的发现。
We've discovered that our universe is vastly grander than our ancestors imagined
我们发现,宇宙比祖先们所想象的要宏大得多,
and that life seems to be an almost imperceptibly small perturbation on an otherwise dead universe.
而生命似乎只是这片原本死寂的宇宙中,一个几乎难以察觉的微小扰动。
But we've also discovered something inspiring,
不过我们也发现了一些振奋人心的事,
which is that the technology we're developing has the potential to help life flourish like never before,
那就是我们所开发的技术,有着前所未有的潜能去促使生命变得更加繁盛,
not just for centuries but for billions of years, and not just on earth but throughout much of this amazing cosmos.
不仅仅只有几个世纪,而是持续了数十亿年;也不仅仅是地球上,甚至是在整个浩瀚的宇宙中。
I think of the earliest life as "Life 1.0" because it was really dumb, like bacteria,
我把最早的生命称之为“生命1.0”,因为它那会儿还略显蠢笨,就像细菌,
unable to learn anything during its lifetime.
在它们的一生中也不会学到什么东西。
I think of us humans as "Life 2.0" because we can learn, which we in nerdy, geek speak,
我把我们人类称为“生命2.0”,因为我们能够学习,用技术宅男的话来说,
might think of as installing new software into our brains, like languages and job skills.
就像是在我们脑袋里装了一个新的软件,比如语言及工作技能。
"Life 3.0," which can design not only its software but also its hardware of course doesn't exist yet.
而“生命3.0”不仅能开始设计它的软件,甚至还可以创造其硬件。当然,它目前还不存在。
But perhaps our technology has already made us "Life 2.1," with our artificial knees, pacemakers and cochlear implants.
但是也许我们的科技已经让我们走进了“生命2.1”,因为现在我们有了人工膝盖,心脏起搏器以及耳蜗植入技术。
So let's take a closer look at our relationship with technology, OK?
我们一起来聊聊人类和科技的关系吧!
As an example, the Apollo 11 moon mission was both successful and inspiring,
举个例子,阿波罗11号月球任务很成功,令人备受鼓舞,
showing that when we humans use technology wisely, we can accomplish things that our ancestors could only dream of.
这表明当我们人类明智地使用科技时,就能实现许多祖先们只能梦想的事情。
But there's an even more inspiring journey propelled by something more powerful than rocket engines,
但还有一段更加鼓舞人心的旅程,由比火箭引擎更加强大的东西所推动着,
where the passengers aren't just three astronauts but all of humanity.
乘客也不仅仅只是三个宇航员,而是我们全人类。
Let's talk about our collective journey into the future with artificial intelligence.
让我们来聊聊与人工智能一起走向未来的这段旅程。
My friend Jaan Tallinn likes to point out that just as with rocketry, it's not enough to make our technology powerful.
我的朋友扬·塔林常说,这就像是火箭学一样,只让我们的科技拥有强大的力量是不够的。
We also have to figure out, if we're going to be really ambitious, how to steer it and where we want to go with it.
如果我们有足够的雄心壮志,就应当想出如何控制它们的方法,希望它朝着怎样的方向前进。
So let's talk about all three for artificial intelligence: the power, the steering and the destination.
那么对于人工智能,我们先来谈谈这三点:力量、操控和目的地。
Let's start with the power. I define intelligence very inclusively -- simply as our ability to accomplish complex goals,
我们先来说力量。我对智能的定义非常宽泛--就是完成复杂目标的能力,
because I want to include both biological and artificial intelligence.
因为我想把生物智能和人工智能都包含进去。
And I want to avoid the silly carbon-chauvinism idea that you can only be smart if you're made of meat.
我还想避免愚蠢的碳沙文主义观点,即认为只有血肉之躯才可能拥有智慧。
It's really amazing how the power of AI has grown recently.
人工智能的力量在近期的发展十分惊人。
Just think about it. Not long ago, robots couldn't walk. Now, they can do backflips.
试想一下。甚至在不久以前,机器人还不能走路呢。现在,它们居然开始后空翻了。
Not long ago, we didn't have self-driving cars. Now, we have self-flying rockets.
不久以前,我们还没有全自动驾驶汽车。现在,我们都有自动飞行的火箭了。
Not long ago, AI couldn't do face recognition.
不久以前,人工智能甚至不能完成脸部识别。
Now, AI can generate fake faces and simulate your face saying stuff that you never said.
现在,人工智能都开始生成仿真面貌了,并模拟你的脸部表情,说出你从未说过的话。
Not long ago, AI couldn't beat us at the game of Go.
不久以前,人工智能还不能在围棋中战胜人类。
Then, Google DeepMind's AlphaZero AI took 3,000 years of human Go games and Go wisdom,
然后,谷歌的DeepMind推出的AlphaZero就掌握了人类三千多年的围棋比赛和智慧,
ignored it all and became the world's best player by just playing against itself.
却将这一切抛诸脑后,仅通过与自己对弈,就成了全球最厉害的棋手。
And the most impressive feat here wasn't that it crushed human gamers,
这里最让人印象深刻的部分,不是它击垮了人类棋手,
but that it crushed human AI researchers who had spent decades handcrafting game-playing software.
而是它击垮了人类人工智能的研究者,这些研究者花了数十年手工打造了下棋软件。
And AlphaZero crushed human AI researchers not just in Go but even at chess, which we have been working on since 1950.
此外,AlphaZero也在国际象棋比赛中轻松战胜了人类的人工智能研究者们,我们从1950年就开始致力于国际象棋研究。
So all this amazing recent progress in AI really begs the question: How far will it go?
所以近来,这些惊人的人工智能进步,让大家不禁想问:它到底能达到怎样的程度?
I like to think about this question in terms of this abstract landscape of tasks,
我喜欢借助这幅抽象的任务地形图来思考这个问题,
where the elevation represents how hard it is for AI to do each task at human level,
图中的海拔高度表示人工智能要把每一项工作做到人类的水平的难度,
and the sea level represents what AI can do today.
海平面表示现今的人工智能所达到的水平。
The sea level is rising as AI improves, so there's a kind of global warming going on here in the task landscape.
随着人工智能的进步,海平面会上升,所以在这工作任务地景上,有着类似全球变暖的后果。
And the obvious takeaway is to avoid careers at the waterfront... which will soon be automated and disrupted.
很显然,我们应该避开那些位于水边的职业--它们很快就会被自动化颠覆。
But there's a much bigger question as well. How high will the water end up rising?
然而同时还存在一个很大的问题,水平面最后会升到多高?
Will it eventually rise to flood everything, matching human intelligence at all tasks?
它最后是否会升高到淹没一切,人工智能会不会最终能胜任所有的工作?
This is the definition of artificial general intelligence -- AGI,
这就是通用人工智能(AGI)的定义,
which has been the holy grail of AI research since its inception.
从一开始它就是人工智能研究最终的圣杯。
By this definition, people who say, "Ah, there will always be jobs that humans can do better than machines,"
根据这个定义,有人说,“总是有些工作,人类可以做得比机器好的。”
are simply saying that we'll never get AGI.
意思就是,我们永远不会有AGI。
Sure, we might still choose to have some human jobs or to give humans income and purpose with our jobs,
当然,我们可以仍然保留一些人类的工作,或者说,通过我们的工作带给人类收入和生活目标,
but AGI will in any case transform life as we know it with humans no longer being the most intelligent.
但不论如何,AGI都将改变我们所熟知的生活,人类将不再是最具智慧的存在。
Now, if the water level does reach AGI, then further AI progress will be driven mainly not by humans but by AI,
如果海平面真的上升到AGI的高度,那么进一步的人工智能进展将会由人工智能来引领,而非人类,
which means that there's a possibility that further AI progress
那就意味着有可能,进一步提升人工智能水平将会进行得非常迅速,
could be way faster than the typical human research and development timescale of years,
远超以年为单位的典型人类研发速度,
raising the controversial possibility of an intelligence explosion
从而引发一个极具争议的可能性--智能爆炸,
where recursively self-improving AI rapidly leaves human intelligence far behind, creating what's known as superintelligence.
即能够不断做自我改进的人工智能很快就会遥遥领先人类,创造出所谓的超级人工智能。
Alright, reality check: Are we going to get AGI any time soon?
好了,回归现实:我们很快就会有AGI吗?
Some famous AI researchers, like Rodney Brooks, think it won't happen for hundreds of years.
一些著名的人工智能研究者,如罗德尼·布鲁克斯,认为数百年内都不会发生。
But others, like Google DeepMind founder Demis Hassabis,
但是其他人,如谷歌DeepMind公司的创始人德米斯·哈萨比斯,
are more optimistic and are working to try to make it happen much sooner.
就比较乐观,且努力想要它尽早实现。
And recent surveys have shown that most AI researchers actually share Demis's optimism,
近期的调查显示,大部分的人工智能研究者其实都和德米斯一样持乐观态度,
expecting that we will get AGI within decades, so within the lifetime of many of us, which begs the question -- and then what?
预期我们在几十年内就会有AGI,也就是在我们许多人的有生之年,这就让人不禁想问--那么接下来呢?
What do we want the role of humans to be if machines can do everything better and cheaper than us?
如果什么事情机器都能做得比人好,成本也更低,那么人类又该扮演怎样的角色?
The way I see it, we face a choice. One option is to be complacent.
依我所见,我们面临一个选择。选择之一是要自我满足。
We can say, "Oh, let's just build machines that can do everything we can do and not worry about the consequences.
我们可以说,“我们来建造机器,让它来帮助我们做一切事情,不要担心后果。
Come on, if we build technology that makes all humans obsolete, what could possibly go wrong?"
拜托,如果我们能打造出让全人类都被淘汰的机器,还有什么会出错吗?”
But I think that would be embarrassingly lame. I think we should be more ambitious -- in the spirit of TED.
但我觉得那样真是差劲到悲哀。我认为我们应该更有野心--带着TED的精神。
Let's envision a truly inspiring high-tech future and try to steer towards it.
让我们来想象一个真正鼓舞人心的高科技未来,并试着朝着它前进。
This brings us to the second part of our rocket metaphor: the steering.
这就把我们带到了火箭比喻的第二部分:操控。
We're making AI more powerful, but how can we steer towards a future where AI helps humanity flourish rather than flounder?
我们让人工智能的力量更强大,但是我们要如何朝着人工智能帮助人类未来更加繁盛,而非变得挣扎的目标不断前进呢?
To help with this, I cofounded the Future of Life Institute.
为了协助实现它,我联合创办了“未来生命研究所”。
It's a small nonprofit promoting beneficial technology use,
它是个小型的非营利机构,旨在促进有益的科技使用,
and our goal is simply for the future of life to exist and to be as inspiring as possible.
我们的目标很简单,就是希望生命的未来能够存在,且越是鼓舞人心越好。
You know, I love technology. Technology is why today is better than the Stone Age.
你们知道的,我爱科技。现今之所以比石器时代更好,就是因为科技。
And I'm optimistic that we can create a really inspiring high-tech future ... if -- and this is a big if
我很乐观的认为我们能创造出一个非常鼓舞人心的高科技未来...如果--这个“如果”很重要,
if we win the wisdom race -- the race between the growing power of our technology and the growing wisdom with which we manage it.
如果我们能赢得这场关于智慧的赛跑--这场赛跑的两位竞争者便是我们不断成长的科技力量以及我们用来管理科技的不断成长的智慧。
But this is going to require a change of strategy because our old strategy has been learning from mistakes.
但这也需要策略上的改变,因为我们以往的策略往往都是从错误中学习的。
We invented fire, screwed up a bunch of times -- invented the fire extinguisher.
我们发明了火,搞砸了很多次之后--发明出了灭火器。
We invented the car, screwed up a bunch of times -- invented the traffic light, the seat belt and the airbag,
我们发明了汽车,又一不小心搞砸了很多次--发明了红绿灯,安全带和安全气囊,
but with more powerful technology like nuclear weapons and AGI, learning from mistakes is a lousy strategy, don't you think?
但对于更强大的科技,像是核武器和AGI,要去从错误中学习,似乎是个比较糟糕的策略,你们怎么看?
It's much better to be proactive rather than reactive;
事前的准备比事后的补救要好得多;
plan ahead and get things right the first time because that might be the only time we'll get.
提早做计划,争取一次成功,因为有时我们或许没有第二次机会。
But it is funny because sometimes people tell me, "Max, shhh, don't talk like that. That's Luddite scaremongering."
但有趣的是,有时候有人告诉我,“麦克斯,嘘,别那样说话。那是勒德分子在制造恐慌。”
But it's not scaremongering. It's what we at MIT call safety engineering.
但这并不是制造恐慌。在麻省理工学院,我们称之为安全工程。
Think about it: before NASA launched the Apollo 11 mission,
想想看:在美国航天局(NASA)部署阿波罗11号任务之前,
they systematically thought through everything that could go wrong
他们全面地设想过所有可能出错的状况,
when you put people on top of explosive fuel tanks and launch them somewhere where no one could help them.
毕竟是要把人类放进易燃易爆的太空舱里,再将他们发射上一个无人能助的境遇。
And there was a lot that could go wrong. Was that scaremongering? No.
可能出错的情况非常多,那是在制造恐慌吗?不是。

That was precisely the safety engineering that ensured the success of the mission,
那正是在做安全工程的工作,以确保任务顺利进行,
and that is precisely the strategy I think we should take with AGI.
这正是我认为处理AGI时应该采取的策略。
Think through what can go wrong to make sure it goes right.
想清楚什么可能出错,然后避免它的发生。
So in this spirit, we've organized conferences,
基于这样的精神,我们组织了几场大会,
bringing together leading AI researchers and other thinkers to discuss how to grow this wisdom we need to keep AI beneficial.
邀请了世界顶尖的人工智能研究者和其他有想法的专业人士,来探讨如何发展这样的智慧,从而确保人工智能对人类有益。
Our last conference was in Asilomar, California last year and produced this list of 23 principles
我们最近的一次大会去年在加州的阿西洛玛举行,我们得出了23条原则,
which have since been signed by over 1,000 AI researchers and key industry leaders,
自此已经有超过1000位人工智能研究者,以及核心企业的领导人参与签署,
and I want to tell you about three of these principles.
我想要和各位分享其中的三项原则。
One is that we should avoid an arms race and lethal autonomous weapons.
第一,我们需要避免军备竞赛,以及致命的自动化武器出现。
The idea here is that any science can be used for new ways of helping people or new ways of harming people.
其中的想法是,任何科学都可以用新的方式来帮助人们,同样也可以以新的方式对我们造成伤害。
For example, biology and chemistry are much more likely to be used for new medicines or new cures than for new ways of killing people,
例如,生物和化学更可能被用来制造新的医药用品,而非带来新的杀人方法,
because biologists and chemists pushed hard -- and successfully -- for bans on biological and chemical weapons.
因为生物学家和化学家很努力--也很成功地--在推动禁止生化武器的出现。
And in the same spirit, most AI researchers want to stigmatize and ban lethal autonomous weapons.
基于同样的精神,大部分的人工智能研究者也希望污名化并禁止致命的自动化武器。
Another Asilomar AI principle is that we should mitigate AI-fueled income inequality.
另一条阿西洛玛人工智能会议的原则是,我们应该要减轻由人工智能引起的收入不平等。
I think that if we can grow the economic pie dramatically with AI
我认为,如果我们能利用人工智能让经济蛋糕大幅增长,
and we still can't figure out how to divide this pie so that everyone is better off, then shame on us.
但却没能想出如何来分配它才能让所有人受益,那可太丢人了。
Alright, now raise your hand if your computer has ever crashed.
那么问一个问题,如果你的电脑有死机过的,请举手。
Wow, that's a lot of hands.
哇,好多人举手。
Well, then you'll appreciate this principle that we should invest much more in AI safety research,
那么你们就会感谢这条准则,我们应该要投入更多以确保对人工智能安全性的研究,
because as we put AI in charge of even more decisions and infrastructure,
因为我们让人工智能在主导更多决策以及基础设施时,
we need to figure out how to transform today's buggy and hackable computers into robust AI systems that we can really trust,
我们就要想办法把如今漏洞百出、易被黑客入侵的电脑,转变为我们能真正信赖的稳健人工智能系统,
because otherwise, all this awesome new technology can malfunction and harm us, or get hacked and be turned against us.
否则的话,这些了不起的新技术就会出现故障,反而伤害到我们,或被黑入以后转而对抗我们。
And this AI safety work has to include work on AI value alignment,
这项人工智能安全性的工作必须包含对人工智能价值观的校准,
because the real threat from AGI isn't malice, like in silly Hollywood movies, but competence
因为AGI会带来的威胁通常并非出于恶意,就像是愚蠢的好莱坞电影中表现的那样,而是源于能力,
AGI accomplishing goals that just aren't aligned with ours.
AGI想完成的目标与我们的目标背道而驰。
For example, when we humans drove the West African black rhino extinct,
例如,当我们人类促使了西非的黑犀牛灭绝时,
we didn't do it because we were a bunch of evil rhinoceros haters, did we?
并不是因为我们是邪恶且痛恨犀牛的家伙,对吧?
We did it because we were smarter than them and our goals weren't aligned with theirs.
我们能够做到只是因为我们比它们聪明,而我们的目标和它们的目标相违背。
But AGI is by definition smarter than us, so to make sure that we don't put ourselves in the position of those rhinos if we create AGI,
但是AGI在定义上就比我们聪明,所以必须确保我们别让自己落到了黑犀牛的境遇,如果我们发明AGI,
we need to figure out how to make machines understand our goals, adopt our goals and retain our goals.
首先就要解决如何让机器明白我们的目标,选择采用我们的目标,并一直跟随我们的目标。
And whose goals should these be, anyway? Which goals should they be?
不过,这些目标到底是谁的目标?这些目标到底是什么目标?
This brings us to the third part of our rocket metaphor: the destination.
这就引出了火箭比喻的第三部分:目的地。
We're making AI more powerful, trying to figure out how to steer it, but where do we want to go with it?
我们要让人工智能的力量更强大,试图想办法来操控它,但我们到底想把它带去何方呢?
This is the elephant in the room that almost nobody talks about -- not even here at TED
这就像是房间里的大象,显而易见却无人问津--甚至在TED也没人谈论,
because we're so fixated on short-term AI challenges.
因为我们都把目光聚焦于短期的人工智能挑战。
Look, our species is trying to build AGI, motivated by curiosity and economics,
你们看,我们人类正在试图建造AGI,由我们的好奇心以及经济需求所带动,
but what sort of future society are we hoping for if we succeed?
但如果我们能成功,希望能创造出怎样的未来社会呢?
We did an opinion poll on this recently, and I was struck to see that most people actually want us to build superintelligence:
最近对于这一点,我们做了一次观点投票,结果很让我惊讶,大部分的人其实希望我们能打造出超级人工智能:
AI that's vastly smarter than us in all ways.
在各个方面都比我们聪明的人工智能。
What there was the greatest agreement on was that we should be ambitious and help life spread into the cosmos,
大家最有共识的一点是,我们应当有雄心壮志,并帮助生命在宇宙中传播,
but there was much less agreement about who or what should be in charge.
但对于应该由谁,或者什么来主导,大家就各持己见了。
And I was actually quite amused to see that there are some people who want it to be just machines.
有件事我觉得非常奇妙,就是我看到有些人居然表示让机器主导就好了。
And there was total disagreement about what the role of humans should be, even at the most basic level,
至于人类该扮演怎样的角色,大家的意见简直就是大相径庭,即使在最基础的层面上也是,
so let's take a closer look at possible futures that we might choose to steer toward, alright?
那么让我们进一步去看看这些可能的未来,我们可能去往目的地,怎么样?
So don't get me wrong here. I'm not talking about space travel, merely about humanity's metaphorical journey into the future.
别误会我的意思。我不是在谈论太空旅行,只是打个比方,人类进入未来的这个旅程。
So one option that some of my AI colleagues like is to build superintelligence and keep it under human control,
我的一些研究人工智能的同事很喜欢的一个选择就是打造人工智能,并确保它被人类所控制,
like an enslaved god, disconnected from the internet and used to create unimaginable technology and wealth for whoever controls it.
就像被奴役起来的神一样,网络连接被断开,为它的操控者创造出无法想象的科技和财富。
But Lord Acton warned us that power corrupts, and absolute power corrupts absolutely,
但是艾克顿勋爵警告过我们,权力会带来腐败,绝对的权力终将带来绝对的腐败,
so you might worry that maybe we humans just aren't smart enough, or wise enough rather, to handle this much power.
所以也许你会担心我们人类就是还不够聪明,或者不够智慧,无法妥善处理过多的权力。
Also, aside from any moral qualms you might have about enslaving superior minds,
而且,除了你可能对奴役更高等的智慧心存道德疑虑之外,
you might worry that maybe the superintelligence could outsmart us, break out and take over.
你也许会担心超级人工智能的智慧会超越我们,挣脱束缚,并夺取控制权。
But I also have colleagues who are fine with AI taking over and even causing human extinction,
但是我也有同事认为,让人工智能来操控一切也无可厚非,造成人类灭绝也无妨,
as long as we feel the AIs are our worthy descendants, like our children.
只要我们觉得人工智能配得上成为我们的后代,就像是我们的孩子。
But how would we know that the AIs have adopted our best values
但是我们如何才能知道人工智能汲取了我们最好的价值观,
and aren't just unconscious zombies tricking us into anthropomorphizing them?
而不只是没有意识的僵尸,诱使我们将它们人格化?
Also, shouldn't those people who don't want human extinction have a say in the matter, too?
此外,那些绝对不想看到人类灭绝的人,对此应该也有话要说吧?
Now, if you didn't like either of those two high-tech options,
如果这两个高科技的选择都不是你所希望的,
it's important to remember that low-tech is suicide from a cosmic perspective,
请记住,从宇宙的角度来看,固守低级科技无异于自杀,
because if we don't go far beyond today's technology, the question isn't whether humanity is going to go extinct,
因为如果我们不能远远超越今天的科技,问题就不再是人类是否会灭绝,
merely whether we're going to get taken out by the next killer asteroid,
而只是我们会先被下一颗杀手级小行星撞击地球,
supervolcano or some other problem that better technology could have solved.
还是超级火山爆发,亦或是一些其他本该可以由更好的科技来解决的问题。
So, how about having our cake and eating it
所以,何不鱼与熊掌兼得...
with AGI that's not enslaved but treats us well because its values are aligned with ours?
拥有一个不被奴役、却因价值观与我们一致而善待我们的AGI?
This is the gist of what Eliezer Yudkowsky has called "friendly AI," and if we can do this, it could be awesome.
这就是埃利泽·尤德科夫斯基所说的“友善的人工智能”的要旨,若我们能做到这点,那简直太棒了。
It could not only eliminate negative experiences like disease, poverty, crime and other suffering,
它不仅能消除疾病、贫穷、犯罪等种种负面经历和其他苦难,
but it could also give us the freedom to choose from a fantastic new diversity of positive experiences
还能给予我们自由,让我们从各种奇妙多样的全新正面体验中进行选择,
basically making us the masters of our own destiny.
让我们成为自己命运的主人。
So in summary, our situation with technology is complicated, but the big picture is rather simple.
总的来说,在科技上,我们的现状很复杂,但是若从大局来看,又很简单。
Most AI researchers expect AGI within decades, and if we just bumble into this unprepared,
多数人工智能研究者预期AGI会在未来几十年内实现,如果我们毫无准备地一头撞上它,
it will probably be the biggest mistake in human history -- let's face it.
就可能成为人类历史上最大的一个错误--我们要面对现实。
It could enable brutal, global dictatorship with unprecedented inequality, surveillance and suffering,
它可能导致残酷的全球独裁成为现实,造成前所未有的不平等、监控和苦难,
and maybe even human extinction. But if we steer carefully, we could end up in a fantastic future where everybody's better off:
或许甚至导致人类灭绝。但是如果我们能小心操控,我们可能会有个美好的未来,人人都会受益的未来,
the poor are richer, the rich are richer, everybody is healthy and free to live out their dreams.
穷人变得富有,富人变得更富有,每个人都是健康的,能自由地去实现他们的梦想。
Now, hang on. Do you folks want the future that's politically right or left?
不过先别急。你们希望未来的政治是左派还是右派?
Do you want the pious society with strict moral rules, or do you want a hedonistic free-for-all, more like Burning Man 24/7?
你们想要一个有严格道德准则的社会,还是一个人人可参与的享乐主义社会,更像是个无时无刻不在运转的火人盛会?
Do you want beautiful beaches, forests and lakes,
你们想要美丽的海滩、森林和湖泊,
or would you prefer to rearrange some of those atoms with the computers, enabling virtual experiences?
还是更愿意用计算机重新排列其中的一些原子,来实现各种虚拟体验?
With friendly AI, we could simply build all of these societies
有了友善的人工智能,我们就能轻而易举地建立这些社会,
and give people the freedom to choose which one they want to live in
让大家有自由去选择想要生活在怎样的社会里,
because we would no longer be limited by our intelligence, merely by the laws of physics.
因为我们不会再受到自身智慧的限制,唯一的限制只有物理的定律。
So the resources and space for this would be astronomical -- literally.
所以可用的资源和空间将是天文数字--字面意义上的。
So here's our choice. We can either be complacent about our future,
我们的选择如下:我们可以对未来感到自满,
taking as an article of blind faith that any new technology is guaranteed to be beneficial,
带着盲目的信念,相信任何科技必定是有益的,
and just repeat that to ourselves as a mantra over and over and over again as we drift like a rudderless ship towards our own obsolescence.
并将它当作咒语一般,一遍又一遍地对自己默念,同时像一艘无舵之船,漂向自我淘汰的结局。
Or we can be ambitious -- thinking hard about how to steer our technology and where we want to go with it to create the age of amazement.
或者,我们可以拥有雄心壮志--努力去找到操控我们科技的方法,以及向往的目的地,创造出真正令人惊奇的时代。
We're all here to celebrate the age of amazement,
我们相聚在这里,赞颂这令人惊奇的时代,
and I feel that its essence should lie in becoming not overpowered but empowered by our technology. Thank you.
我觉得,它的精髓应当是,让科技赋予我们力量,而非反过来受控于它。谢谢大家。
