How deepfakes undermine truth and threaten democracy

Rana Ayyub is a journalist in India whose work has exposed government corruption and human rights violations. And over the years, she's gotten used to vitriol and controversy around her work. But none of it could have prepared her for what she faced in April 2018. She was sitting in a café with a friend when she first saw it: a two-minute, 20-second video of her engaged in a sex act. And she couldn't believe her eyes. She had never made a sex video. But unfortunately, thousands upon thousands of people would believe it was her.

I interviewed Ms. Ayyub about three months ago, in connection with my book on sexual privacy. I'm a law professor, lawyer and civil rights advocate. So it's incredibly frustrating knowing that right now, law could do very little to help her.

And as we talked, she explained that she should have seen the fake sex video coming. She said, "After all, sex is so often used to demean and to shame women, especially minority women, and especially minority women who dare to challenge powerful men," as she had in her work.

The fake sex video went viral in 48 hours. All of her online accounts were flooded with screenshots of the video, with graphic rape and death threats and with slurs about her Muslim faith. Online posts suggested that she was "available" for sex. And she was doxed, which means that her home address and her cell phone number were spread across the internet. The video was shared more than 40,000 times.

Now, when someone is targeted with this kind of cybermob attack, the harm is profound. Rana Ayyub's life was turned upside down. For weeks, she could hardly eat or speak. She stopped writing and closed all of her social media accounts, which is, you know, a tough thing to do when you're a journalist. And she was afraid to go outside her family's home. What if the posters made good on their threats?

The UN Council on Human Rights confirmed that she wasn't being crazy. It issued a public statement saying that they were worried about her safety.

What Rana Ayyub faced was a deepfake: machine-learning technology that manipulates or fabricates audio and video recordings to show people doing and saying things that they never did or said. Deepfakes appear authentic and realistic, but they're not; they're total falsehoods. Although the technology is still developing in its sophistication, it is widely available.

Now, the most recent attention to deepfakes arose, as so many things do online, with pornography. In early 2018, someone posted a tool on Reddit to allow users to insert faces into porn videos. And what followed was a cascade of fake porn videos featuring people's favorite female celebrities. And today, you can go on YouTube and pull up countless tutorials with step-by-step instructions on how to make a deepfake on your desktop application. And soon we may be even able to make them on our cell phones.

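The core technique behind these tools is easier to grasp with a sketch. What follows is a minimal, untrained illustration of the shared-encoder, two-decoder autoencoder design popularized by the early face-swap tools: one encoder learns a common face representation, each decoder learns to reconstruct one person, and routing person A's encoding through person B's decoder produces the swap. Every name, layer size and shape here is an illustrative assumption, not the code of any actual tool; a working system would also need face detection, alignment, large per-person datasets and frame blending.

```python
import torch
import torch.nn as nn

# Minimal sketch of the classic face-swap autoencoder idea:
# one shared encoder learns a common "face space"; one decoder
# per identity learns to reconstruct that person's face.
# Feeding person A's encoding into person B's decoder produces
# the swap. All shapes and sizes here are illustrative only.

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),               # shared latent code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a = Decoder()  # would be trained only on person A's faces
decoder_b = Decoder()  # would be trained only on person B's faces

# After training, the "swap": encode a frame of A, decode it as B.
frame_of_a = torch.rand(1, 3, 64, 64)    # stand-in for a video frame
fake_b = decoder_b(encoder(frame_of_a))  # B's face, A's pose and expression
print(fake_b.shape)                      # torch.Size([1, 3, 64, 64])
```

The design choice worth noticing is the shared encoder: it is what forces pose and expression into a representation that both decoders understand, which is why the output keeps the original performance while changing the face.
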
Now, it's the interaction of some of our most basic human frailties and network tools that can turn deepfakes into weapons. So let me explain. As human beings, we have a visceral reaction to audio and video. We believe they're true, on the notion that of course you can believe what your eyes and ears are telling you. And it's that mechanism that might undermine our shared sense of reality. Although we believe deepfakes to be true, they're not.

And we're attracted to the salacious, the provocative. We tend to believe and to share information that's negative and novel. And researchers have found that online hoaxes spread 10 times faster than accurate stories.

Now, we're also drawn to information that aligns with our viewpoints. Psychologists call that tendency "confirmation bias." And social media platforms supercharge that tendency, by allowing us to instantly and widely share information that accords with our viewpoints.

Now, deepfakes have the potential to cause grave individual and societal harm. So, imagine a deepfake that shows American soldiers in Afghanistan burning a Koran. You can imagine that that deepfake would provoke violence against those soldiers. And what if the very next day there's another deepfake that drops, that shows a well-known imam based in London praising the attack on those soldiers? We might see violence and civil unrest, not only in Afghanistan and the United Kingdom, but across the globe.

And you might say to me, "Come on, Danielle, that's far-fetched." But it's not. We've seen falsehoods spread on WhatsApp and other online message services lead to violence against ethnic minorities. And that was just text -- imagine if it were video.

Now, deepfakes have the potential to corrode the trust that we have in democratic institutions.

So, imagine the night before an election. There's a deepfake showing one of the major party candidates gravely sick. The deepfake could tip the election and shake our sense that elections are legitimate. Imagine if the night before an initial public offering of a major global bank, there was a deepfake showing the bank's CEO drunkenly spouting conspiracy theories. The deepfake could tank the IPO, and worse, shake our sense that financial markets are stable. So deepfakes can exploit and magnify the deep distrust that we already have in politicians, business leaders and other influential leaders. They find an audience primed to believe them.

And the pursuit of truth is on the line as well. Technologists expect that with advances in AI, soon it may be difficult if not impossible to tell the difference between a real video and a fake one.

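To make "telling the difference" concrete, here is a toy version of the frame-level classification approach common in detection research: sample frames from a clip, score each with a small CNN, and average the scores. The model, layer sizes and scoring scheme are illustrative assumptions only; production detectors are far larger, and their accuracy erodes as generators improve, which is exactly the trend the technologists are warning about.

```python
import torch
import torch.nn as nn

# Toy frame-level deepfake detector: score individual frames with a
# small CNN, then average the scores over the clip. Illustrative only;
# real detectors are much larger and still lag behind new generators.

class FrameDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # logit: positive leans "fake"

    def forward(self, frames):                # frames: (N, 3, H, W)
        feats = self.features(frames).flatten(1)
        return self.head(feats).squeeze(1)    # per-frame logits

def clip_fake_probability(model, frames):
    """Average the per-frame fake probabilities over a sampled clip."""
    with torch.no_grad():
        return torch.sigmoid(model(frames)).mean().item()

detector = FrameDetector()          # untrained stand-in for a real model
clip = torch.rand(8, 3, 224, 224)   # 8 frames sampled from a video
print(f"P(fake) ~ {clip_fake_probability(detector, clip):.2f}")
```
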
So how can the truth emerge in a deepfake-ridden marketplace of ideas? Will we just proceed along the path of least resistance and believe what we want to believe, truth be damned? And not only might we believe the fakery, we might start disbelieving the truth. We've already seen people invoke the phenomenon of deepfakes to cast doubt on real evidence of their wrongdoing. We've heard politicians say of audio of their disturbing comments, "Come on, that's fake news. You can't believe what your eyes and ears are telling you." And it's that risk that professor Robert Chesney and I call the "liar's dividend": the risk that liars will invoke deepfakes to escape accountability for their wrongdoing.

So we've got our work cut out for us, there's no doubt about it. And we're going to need a proactive solution from tech companies, from lawmakers, law enforcers and the media. And we're going to need a healthy dose of societal resilience.

So now, we're engaged in a very public conversation about the responsibility of tech companies. And my advice to social media platforms has been to change their terms of service and community guidelines to ban deepfakes that cause harm. That determination, that's going to require human judgment, and it's expensive. But we need human beings to look at the content and context of a deepfake to figure out if it is a harmful impersonation or instead, if it's valuable satire, art or education.

So now, what about the law? Law is our educator. It teaches us about what's harmful and what's wrong. And it shapes behavior. It deters by punishing perpetrators and securing remedies for victims. Right now, law is not up to the challenge of deepfakes. Across the globe, we lack well-tailored laws that would be designed to tackle digital impersonations that invade sexual privacy, that damage reputations and that cause emotional distress. What happened to Rana Ayyub is increasingly commonplace. Yet, when she went to law enforcement in Delhi, she was told nothing could be done. And the sad truth is that the same would be true in the United States and in Europe.

So we have a legal vacuum that needs to be filled. My colleague Dr. Mary Anne Franks and I are working with US lawmakers to devise legislation that would ban harmful digital impersonations that are tantamount to identity theft. And we've seen similar moves in Iceland, the UK and Australia. But of course, that's just a small piece of the regulatory puzzle.

Now, I know law is not a cure-all. Right? It's a blunt instrument. And we've got to use it wisely. It also has some practical impediments. You can't leverage law against people you can't identify and find. And if a perpetrator lives outside the country where a victim lives, then you may not be able to insist that the perpetrator come into local courts to face justice. And so we're going to need a coordinated international response.

Education has to be part of our response as well. Law enforcers are not going to enforce laws they don't know about and proffer problems they don't understand. In my research on cyberstalking, I found that law enforcement lacked the training to understand the laws available to them and the problem of online abuse. And so often they told victims, "Just turn your computer off. Ignore it. It'll go away." And we saw that in Rana Ayyub's case. She was told, "Come on, you're making such a big deal about this. It's boys being boys." And so we need to pair new legislation with efforts at training.

And education has to be aimed at the media as well. Journalists need educating about the phenomenon of deepfakes so they don't amplify and spread them. And this is the part where we're all involved. Each and every one of us needs educating. We click, we share, we like, and we don't even think about it. We need to do better. We need far better radar for fakery.

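Part of that radar can be procedural rather than perceptual: checking provenance before sharing. As one small example of the habit, the sketch below verifies a downloaded file against a SHA-256 hash that the original publisher is assumed to have posted alongside it. It proves only that the file is bit-identical to the trusted release (standards efforts such as C2PA pursue the same goal with signed metadata), and it says nothing about content that was never published with a hash.

```python
import hashlib
from pathlib import Path

# Sketch of one low-tech provenance check: compare a file's SHA-256
# digest against a hash the original publisher is assumed to have
# posted. A match shows the file is unmodified from the trusted
# release; it cannot vouch for clips published without a hash.

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):  # hash in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

def matches_published_hash(path: Path, published_hex: str) -> bool:
    return sha256_of(path) == published_hex.lower()

# Hypothetical usage, with a hash copied from the publisher's site:
# print(matches_published_hash(Path("clip.mp4"), "9f2a..."))
```
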
So as we're working through these solutions, there's going to be a lot of suffering to go around. Rana Ayyub is still wrestling with the fallout. She still doesn't feel free to express herself on- and offline. And as she told me, she still feels like there are thousands of eyes on her naked body, even though, intellectually, she knows it wasn't her body. And she has frequent panic attacks, especially when someone she doesn't know tries to take her picture. "What if they're going to make another deepfake?" she thinks to herself.

And so for the sake of individuals like Rana Ayyub and the sake of our democracy, we need to do something right now. Thank you.
