Robots are not just taking people's jobs, they are starting to hand them out too

Source: 可可英语 (Kekenet) | Editor: shaun

Robots are not just taking people’s jobs away, they are beginning to hand them out, too.


Go to any recruitment industry event and you will find the air is thick with terms like machine learning, big data and predictive analytics.


The argument for using these tools in recruitment is simple.


Robo-recruiters can sift through thousands of job candidates far more efficiently than humans.


They can also do it more fairly.


Since they do not harbour conscious or unconscious human biases, they will recruit a more diverse and meritocratic workforce.


This is a seductive idea but it is also dangerous.


Algorithms are not inherently neutral just because they see the world in zeros and ones.


For a start, any machine learning algorithm is only as good as the training data from which it learns.


Take the PhD thesis of academic researcher Colin Lee, released to the press this year. He analysed data on the success or failure of 441,769 job applications and built a model that could predict with 70 to 80 per cent accuracy which candidates would be invited to interview.
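As an illustrative sketch only (synthetic data and hypothetical field names, not Lee's actual model), a naive frequency-based "robo-recruiter" trained on past decisions simply reproduces whatever preferences those decisions contained:

```python
# Illustrative sketch: a "robo-recruiter" that learns invite rates from
# historical decisions. If past recruiters favoured mid-career applicants,
# the model reproduces that preference exactly -- it can be no fairer than
# the training data it learns from.
from collections import defaultdict

def train_invite_model(history):
    """history: list of (age_band, was_invited) pairs from past decisions."""
    counts = defaultdict(lambda: [0, 0])  # age_band -> [invited, total]
    for age_band, invited in history:
        counts[age_band][0] += int(invited)
        counts[age_band][1] += 1
    rates = {band: inv / total for band, (inv, total) in counts.items()}
    # "Predict" an invite whenever the historical invite rate exceeds 50%.
    return lambda age_band: rates.get(age_band, 0.0) > 0.5

# Synthetic history in which recruiters favoured mid-career candidates.
history = (
    [("under-25", False)] * 8 + [("under-25", True)] * 2 +
    [("25-50", True)] * 7 + [("25-50", False)] * 3 +
    [("over-50", False)] * 9 + [("over-50", True)] * 1
)
model = train_invite_model(history)
print(model("25-50"))    # True: the tilt toward mid-career applicants survives
print(model("over-50"))  # False: older candidates are still screened out
```

The point is not the crude 50 per cent cut-off; any model fitted to those decisions, however sophisticated, inherits the same pattern.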


The press release plugged this algorithm as a potential tool to screen a large number of CVs while avoiding human error and unconscious bias.


But a model like this would absorb any human biases at work in the original recruitment decisions.


For example, the research found that age was the biggest predictor of being invited to interview, with the youngest and the oldest applicants least likely to be successful.


You might think it fair enough that inexperienced youngsters do badly, but the routine rejection of older candidates seems like something to investigate rather than codify and perpetuate.


Mr Lee acknowledges these problems and suggests it would be better to strip the CVs of attributes such as gender, age and ethnicity before using them.


Even then, algorithms can wind up discriminating.


In a paper published this year, academics Solon Barocas and Andrew Selbst use the example of an employer who wants to select those candidates most likely to stay for the long term.


If the historical data show women tend to stay in jobs for a significantly shorter time than men (possibly because they leave when they have children), the algorithm will probably discriminate against them on the basis of attributes that are a reliable proxy for gender.


Or how about the distance a candidate lives from the office? That might well be a good predictor of attendance or longevity at the company; but it could also inadvertently discriminate against some groups, since neighbourhoods can have different ethnic or age profiles.
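A minimal sketch of this proxy effect, with made-up numbers: the selection rule below never reads the gender column, yet a correlated feature (a long career gap, standing in here for child-rearing leave) produces the same skewed outcome:

```python
# Illustrative sketch: stripping the gender column does not help if a
# correlated feature remains. The rule below only looks at career gaps,
# yet its outcome differs sharply by gender -- discrimination by proxy.
candidates = [
    # (gender, career_gap_years) -- gender is kept only to audit the
    # outcome; the selection rule never reads it.
    ("F", 2.0), ("F", 1.5), ("F", 0.0), ("F", 2.5),
    ("M", 0.0), ("M", 0.5), ("M", 0.0), ("M", 1.0),
]

def shortlisted(career_gap_years):
    # Rule "learned" from history: long gaps predicted shorter tenure.
    return career_gap_years < 1.0

selected = [(g, gap) for g, gap in candidates if shortlisted(gap)]
rate_f = sum(1 for g, _ in selected if g == "F") / 4
rate_m = sum(1 for g, _ in selected if g == "M") / 4
print(rate_f, rate_m)  # 0.25 vs 0.75: a disparate outcome without ever reading gender
```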


These scenarios raise the tricky question of whether it is wrong to discriminate even when it is rational and unintended. This is murky legal territory.


In the US, the doctrine of disparate impact outlaws ostensibly neutral employment practices that disproportionately harm protected classes, even if the employer does not intend to discriminate.
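One common yardstick here, not cited in the article but standard in US enforcement practice, is the EEOC's "four-fifths rule": a protected group's selection rate below 80 per cent of the highest group's rate is taken as evidence of adverse impact. A sketch:

```python
# Illustrative sketch of the EEOC four-fifths heuristic for adverse impact:
# flag a practice when the protected group's selection rate falls below
# 80% of the reference group's rate. Numbers below are hypothetical.
def adverse_impact(rate_protected, rate_reference):
    """Return True if the selection-rate ratio is under the 0.8 threshold."""
    return rate_protected / rate_reference < 0.8

print(adverse_impact(0.25, 0.75))  # True: a 33% ratio, well under the threshold
print(adverse_impact(0.60, 0.70))  # False: a ~86% ratio passes the screen
```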


But employers can successfully defend themselves if they can prove there is a strong business case for what they are doing.


If the intention of the algorithm is simply to recruit the best people for the job, that may be a good enough defence.


Still, it is clear that employers who want a more diverse workforce cannot assume that all they need to do is turn over recruitment to a computer.


If that is what they want, they will need to use data more imaginatively.


Instead of taking their own company culture as a given and looking for the candidates statistically most likely to prosper within it, for example, they could seek out data about where (and in which circumstances) a more diverse set of workers thrive.


Machine learning will not propel your workforce into the future if the only thing it learns from is your past.

