
日期:2016/9/21 8:34:13

The dangerous attraction of the robo-recruiter

向机器人求个职

 

奥康纳:机器人并不只抢走人类的工作,它们也开始招聘人类员工了,因为它们可以快速筛选应聘者,但这很危险。

 

Robots are not just taking people’s jobs away, they are beginning to hand them out, too. Go to any recruitment industry event and you will find the air is thick with terms like “machine learning”, “big data” and “predictive analytics”.

机器人并不只抢走人类的工作,它们也开始向人类发放工作岗位了。参加招聘行业的任何一场活动,你都会发现空气中弥漫着像“机器学习”、“大数据”和“预测分析”这样的字眼。

 

The argument for using these tools in recruitment is simple. Robo-recruiters can sift through thousands of job candidates far more efficiently than humans. They can also do it more fairly. Since they do not harbour conscious or unconscious human biases, they will recruit a more diverse and meritocratic workforce.

在招聘中使用这些工具的理由很简单。机器人招聘者可以快速筛选数以千计的应聘者,效率远高于人类。它们还能做到更加公平。因为它们不会像人类那样带着有意或无意的偏见,它们会招聘到一批更多元化和择优录用的员工。

 

This is a seductive idea but it is also dangerous. Algorithms are not inherently neutral just because they see the world in zeros and ones.

这是个很诱人的想法,但也是危险的。算法并不会仅仅因为用“0”和“1”看待世界,就天然是中立的。

 

For a start, any machine learning algorithm is only as good as the training data from which it learns. Take the PhD thesis of academic researcher Colin Lee, released to the press this year. He analysed data on the success or failure of 441,769 job applications and built a model that could predict with 70 to 80 per cent accuracy which candidates would be invited to interview. The press release plugged this algorithm as a potential tool to screen a large number of CVs while avoiding “human error and unconscious bias”.

首先,任何机器学习算法的好坏,都取决于它所学习的训练数据。以学术研究者科林·李(Colin Lee)今年向媒体发布的博士论文为例:他分析了441,769份或成功或失败的求职申请,建立了一个准确率达70%至80%的模型,可预测哪些应聘者会被邀请参加面试。新闻稿称,这一算法有望用作筛选大量简历的工具,同时避免“人为错误和无意识偏见”。
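The point that a model is only as good as its training data can be sketched with a toy example (all data below is fabricated purely for illustration, not drawn from Mr Lee's study): a "model" fitted to historical interview decisions that favoured mid-career applicants simply reproduces that age skew.

下面用一个玩具示例来演示“模型不会比训练数据更好”这一点(以下数据纯属虚构,仅作说明,并非出自李的研究):根据偏向中年应聘者的历史面试决定拟合出的“模型”,只会复制这种年龄偏斜。

```python
import random

random.seed(0)

# Fabricated history: past recruiters invited mid-career applicants
# far more often than the youngest or oldest ones.
def historical_decision(age):
    if 30 <= age <= 45:
        return random.random() < 0.6   # favoured band
    return random.random() < 0.1       # youngest/oldest rarely invited

history = [(age, historical_decision(age))
           for age in (random.randint(20, 60) for _ in range(10_000))]

# "Training": estimate the invitation rate per age band, which is all a
# model fitted to these historical labels can ever learn.
def band(age):
    return "mid" if 30 <= age <= 45 else "edge"

rates = {}
for b in ("mid", "edge"):
    decisions = [invited for age, invited in history if band(age) == b]
    rates[b] = sum(decisions) / len(decisions)

# The learned model simply reproduces the historical skew
# (roughly 0.6 for "mid" vs roughly 0.1 for "edge").
print(rates)
```

Nothing in the fitting step knows or cares whether the historical decisions were fair; the skew in the labels becomes the skew in the predictions.

无论历史决定是否公平,拟合过程一概照单全收;标签中的偏斜,就成了预测中的偏斜。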

 

But a model like this would absorb any human biases at work in the original recruitment decisions. For example, the research found that age was the biggest predictor of being invited to interview, with the youngest and the oldest applicants least likely to be successful. You might think it fair enough that inexperienced youngsters do badly, but the routine rejection of older candidates seems like something to investigate rather than codify and perpetuate.

但这样的模型会吸收原初招聘决定中存在的各种人为偏见。例如,上述研究发现,年龄是能否获得面试邀请的最强预测因素,最年轻和最年长的应聘者成功的可能性最低。你可能觉得缺乏经验的年轻人表现不佳理所当然,但年长应聘者被例行拒绝的现象,似乎应该加以调查,而不是被写入程序并得以延续。

 

Mr Lee acknowledges these problems and suggests it would be better to strip the CVs of attributes such as gender, age and ethnicity before using them. Even then, algorithms can wind up discriminating. In a paper published this year, academics Solon Barocas and Andrew Selbst use the example of an employer who wants to select those candidates most likely to stay for the long term. If the historical data show women tend to stay in jobs for a significantly shorter time than men (possibly because they leave when they have children), the algorithm will probably discriminate against them on the basis of attributes that are a reliable proxy for gender.

李承认存在这些问题,并建议最好先从简历中剔除性别、年龄和种族等属性,再加以使用。但即便如此,算法仍有可能造成歧视。在今年发表的一篇论文中,索伦·巴洛卡斯(Solon Barocas)和安德鲁·塞尔布斯特(Andrew Selbst)这两位学者举了一个例子:雇主希望挑选最有可能长期留任的应聘者。如果历史数据显示,女性在岗位上停留的时间明显短于男性(可能因为她们有了孩子便会离职),算法就很可能依据那些能可靠代理性别的属性,作出对女性不利的判断。
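The proxy problem Barocas and Selbst describe can also be sketched (again with fabricated data and a hypothetical "proxy" feature): even when the gender column is removed, ranking candidates by a feature that happens to correlate with gender still produces a heavily skewed shortlist.

巴洛卡斯和塞尔布斯特所说的“代理变量”问题同样可以用一个示例来演示(数据仍为虚构,“代理”特征亦属假设):即便剔除了性别一栏,按一个恰好与性别相关的特征给应聘者排序,得到的候选名单依然严重偏斜。

```python
import random

random.seed(1)

# Fabricated data: in this invented history, a feature the model can see
# (say, years since the last career break) correlates with gender,
# even after the gender column itself is stripped out.
people = []
for _ in range(10_000):
    gender = random.choice(["F", "M"])
    proxy = random.gauss(2.0 if gender == "F" else 4.0, 1.0)
    people.append({"gender": gender, "proxy": proxy})

# The "debiased" model never sees gender: it simply shortlists the
# candidates with the highest proxy score, as a predictor of long tenure.
shortlist = sorted(people, key=lambda p: p["proxy"], reverse=True)[:1000]

share_f = sum(p["gender"] == "F" for p in shortlist) / len(shortlist)
# Far below women's ~50% share of the applicant pool.
print(f"women in shortlist: {share_f:.0%}")
```

Distance from the office, discussed below, would behave the same way: the model only needs a feature that tracks the protected attribute, not the attribute itself.

下文谈到的通勤距离也是同理:模型并不需要受保护属性本身,只需要一个与之相关的特征即可。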

 

Or how about the distance a candidate lives from the office? That might well be a good predictor of attendance or longevity at the company; but it could also inadvertently discriminate against some groups, since neighbourhoods can have different ethnic or age profiles.

又比如应聘者住址与办公室之间的距离?它很可能是预测出勤率或在职年限的良好因素;但它也可能在无意间歧视某些群体,因为不同社区的种族和年龄构成各不相同。

 

These scenarios raise the tricky question of whether it is wrong to discriminate even when it is rational and unintended. This is murky legal territory. In the US, the doctrine of “disparate impact” outlaws ostensibly neutral employment practices that disproportionately harm “protected classes”, even if the employer does not intend to discriminate. But employers can successfully defend themselves if they can prove there is a strong business case for what they are doing. If the intention of the algorithm is simply to recruit the best people for the job, that may be a good enough defence.

这些情形提出了一个棘手的问题:即便歧视是理性且无意的,它是否仍然错误?这是一个模糊的法律领域。在美国,根据“差别影响”(disparate impact)原则,貌似中立的雇佣做法若不成比例地伤害了“受保护阶层”,即属违法,即便雇主并非有意歧视。但雇主若能证明该做法有充分的商业理由,就能成功为自己辩护。如果算法的意图仅仅是为相关职位招募最佳人选,这或许就是一个足够有力的辩护理由。

 

Still, it is clear that employers who want a more diverse workforce cannot assume that all they need to do is turn over recruitment to a computer. If that is what they want, they will need to use data more imaginatively.

话虽如此,那些希望拥有更多元化的员工队伍的雇主,显然不能想当然地认为只需把招聘交给电脑去做。假如这正是他们想要的,那他们也得把数据运用得更富想象力一些。

 

Instead of taking their own company culture as a given and looking for the candidates statistically most likely to prosper within it, for example, they could seek out data about where (and in which circumstances) a more diverse set of workers thrive.

比如说,与其将自己的公司文化视为既定条件,去寻找统计上最有可能在这种文化中如鱼得水的人选,他们不如去寻找数据,看看更多元化的员工群体在哪些地方、哪些环境下能够茁壮成长。

 

Machine learning will not propel your workforce into the future if the only thing it learns from is your past.

如果机器学习唯一学到的只是你的过去,那么它将无法推动你的员工队伍走向未来。

 

本文由:译联广州翻译公司免费发布:供学习参考,禁止商用与转载。