Does AI Improve Hiring?

The impacts of automation on job recruitment

Artificial intelligence makes it much easier for many companies to hire candidates for open positions, replacing much of the traditional route of in-person job interviews, which has become far less common.

Link to the Original Article

  • AI streamlines the hiring process and saves money for businesses, but questions abound concerning hiring bias and an equitable process for all candidates.

  • Ethical concerns about AI and hiring bias are partially due to a lack of transparency about how the process works.

  • AI will be truly successful when developers integrate software that mitigates hiring bias and makes the process equitable and transparent.

Artificial intelligence has taken over a significant amount of the hiring process for many companies. Specifically, large companies have delegated a large percentage of what goes into hiring new employees (recruiting, screening, and even onboarding) to artificial intelligence. 

Of the companies who’ve adopted this new hiring technique, most report that it has improved their ability to hire high-quality employees. These improvements include more thorough shortlisting and an overall more expedient means of hiring employees. This new process is often faster, and some large companies believe it’s increased the quality of the employees they hire. 

However, various experts have ethical concerns about the advent of AI-powered hiring. In particular, they are worried that artificial intelligence may result in biases that undermine equity in the hiring process, resulting in unfair advantages for some groups over others. 

On the one hand, AI is making the hiring process quicker for businesses, cutting costs, and reliably helping to hire qualified candidates. In short, AI improves the effectiveness of recruiting new employees. 

On the other hand, the ethical implications of using AI to hire new employees do not merely reside in some hypothetical distant future: they're here now, and if nothing gets done about them, they will worsen hiring biases.  

Thus, while it’s a no-brainer to retain AI-powered recruiting, engineers must build safeguards into the AI that help eliminate bias, so that the hiring process is both efficient and equitable.  

How does AI work for recruiting? 

We’re not yet at the point with AI where a humanoid-looking robot sits at a desk and conducts your interview. That would be comical and creepy (the “uncanny valley,” as it’s called in the tech community), but it is a long way away.  

However, our sci-fi fantasies are already a reality, albeit in a subtler, narrower fashion. For large companies, artificial intelligence has already taken over the bulk of the hiring process. Specifically, 38 percent of companies now use AI in their talent acquisition efforts. In fact, nearly two-thirds (63 percent) of recruiters report that AI has completely altered the way hiring is done at their company.  

What might these changes look like? 

One of the most common artificial intelligence programs used in recruiting is the Applicant Tracking System (ATS). To put it simply, these software programs scan your resume for keywords that indicate the relevance of your application to the position you’re applying for. Sometimes, these programs look for an exact match to specific keywords. This means that, for some jobs, if your resume lacks particular keywords, it will be immediately thrown in the virtual trash.  
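To make the keyword-matching idea concrete, here is a minimal sketch of how an ATS-style screen might score a resume. Commercial ATS products are proprietary, so the scoring rule, the keyword list, and the 60 percent threshold below are all illustrative assumptions, not any real vendor's logic.

```python
# Hypothetical sketch of an ATS-style keyword screen.
# The keywords and threshold are illustrative assumptions.

def keyword_score(resume_text, required_keywords):
    """Return the fraction of required keywords found in the resume."""
    text = resume_text.lower()
    hits = [kw for kw in required_keywords if kw.lower() in text]
    return len(hits) / len(required_keywords)

def passes_screen(resume_text, required_keywords, threshold=0.6):
    """Reject (the 'virtual trash') any resume below the threshold."""
    return keyword_score(resume_text, required_keywords) >= threshold

resume = "Experienced data analyst skilled in SQL, Python, and Tableau."
keywords = ["SQL", "Python", "Tableau", "Spark", "Excel"]
```

In this toy example the resume matches three of five keywords, which illustrates why exact-match screens can discard otherwise qualified candidates who simply phrased their experience differently.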

Another example of the altered landscape of recruitment through AI is data collection and retention. An ATS will not necessarily toss out your application for good: employers can use AI to store applicant files and pull from them for future job opportunities.  

Finally, interviews can now be conducted entirely by chatbots. Instead of speaking to a hiring manager, potential employees may be interviewed by a chatbot.  

How effective is using artificial intelligence for recruiting? 

Over the past decade, various polls and studies have examined the effectiveness of using AI to hire new employees. Employers, in particular, praise its efficacy. 

Consider, for instance, that the vast majority of resumes employers receive are from candidates who are not qualified for the role. On average, it takes an HR manager 23 hours of screening to make a single hire. With 67 percent of HR managers and recruiters saying that AI saves them time, the value of that time savings can’t be overstated.  

Indeed, reviews from the corporate world are overwhelmingly positive. For example, 80 percent of executives believe that AI can improve productivity. In addition, one study showed that using AI-powered recruiting software led to a 75 percent reduction in cost per screen and a 35 percent reduction in employee turnover rates. 

Ethical concerns regarding equity: Is there a hiring bias? 

Despite the measurable improvements artificial intelligence has generated in the hiring process, it also has its drawbacks. Chief among these is a lack of transparency.  

In other words, it is almost as if a veil covers the answers to questions such as, “How does the algorithm determine who is ‘not qualified’?” and “How does the algorithm avoid perpetuating hiring biases?” 

This lack of transparency has sparked ethical concerns amongst legislators, legal experts, and those studying existential risk; namely, worries about whether AI-powered HR programs are exacerbating existing inequities in income and socio-economic status.  

For instance, how many people looking for jobs at the moment are aware that an ATS is processing their resumes? Relatedly, how many of them know that there is software they can use to optimize their resume’s keywords for specific jobs, improving the likelihood of landing an interview?  

Chances are that if you’re from a low-income background (which is highly correlated with poorer educational outcomes and fewer at-hand resources for career advancement), you probably have not heard of such software, even if you are an objectively qualified candidate. That makes it much less likely that you’ll get an interview.  

According to Aaron Rieke, managing director of the tech equity nonprofit Upturn: “What can naturally happen is you build a model that identifies common characteristics of your current workforce, which isn’t diverse. Or might reflect the fact that hiring managers have traditionally given preference to male candidates over women.”  

Rieke says these biases can arise from using AI for recruitment even when they are unintended. For instance, under the Americans with Disabilities Act, employers are prohibited from inquiring into a candidate’s health and personal background. This might include sexual orientation, race, personality traits, drug use, and mental health status.  

However, artificial intelligence used by employers can indirectly pick up these attributes, which may then play a role in the hiring process. Unfortunately, due to the lack of transparency companies provide, it’s difficult to tell just how large that role is.  

And, morally speaking, this is precisely why worries about hiring bias in AI recruiting programs are so pronounced: companies could be violating the Americans with Disabilities Act under the radar. 

And without any oversight of the use of AI for recruiting workers, the depth of these potential biases will remain a mystery. Although job discrimination is unlawful, the lack of transparency companies display around their recruitment AI makes violations difficult to detect.  

In a world where companies can now check what you’ve liked on Facebook to infer your political or sexual orientation, it is unclear what implications such new powers might have for already prevalent hiring biases. 

Being concerned about exacerbated hiring biases is only natural when companies are known to use technology that surfaces unflattering information about candidates while not being transparent about that use.  

Finally, there’s an already existing inequity problem in the tech industry. Most people who work in tech are straight white males. This means that the software they develop could unintentionally have biases programmed into it that unfairly disadvantage other demographic groups.  

How to improve recruiting artificial intelligence to mitigate hiring biases

AI has its benefits for hiring new workers. Not only does it expedite the process, but it also reliably recruits qualified workers and saves companies a lot of money. But the potential this technology has to worsen hiring biases is significant and cannot go unchecked.  

What, then, could a balanced solution look like? How can we retain the positive gains made through AI while mitigating the risk of bias?  

One thing is for sure: going back to human hiring processes will not root out bias. If anything, AI’s potential to root out bias is great enough to justify keeping it in the hiring process. History has simply shown that humans are far too susceptible to biases to be fully trusted with these decisions. 

But how can this potential be fulfilled? According to Eric Sydell, executive vice president of innovation at Modern Hire, the algorithms used for hiring can be programmed to rule out unfair biases, something we cannot do with humans.  

This means, however, that tech companies developing these new technologies need to make it a prime mission to ensure that the most qualified candidates get interviewed while unfair biases are rooted out.  

An example of such an effort comes from HireVue, which uses a process predicated on the U.S. Equal Employment Opportunity Commission’s Uniform Guidelines. Hence, they use a program specifically designed to mitigate unfair hiring biases.  

However, even when companies make these sorts of efforts, clearly more needs to be done. This is evidenced by the complaints filed over HireVue’s use of facial recognition technology in its AI.  

Allegedly, this technology has been used to analyze racial and other demographic differences between job candidates. Unfortunately, the result of these analyses might be the exacerbation of unfair biases. 

Thus, one of the first essential provisions for preventing the worsening of unfair hiring biases might be external auditing. HireVue reports that they audit their own AI programs and are now seeking third-party audits. The question remains, however, as to what these third-party audits need to root out. If that point remains unclear, biases will be inevitable.  

There are specific pieces of information that, when collected by employers, could feed into exacerbating biases. It is these pieces of information that should be excluded from data-collecting AI in the recruitment process. Here are some examples:

  • Race

  • Sexual Orientation

  • Health Status

  • Criminal Record

  • Graduation Dates

  • Salary History
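One way an exclusion list like the one above could be enforced is to strip those fields from applicant records before they ever reach a screening model. The sketch below is a minimal illustration of that idea; the field names and record schema are assumptions for the example, not any real ATS format.

```python
# Hypothetical sketch: redact the fields from the list above before an
# applicant record reaches a screening model. Field names are
# illustrative assumptions, not a real ATS schema.

EXCLUDED_FIELDS = {
    "race", "sexual_orientation", "health_status",
    "criminal_record", "graduation_dates", "salary_history",
}

def redact_applicant(record):
    """Return a copy of the record with excluded fields removed."""
    return {k: v for k, v in record.items() if k not in EXCLUDED_FIELDS}

applicant = {
    "name": "A. Candidate",
    "skills": ["Python", "SQL"],
    "salary_history": [55000, 62000],
    "graduation_dates": ["2014"],
}
```

Note that redaction at the record level does not address the indirect inference problem raised earlier, since a model can still pick up proxies for these attributes, which is why audits are needed on top of exclusion lists.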

In some cases, of course, some of these pieces of information could be relevant and necessary for hiring specific candidates. For instance, some acting roles may require a BIPOC actor or actress.  

But in the vast majority of cases, these pieces of information serve no use in determining an applicant’s skill set. Instead, collecting them (directly or indirectly) only amplifies the likelihood of their nefarious use, so they have no place in the hiring process.  

Thus, what’s crucial here is eliminating human bias in hiring algorithms. Part of this will entail diversifying the tech industry itself. But it also must entail that audits be on the lookout for overly frequent homogeneity in applicant pools.  

If, for instance, interviewees are almost always white men, that would be a time to reconsider the recruitment program you’re using and opt for an algorithm that is more sensitive to mitigating hiring biases.  
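The homogeneity check described above can be sketched as a simple audit metric: compute the share of the largest demographic group among interviewees and raise a flag when it crosses a threshold. The 90 percent threshold and group labels below are illustrative assumptions, not a legal or statistical standard.

```python
# Hypothetical audit sketch: flag an interview pool when a single
# demographic group dominates. Threshold and labels are assumptions.

from collections import Counter

def homogeneity_flag(interviewee_groups, threshold=0.9):
    """Return True if any one group exceeds `threshold` of the pool."""
    counts = Counter(interviewee_groups)
    top_share = max(counts.values()) / len(interviewee_groups)
    return top_share > threshold

pool = ["group_a"] * 19 + ["group_b"]  # 95% one group: flagged
```

A real audit would use a statistically grounded measure (for example, the four-fifths rule used in U.S. adverse-impact analysis), but even a crude check like this makes homogeneous interview pools visible rather than mysterious.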

Thus, while AI is profoundly practical for hiring managers and large companies, we should be wary when using it. If we are not careful, all-too-human biases can be programmed into AI-recruitment programs that unfairly disadvantage specific demographics.  

Making it a point not to collect data that is irrelevant to one’s job performance, like race, gender, or sexual orientation, will be imperative in this process. And if things are done correctly, human bias now has the potential to be largely eliminated from the hiring process –a feat we cannot afford to miss out on.
