Yixuan Xie
Medill Reports
While job applicants hope to be evaluated on their capabilities and skills, hiring decisions can be riddled with biases, ranging from dismissing a candidate simply because of a name to focusing recruiting efforts on elite schools.
With multiple studies revealing discrimination in recruitment, artificial intelligence is being embraced as a way to level the playing field. AI removes human judgment from some parts of resume and video screening, helping to address conscious and unconscious hiring bias. But even as it promises a more consistent and fairer way to evaluate applicants, the technology has the potential to be problematic.
Resume Screening
A 2016 study by Cornell University showed that resumes reveal candidates’ personally identifiable information and may introduce bias into the screening process, especially at the initial stages. It found that candidates with Caucasian-sounding names had 50 percent higher callback rates for interviews than candidates with African American-sounding names. Research published this year by PayScale, a salary trend analysis website, revealed that women face barriers to being hired at tech companies, where they make up just 29 percent of the workforce.
Inspired by the anonymous approach to reviewing resumes, which the Cornell study suggests leads to more equal opportunities for every candidate, Opus AI Inc., a New York-based startup, developed a blind screening system that uses AI to enrich a candidate’s profile with additional collected data.
Employers first prioritize five of their qualification requirements in categories like job skills, soft skills, culture and personality. Opus AI then goes beyond candidates’ resumes, which are not sufficient to decide whether to bring someone in for an interview, said Loren Davie, CEO at Opus AI. Reaching out to candidates through email, Opus AI asks interview-style questions based on the qualifications selected by employers, such as “How many years’ experience do you have in this particular skill?” or “Tell me about a time when you had a conflict with a co-worker and how did you resolve it,” Davie said.
With candidates’ input and their uploaded resumes, Davie said, Opus AI builds profiles and removes names from them by default. It also offers options to remove gender, race, age, email address, phone number and websites before screening. It then generates charts comparing candidates’ qualifications with what employers are asking for before a human makes the final decision, Davie said.
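The redaction step Davie describes can be sketched in a few lines of code. This is a hypothetical illustration only; the field names, defaults and data shapes are assumptions, not Opus AI’s actual system, which is proprietary.

```python
# Hypothetical sketch of a blind-screening redaction step. Field names
# and defaults are assumed for illustration, not Opus AI's real schema.

# Per the article: names are removed by default; other identifying
# fields can optionally be removed before screening.
DEFAULT_REDACTED = {"name"}
OPTIONAL_REDACTED = {"gender", "race", "age", "email", "phone", "websites"}

def redact_profile(profile: dict, extra_fields: set = OPTIONAL_REDACTED) -> dict:
    """Return a copy of the candidate profile with identifying fields removed."""
    hidden = DEFAULT_REDACTED | extra_fields
    return {key: value for key, value in profile.items() if key not in hidden}

profile = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "skills": ["Python", "SQL"],
    "years_experience": 5,
}
blind = redact_profile(profile)
# Only non-identifying fields (skills, experience) remain for comparison
# against the employer's prioritized qualifications.
```

In a scheme like this, the screener never sees the redacted fields, so the comparison charts the article describes would be built only from skill and experience data.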
“If you can just look at candidates based on their capabilities, essentially their fits for a particular position, the assessment is effectively skill-based and can bypass this question of bias,” Davie said.
Video Screening
While Opus AI automates only the profile-building part of the screening process, two other video screening platforms are using AI to decide whether to move a candidate to further rounds.
HireVue Inc., based in South Jordan, Utah, has built AI assessment models that analyze applicants’ recorded answers to interview questions, looking at word choice, frequency of word usage, eye movements, facial cues and other traits to help companies identify who is most likely to be successful at a job, said Kevin Parker, HireVue CEO. When recruiters get a stack of resumes or cover letters, they have to pick the best-fit candidates based on some criteria, and the criteria can be as unfair as interviewing only graduates of Ivy League schools, he said. Companies need a way to fairly, consistently and thoughtfully select the best people from all applicants, he said.
HireVue personalizes interview questions for each job with a team of data scientists, engineers and 13 industrial-organizational psychologists, Parker said. The company works with customers to decide what characteristics to look for and identifies several questions that can get to those issues during an interview.
“In a particular case, the candidate uses the word ‘I’ more than they use the word ‘we’ when answering questions. But if you are working in a team-oriented environment, maybe that’s indicative of somebody who is more individual oriented in terms of their approach to work,” Parker said.
Knockri Inc., an AI video assessment startup in Toronto, uses a similar approach to measure employers’ most desired soft skills, such as empathy, persuasion, growth mindset and collaboration, said Jahanzaib Ansari, co-founder and CEO at Knockri.
“Nobody is watching the video and it’s scored by AI,” Ansari said. “Our AI is trained to not see your name, skin color, race or gender. That being said, we completely de-identified any of those characteristics that could lead to the bias. So, it only looks at facial muscle movements and the content of the response and that’s it.”
Ansari said he has personally experienced discrimination when applying for jobs because his name sounds foreign to some people. Despite having a strong skill set and solid experience, he wouldn’t hear back from employers. To avoid those situations, Knockri created a blind video screening tool that is used even before recruiters review resumes, Ansari said.
Besides the question component, which tests emotion-related capabilities, HireVue has also added game-based challenges such as puzzle and memory games for cognitive assessments, Parker said. Questions and games tend to be mixed, and a candidate may play one or two games within a 20- to 25-minute interview. The assessment is used not only to recommend which candidates customers should follow up with, but also to give applicants a report showing their strengths and areas for potential development, Parker said.
According to a 2018 survey Knockri conducted of more than 5,000 individuals, companies using its AI-powered screening tool have seen 23 percent more diverse candidates being shortlisted, Ansari said. There is also a 62 percent reduction in cost and a 68 percent drop in time when filling positions, he said.
Measuring the interview-to-offer ratio, Parker said that for every 10 candidates who get a final interview, the number who are hired has increased from one to 6.5. Matching what candidates are really good at with what a job requires also helps new hires stay longer at a company, he said.
“We see turnover actually reduced, whether it’s moving from 100 percent to 50 percent or on average candidates have stayed a year longer in the job that they otherwise wouldn’t have,” Parker said.
re:work, a Chicago-based nonprofit organization that creates educational and career opportunities for underserved communities, uses HireVue to identify people with strong communication, negotiation and persuasion skills among the more than 500 people who apply each year to its eight-week training program, which runs every month, said Shelton Banks, CEO at re:work. After the first four weeks, re:work uses HireVue’s AI assessment tool again to measure a person’s growth. Based on the assessments, re:work suggests certain roles and companies for them, Banks said.
Banks said he has seen an increase in candidate quality, with fewer people dropping out of the program and more candidates getting a job.
Concerns About Prejudices in Algorithms
While Mercer’s Global Talent Trends 2019 report, which gathers the views of more than 7,300 business executives, human resource leaders and employees around the world, shows that three in five companies plan to increase automation this year, there are concerns about deep-rooted human prejudices being embedded in algorithms.
For example, Reuters reported in October 2018 that Amazon’s AI-powered recruiting system was trained on resumes submitted to the company over a 10-year period, most of which came from men. Because that historical data came from an environment where hiring was already biased, the system learned a correlation between high-performing candidates and gender and therefore favored men over women. It downgraded resumes that included the word “women’s” and assigned lower scores to graduates of women-only colleges, according to Reuters. As a result, Amazon scrapped the project in 2017.
Ben Eubanks, a human resource analyst and author of “Artificial Intelligence for HR: Use AI to Support and Develop a Successful Workforce,” said he is concerned about algorithms that have a hard time detecting the facial features of people who are not Caucasian males, or that downgrade candidates for looking away from the camera when, in some cultures, looking directly at someone is impolite.
“I think that fear is common when it comes to technology,” Banks said. “We still actually take a second look at those videos to see if AI is really true. In some cases, we still picked up the person despite what HireVue is saying, but overall it is extremely accurate in assessing if a person can do well.”
Ninety-eight percent of re:work’s candidates are minorities, and Banks said he has seen people getting high scores on HireVue regardless of their backgrounds.
HireVue has a broad set of training data that is diverse, and the company always tests the data before deploying an algorithm to make sure that different groups of people are not being adversely impacted, Parker said.
“For example, what we found over time in the interviewing process is that men tend to talk faster than women,” Parker said. “So, when we evaluate speed of vocalization, we have so much data in our algorithms that we can completely avoid those factors as consistently and as fairly as we possibly can.”
Problems With AI
Taking potentially discriminatory variables out of algorithms leads to two problems, said Mireille Hildebrandt, a research professor on “Interfacing Law and Technology” at the Vrije Universiteit Brussel and general co-chair of the ACM FAT* 2020 conference, the renowned computer science conference on fairness, accountability and transparency in machine learning.
“People of a certain age behave differently from people of another age. If you take the age out in the data, that means the system will simply use those other data that correlate with this prohibited data in order to be effective,” Hildebrandt said. “In the end, it will still give you the same outcome.”
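Hildebrandt’s point is that protected attributes leave statistical fingerprints in other variables. A toy sketch with invented numbers can illustrate it: even with the age column deleted, a feature that tracks age closely lets a model recover roughly the same age-linked outcome. The data below is made up purely for illustration.

```python
# Toy illustration (with invented data) of the proxy problem Hildebrandt
# describes: removing a protected attribute does not remove its influence
# when another feature correlates with it.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    std_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    std_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (std_x * std_y)

ages = [22, 30, 41, 55, 63]              # the "prohibited" attribute
years_at_address = [1, 4, 9, 18, 25]     # an innocuous-looking feature

r = pearson(ages, years_at_address)
# r comes out near 1.0: a model denied the age column can lean on the
# proxy feature and still reproduce the same age-linked outcome.
```

This is why, as Hildebrandt says, simply deleting the prohibited column “will still give you the same outcome”: the information survives in correlated features.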
Hildebrandt said people in the field of computer science are working hard to solve this problem, but the lack of transparency in testing is also an issue.
“If the application is behind a trade secret or intellectual property rights, the company can say it has done everything. But if we cannot test it, why ever should we believe this, as it may be a short term competitive advantage to refrain from investing in what is now called ‘fair, accountable and transparent’ AI,” Hildebrandt said.
Hildebrandt stressed the need for more transparency around proprietary software and said companies should be able to show regulators or machine-learning experts their testing processes and prove whether the tool discriminates.