A Blog by Jonathan Low

 

Dec 17, 2019

How AI Will Determine If You Get Your Next Job

It seems especially foolish to use machines and algorithms to assess human candidates in a war-for-talent environment, where efficiently managing volume should be the least of a hiring manager's problems.

And who knows what historical anomalies in bad data sets may do to future organizational productivity. JL

Rebecca Heilweil reports in Vox:

Recruiters are increasingly using AI to make the first round of cuts, helping target who sees what job descriptions. Trained on data collected about previous applicants, these tools cut down on the effort recruiters need to make a hire. Last year, 67% of recruiters said AI saves them time. Critics argue such systems introduce bias, lack accountability and transparency, and aren’t guaranteed to be accurate. If a résumé-screening machine learning tool is trained on résumés collected from a company’s previously hired candidates, the system will inherit the conscious and unconscious preferences of the hiring managers who made those selections.
With parents using artificial intelligence to scan prospective babysitters’ social media and an endless slew of articles explaining how your résumé can “beat the bots,” you might be wondering whether a robot will be offering you your next job.
We’re not there yet, but recruiters are increasingly using AI to make the first round of cuts and to determine whether a job posting is even advertised to you. Often trained on data collected about previous or similar applicants, these tools can cut down on the effort recruiters need to expend in order to make a hire. Last year, 67 percent of hiring managers and recruiters surveyed by LinkedIn said AI was saving them time.
But critics argue that such systems can introduce bias, lack accountability and transparency, and aren’t guaranteed to be accurate. Take, for instance, the Utah-based company HireVue, which sells a job interview video platform that can use artificial intelligence to assess candidates and, it claims, predict their likelihood to succeed in a position. The company says it uses on-staff psychologists to help develop customized assessment algorithms that reflect the ideal traits for a particular role a client (usually a company) hopes to hire for, like a sales representative or computer engineer.
That algorithm is then used to analyze how individual candidates answer preselected questions in a recorded video interview, grading their verbal responses and, in some cases, facial movements. HireVue claims the tool — which is used by about 100 clients, including Hilton and Unilever — is more predictive of job performance than human interviewers conducting the same structured interviews.
But last month, lawyers at the Electronic Privacy Information Center (EPIC), a privacy rights nonprofit, filed a complaint with the Federal Trade Commission, pushing the agency to investigate the company for potential bias, inaccuracy, and lack of transparency. The complaint also accused HireVue of engaging in “deceptive trade practices” because the company claims it doesn’t use facial recognition. (EPIC argues HireVue’s facial analysis qualifies as facial recognition.)
The complaint follows the introduction of the Algorithmic Accountability Act in Congress earlier this year, which would grant the FTC authority to create regulations to check so-called “automated decision systems” for bias. Meanwhile, the Equal Employment Opportunity Commission (EEOC) — the federal agency that deals with employment discrimination — is reportedly now investigating at least two discrimination cases involving job decision algorithms, according to Bloomberg Law.

AI can pop up throughout the recruitment and hiring process

Recruiters can make use of artificial intelligence throughout the hiring process, from advertising and attracting potential applicants to predicting candidates’ job performance. “Just like with the rest of the world’s digital advertisement, AI is helping target who sees what job descriptions [and] who sees what job marketing,” explains Aaron Rieke, a managing director at Upturn, a DC-based nonprofit digital technology research group.
And it’s not just a few outlier companies, like HireVue, that use predictive AI. Vox’s own HR staff use LinkedIn Recruiter, a popular tool that uses artificial intelligence to rank candidates. Similarly, the jobs platform ZipRecruiter uses AI to match candidates with nearby jobs that are potentially good fits, based on the traits the applicants have shared with the platform — like their listed skills, experience, and location — and previous interactions between similar candidates and prospective employers. For instance, because I applied for a few San Francisco-based tutoring gigs on ZipRecruiter last year, I’ve continued to receive emails from the platform advertising similar jobs in the area.
Overall, the company says its AI has been trained on more than 1.5 billion employer-candidate interactions.
Platforms like Arya — which says it’s been used by Home Depot and Dyson — go even further, using machine learning to find candidates based on data that might be available on a company’s internal database, public job boards, social platforms like Facebook and LinkedIn, and other profiles available on the open web, like those on professional membership sites.
Arya claims it’s even able to predict whether an employee is likely to leave their old job and take a new one, based on the data it collects about a candidate, such as their promotions, movement between previous roles and industries, and the predicted fit of a new position, as well as data about the role and industry more broadly.
Another use of AI is to screen application materials, like résumés and assessments, to recommend which candidates recruiters should contact first. Somen Mondal, the CEO and co-founder of one such screening and matching service, Ideal, says these systems do more than automatically search résumés for relevant keywords.
For instance, Ideal can learn to understand and compare experiences across candidates’ résumés and then rank the applicants by how closely they match an opening. “It’s almost like a recruiter Googling a company [listed on an application] and learning about it,” explains Mondal, who says his platform is used to screen 5 million candidates a month.
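Ideal hasn’t published how its matching actually works, but a toy ranker gives a rough sense of screening that goes beyond exact keyword hits. In the sketch below, the candidate text is invented and the TF-IDF approach is an assumption for illustration, not the vendor’s method: résumés are scored against a posting by cosine similarity, so shared vocabulary counts toward the ranking even when exact keywords differ.

    # Toy illustration only -- not Ideal's actual model. Ranks résumés
    # against a job posting by TF-IDF cosine similarity, which rewards
    # shared vocabulary rather than exact keyword hits.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    job_posting = "Senior sales representative, CRM experience, quota-carrying role"
    resumes = {
        "cand_a": "Account executive, exceeded quota, managed Salesforce CRM pipeline",
        "cand_b": "Software engineer, Python, distributed systems",
        "cand_c": "Retail associate, customer service, point-of-sale systems",
    }

    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform([job_posting] + list(resumes.values()))

    # Similarity of each résumé to the posting (row 0), highest first.
    scores = cosine_similarity(matrix[0], matrix[1:]).flatten()
    for name, score in sorted(zip(resumes, scores), key=lambda pair: -pair[1]):
        print(f"{name}: {score:.2f}")

A production system would layer learned representations of employers, titles, and skills on top of anything this simple; the point is only that ranking by textual similarity, rather than keyword presence, is a small step in code.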
But AI doesn’t just operate behind the scenes. If you’ve ever applied for a job and then been engaged in a text conversation, there’s a chance you were talking to a recruitment bot. Chatbots that use natural-language understanding, created by companies like Mya, can help automate the process of reaching out to previous applicants about a new opening at a company, or finding out whether an applicant meets a position’s basic requirements — like availability — thus eliminating the need for human phone-screening interviews. Mya, for instance, can reach out over text and email, as well as through messaging applications like Facebook and WhatsApp.
Another burgeoning use of artificial intelligence in job selection is talent and personality assessments. One company championing this application is Pymetrics, which sells neuroscience computer games for candidates to play (one such game involves hitting the spacebar whenever a red circle, but not a green circle, flashes on the screen).
These games are meant to predict candidates’ “cognitive and personality traits.” Pymetrics says on its website that the system studies “millions of data points” collected from the games to match applicants to jobs judged to be a good fit, based on Pymetrics’ predictive algorithms.

Proponents say AI systems are faster and can consider information human recruiters can’t calculate quickly

These tools help HR departments move more quickly through large pools of applicants and ultimately make it cheaper to hire. Proponents say they can be more fair and more thorough than overworked human recruiters skimming through hundreds of résumés and cover letters.
“Companies just can’t get through the applications. And if they do, they’re spending — on average — three seconds,” Mondal says. “There’s a whole problem with efficiency.” He argues that using an AI system can ensure that every résumé, at the very least, is screened. After all, one job posting might attract thousands of applications, with a huge share from people who are completely unqualified for a role.
Such tools can automatically recognize traits in the application materials of previous successful hires and look for signs of those traits among materials submitted by new applicants. Mondal says systems like Ideal can consider between 16 and 25 factors (or elements) in each application, pointing out that, unlike humans, they can calculate something like commute distance in “milliseconds.”
“You can start to fine-tune the system with not just the people you’ve brought in to interview, or not just the people that you’ve hired, but who ended up doing well in the position. So it’s a complete loop,” Mondal explains. “As a human, it’s very difficult to look at all that data across the lifecycle of an applicant. And [with AI] this is being done in seconds.”
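Neither Ideal nor its competitors publish their factors or weights, but the arithmetic behind a multi-factor score is simple. In the hypothetical sketch below, the three factors, their weights, and the 50 km commute cutoff are all invented for illustration; the point is that a quantity like commute distance, tedious for a human skimming résumés, is trivial for a machine to compute at scale.

    # Hypothetical multi-factor candidate score. Factor names, weights, and
    # cutoffs are invented; a real system would learn them from outcomes.
    from math import radians, sin, cos, asin, sqrt

    def commute_km(lat1, lon1, lat2, lon2):
        # Great-circle (haversine) distance between candidate and office.
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371 * asin(sqrt(a))

    WEIGHTS = {"skills_match": 0.5, "years_experience": 0.3, "commute": 0.2}

    def score(candidate, office=(40.7128, -74.0060)):
        dist = commute_km(*candidate["home"], *office)
        return (
            WEIGHTS["skills_match"] * candidate["skills_match"]      # 0..1 from screening
            + WEIGHTS["years_experience"] * min(candidate["years"] / 10, 1.0)
            + WEIGHTS["commute"] * max(0.0, 1.0 - dist / 50)         # fades to 0 at 50 km
        )

    print(score({"home": (40.73, -73.99), "skills_match": 0.8, "years": 6}))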
These systems typically operate on a scale greater than a human recruiter. For instance, HireVue claims the artificial intelligence used in its video platform evaluates “tens of thousands of factors.” Even if companies are using the same AI-based hiring tool, they’re likely using a system that’s optimized to their own hiring preferences. Plus, an algorithm is likely changing if it’s continuously being trained on new data.
Another service, Humantic, claims it can get a sense of candidates’ psychology based on their résumés, LinkedIn profiles, and other text-based data an applicant might volunteer to submit, by mining and analyzing their use of language (the product is inspired by the field of psycholinguistics). The idea is to eliminate the need for additional personality assessments. “We try to recycle the information that’s already there,” explains Amarpreet Kalkat, the company’s co-founder. He says the service is used by more than 100 companies.
Proponents of these recruiting tools also claim that artificial intelligence can be used to avoid human biases, like an unconscious preference for graduates of a particular university, or a bias against women or a racial minority. (But AI often amplifies bias; more on that later.) They argue that AI can help strip out — or abstract — information related to a candidate’s identity, like their name, age, gender, or school, and more fairly consider applicants.
The idea that AI might clamp down on — or at least do better than — biased humans inspired California lawmakers earlier this year to introduce a bill urging fellow policymakers to explore the use of new technology, including “artificial intelligence and algorithm-based technologies,” to “reduce bias and discrimination in hiring.”

AI tools reflect who builds and trains them

These AI systems are only as good as the data they’re trained on and the humans who build them. If a résumé-screening machine learning tool is trained on historical data, such as résumés collected from a company’s previously hired candidates, the system will inherit both the conscious and unconscious preferences of the hiring managers who made those selections. That approach could help find stellar, highly qualified candidates. But Rieke warns that this method can also pick up “silly patterns that are nonetheless real and prominent in a data set.”
One such résumé-screening tool identified being named Jared and having played lacrosse in high school as the best predictors of job performance, as Quartz reported.
If you’re a former high school lacrosse player named Jared, that particular tool might not sound so bad. But systems can also learn to be racist, sexist, ageist, and biased in other nefarious ways. For instance, Reuters reported last year that Amazon had created a recruitment algorithm that unintentionally tended to favor male applicants over female applicants for certain positions. The system was trained on a decade of résumés submitted to the company, which Reuters reported were mostly from men.
(An Amazon spokesperson told Recode that the system was never used and was abandoned for several reasons, including that the algorithms were primitive and that the models randomly returned unqualified candidates.)
Mondal says there is no way to use these systems without regular, extensive auditing. That’s because, even if you explicitly instruct a machine learning tool not to discriminate against women, it might inadvertently learn to discriminate based on proxies associated with being female, like having graduated from a women’s college.
“You have to have a way to make sure that you aren’t picking people who are grouped in a specific way and that you’re only hiring those types of people,” he says. Ensuring that these systems are not introducing unjust bias means frequently checking that new hires don’t disproportionately represent one demographic group.
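The proxy problem is easy to reproduce with synthetic data. In the sketch below (all effect sizes and correlations are invented), historical hiring favored men at equal skill; a model trained without any gender column still assigns a negative weight to the women’s-college proxy, which is exactly the kind of pattern the audits Mondal describes need to catch.

    # Synthetic demonstration of proxy leakage (all numbers invented).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    gender = rng.integers(0, 2, n)                           # 1 = female
    womens_college = (gender == 1) & (rng.random(n) < 0.4)   # proxy feature
    skill = rng.normal(0, 1, n)

    # Biased historical labels: past hiring favored men at equal skill.
    hired = (skill + 0.8 * (gender == 0) + rng.normal(0, 1, n)) > 0.8

    # Train WITHOUT the gender column -- only skill and the proxy.
    X = np.column_stack([skill, womens_college])
    model = LogisticRegression().fit(X, hired)

    # The proxy picks up a negative weight: the bias survives the column drop.
    print(dict(zip(["skill", "womens_college"], model.coef_[0].round(2))))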
But there’s skepticism that efforts to “de-bias” algorithms and AI are a complete solution. And Upturn’s report on equity and hiring algorithms notes that “[de-biasing] best practices have yet to crystallize [and] [m]any techniques maintain a narrow focus on individual protected characteristics like gender or race, and rarely address intersectional concerns, where multiple protected traits produce compounding disparate effects.”
And if a job is advertised on an online platform like Facebook, it’s possible you won’t even see a posting because of biases produced by that platform’s algorithms. There’s also concern that systems like HireVue’s could inherently be built to discriminate against people with certain disabilities.
Critics are also skeptical of whether these tools do what they say, especially when they make broad claims about a candidate’s “predicted” psychology, emotion, and suitability for a position. Adina Sterling, an organizational behavior professor at Stanford, also notes that, if not designed carefully, an algorithm could drive its preferences toward a single type of candidate. Such a system might miss a more unconventional applicant who could nevertheless excel, like an actor applying for a job in sales.
“Algorithms are good for economies of scale. They’re not good for nuance,” she explains, adding that she doesn’t believe companies are being vigilant enough when studying the recruitment AI tools they use and checking what these systems actually optimize for.

Who regulates these tools?

Employment lawyer Mark Girouard says AI and algorithmic selection systems fall under the Uniform Guidelines on Employee Selection Procedures, guidance established in 1978 by federal agencies that guides companies’ selection standards and employment assessments.
Many of these AI tools say they follow the four-fifths rule, a statistical “rule of thumb” benchmark established under those employee selection guidelines. The rule is used to compare the selection rate of applicant demographic groups and investigate whether selection criteria might have had an adverse impact on a protected minority group.
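As a concrete, invented example of how the check works: compute each group’s selection rate, divide by the highest group’s rate, and flag anything under 0.8.

    # Four-fifths rule check; applicant and selection counts are invented.
    applicants = {"group_a": 200, "group_b": 150}
    selected   = {"group_a": 40,  "group_b": 18}

    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())

    for group, rate in rates.items():
        ratio = rate / top
        verdict = "OK" if ratio >= 0.8 else "possible adverse impact"
        print(f"{group}: rate={rate:.0%}, ratio={ratio:.2f} -> {verdict}")
    # group_b selects 12% vs. group_a's 20%; 0.12 / 0.20 = 0.60 < 0.8 -> flagged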
But experts have noted that the rule is just one test, and Rieke emphasizes that passing the test doesn’t imply these AI tools do what they claim. A system that picked candidates randomly could pass the test, he says. Girouard explains that as long as a tool does not have a disparate impact on race or gender, there’s no law on the federal level that requires that such AI tools work as intended.
In its case against HireVue, EPIC argues that the company has failed to meet established AI transparency guidelines, including artificial intelligence principles outlined by the Organization for Economic Co-operation and Development that have been endorsed by the U.S. and 41 other countries. HireVue told Recode that it follows the standards set by the Uniform Guidelines, as well as guidelines set by other professional organizations. The company also says its systems are trained on a diverse data set and that its tools have helped its clients increase the diversity of their staff.
At the state level, Illinois has made some initial headway in promoting the transparent use of these tools. In January, its Artificial Intelligence Video Interview Act will take effect, requiring employers that use artificial intelligence-based video analysis to notify applicants, explain how the technology works, and obtain their consent.
Still, Rieke says few companies release the methodologies used in their bias audits in “meaningful detail.” He’s not aware of any company that has released the results of an audit conducted by a third party.
Meanwhile, senators have pushed the EEOC to investigate whether biased facial analysis algorithms could violate anti-discrimination laws, and experts have previously warned the agency about the risk of algorithmic bias. But the EEOC has yet to release any specific guidance regarding algorithmic decision-making or artificial intelligence-based tools and did not respond to Recode’s request for comment.
Rieke did highlight one potential upside for applicants. Should lawmakers one day force companies to release the results of their AI hiring selection systems, job candidates could gain new insight into how to improve their applications. But as to whether AI will ever make the final call, Sterling says that’s a long way off.
“Hiring is an extremely social process,” she explains. “Companies don’t want to relinquish it to tech.”
