A Blog by Jonathan Low

 

Oct 10, 2019

Why Concerns Are Growing About Automated Hiring

Lack of transparency and accountability means lack of input, let alone influence or even control. JL

Ifeoma Ajunwa comments in the New York Times:

Automated hiring can create a closed-loop system. Advertisements created by algorithms encourage certain people to send in their résumés. After automated culling, a few are hired and then subjected to automated evaluation, the results of which are looped back to establish criteria for future job advertisements and selections. This system operates with no transparency or accountability to check the criteria. Human biases can be introduced at any stage, from the design of the hiring algorithm to how results are interpreted. Under federal law, employers have discretion to decide which qualities are a “cultural fit” for their organization.
Algorithms make many important decisions for us, from our creditworthiness to our best romantic prospects to whether we are qualified for a job. Employers are increasingly using them during the hiring process out of the belief that they're both more convenient and less biased than humans. However, as I describe in a new paper, that belief is misguided.
In the past, a job applicant could walk into a clothing store, fill out an application and even hand it straight to the hiring manager. Nowadays, her application must make it through an obstacle course of online hiring algorithms before it might be considered. This is especially true for low-wage and hourly workers.
The situation applies to white-collar jobs too. People applying to be summer interns and first-year analysts at Goldman Sachs have their résumés digitally scanned for keywords that can predict success at the company. And the company has now embraced automated interviewing.
The problem is that automated hiring can create a closed-loop system. Advertisements created by algorithms encourage certain people to send in their résumés. After the résumés have undergone automated culling, a lucky few are hired and then subjected to automated evaluation, the results of which are looped back to establish criteria for future job advertisements and selections. This system operates with no transparency or accountability built in to check that the criteria are fair to all job applicants.
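To see how such a loop narrows the applicant pool, consider a toy sketch (all names, fields and data below are hypothetical, not any vendor's actual system) in which criteria learned from past hires drive both the ad targeting and the résumé culling:

```python
# Toy model of the closed loop described above; all data is made up.

def learn_criteria(past_hires):
    """Derive an 'ideal candidate' keyword profile from everyone hired so far."""
    criteria = set()
    for hire in past_hires:
        criteria |= hire["keywords"]
    return criteria

def run_hiring_cycle(criteria, population):
    # Ads reach only people who resemble prior hires...
    reached = [p for p in population if criteria & p["keywords"]]
    # ...and automated culling then keeps only close matches to the same profile.
    return [p for p in reached if len(criteria & p["keywords"]) >= 2]

population = [
    {"name": "A", "keywords": {"lacrosse", "ivy_league", "finance"}},
    {"name": "B", "keywords": {"finance", "state_school", "veteran"}},
    {"name": "C", "keywords": {"finance", "retail", "caregiver"}},
]
past_hires = [{"name": "H0", "keywords": {"lacrosse", "ivy_league", "finance"}}]

criteria = learn_criteria(past_hires)
new_hires = run_hiring_cycle(criteria, population)
print([h["name"] for h in new_hires])  # ['A'] -- only the candidate who mirrors past hires
# Feeding new_hires back into learn_criteria next cycle reinforces the same profile.
```

Because the output of each cycle becomes the input of the next, nothing inside the loop ever questions whether the original criteria were fair.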
As a result, automated hiring platforms have enabled discrimination against job applicants. In 2017, the Illinois attorney general opened an investigation into several automated hiring platforms after complaints that a résumé-building tool on Jobr effectively excluded older applicants. The platform had a drop-down menu that prevented applicants from listing a college graduation year, or the year of a first job, earlier than 1980.
Similarly, a 2016 class-action lawsuit alleged that Facebook Business tools "enable and encourage discrimination by excluding African-Americans, Latinos and Asian-Americans but not white Americans from receiving advertisements for relevant opportunities." Facebook's former Lookalike Audiences feature allowed employers to show job advertisements only to Facebook users demographically identical to their existing workers, thus replicating racial or gender disparities at their companies. In March, Facebook agreed to make changes to its ad platform to settle the lawsuit.
But this is just the tip of the iceberg. Under federal law, employers have wide discretion to decide which qualities are a “cultural fit” for their organization. This allows companies to choose hiring criteria that could exclude certain groups of people and to hide this bias through automated hiring. For example, choosing “lack of gaps in employment” as a cultural fit could hurt women, who disproportionately take leaves from the workplace to tend to children and ailing family members.
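As a concrete illustration, here is a minimal sketch, with hypothetical field names, of how a "no gaps in employment" rule might be encoded in an automated screen, culling anyone with a career break before a human ever sees the résumé:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Stint:
    """One period of employment on a résumé."""
    start: date
    end: date

def has_employment_gap(stints: list[Stint], max_gap_days: int = 180) -> bool:
    """Flag any break between consecutive jobs longer than the threshold."""
    ordered = sorted(stints, key=lambda s: s.start)
    return any(
        (nxt.start - cur.end).days > max_gap_days
        for cur, nxt in zip(ordered, ordered[1:])
    )

def passes_screen(stints: list[Stint]) -> bool:
    # A candidate who took parental or family-care leave is culled here,
    # regardless of qualifications.
    return not has_employment_gap(stints)

# A well-qualified applicant with a two-year family-care break is rejected:
career = [Stint(date(2010, 1, 4), date(2014, 6, 30)),
          Stint(date(2016, 9, 1), date(2019, 10, 1))]
print(passes_screen(career))  # False
```

The rule is facially neutral, which is exactly what makes the resulting exclusion hard to see from the outside.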
Automated hiring has now evolved past simple résumé parsing and culling. According to one lawsuit, a college student with a near-perfect SAT score and a diagnosis of bipolar disorder found himself rejected over and over for minimum-wage jobs at supermarkets and retail stores that were using a personality test modeled after a test used to diagnose mental illness.
How do we make sure that automated hiring platforms do not worsen employment discrimination?
The first step is to pass laws that let plaintiffs bring suits when they have experienced bias in an automated hiring system. Federal law requires a plaintiff to prove either disparate treatment (that is, “smoking gun” evidence of intentional discrimination) or disparate impact (statistical proof that a group of applicants, for example, racial minorities or white women, were disproportionately rejected for employment). It’s hard for applicants, though, to get either type of proof because employers control the data in hiring platforms.
We should change the law to allow a third method for plaintiffs to bring suit, under the "discrimination per se" doctrine. As I describe in my paper, this new doctrine would shift the burden of proof to the employer.
So when a plaintiff using a hiring platform encounters a problematic design feature, such as a check for gaps in employment, she should be able to bring a lawsuit on the basis of discrimination per se. The employer would then be required to provide statistical proof from internal and external audits to show that its hiring platform is not unlawfully discriminating against certain groups.
We need a federal law that would mandate data retention for all applications on hiring platforms (including applications that were never completed) and that would require employers to conduct internal and external audits to ensure that no group of applicants is disproportionately excluded. The audits would also verify that the criteria being used are actually related to job tasks.
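One concrete check such an audit could run is the "four-fifths rule" from the EEOC's Uniform Guidelines, under which a selection rate for any group below 80 percent of the highest group's rate is generally regarded as evidence of adverse impact. A minimal sketch of that computation, using made-up applicant counts:

```python
# Four-fifths-rule check; the applicant counts below are illustrative only.

def selection_rates(outcomes):
    """outcomes maps group -> (number selected, number who applied)."""
    return {group: hired / applied for group, (hired, applied) in outcomes.items()}

def impact_ratios(outcomes):
    """Each group's selection rate relative to the best-selected group."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

outcomes = {"group_a": (48, 120), "group_b": (12, 60)}  # hypothetical data
for group, ratio in impact_ratios(outcomes).items():
    verdict = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection-rate ratio {ratio:.2f} -> {verdict}")
```

A real audit would go further, testing whether each criterion is job-related and examining abandoned applications as well, but even this simple ratio depends on exactly the retained data the proposed law would mandate.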
This idea has precedent in federal law: Occupational Safety and Health Administration audits are recommended to ensure safe working conditions for employees. Employers that subject their automated hiring platforms to external audits should also receive a certification mark that would favorably distinguish them in the labor market. This type of auditing and certification system recognizes that job applicants should be able to make informed choices about which hiring platforms to trust with their information.
Unions can help to ensure that automated hiring platforms are fair. Through collective bargaining, unions can work with employers to determine what criteria are actually relevant for determining job fit. Unions can also make sure that applicant data retained by automated hiring platforms is protected, and that it is not sold or transferred to workers’ detriment.
To be sure, human decision-making is clouded by bias. But so is automated decision-making, especially given that human biases can be introduced at any stage of the process, from the design of the hiring algorithm to how results are interpreted.
We cannot rely on automated hiring platforms without adequate safeguards to prevent unlawful employment discrimination. We need new laws and mandates to achieve that goal.
