A Blog by Jonathan Low


Jan 11, 2021

How New York City Proposes Regulating Algorithms Used In Hiring

As software employing algorithms is increasingly used to make hiring decisions, there are growing ethical concerns that job candidates are unaware they are being evaluated by machines, and that they have no right of appeal if the judgments used to hire, fire, promote or compensate them are inaccurate or biased.

Organizational design and psychology professionals are already aware of the challenges of making informed and fair decisions even without algorithmic intervention. That New York is arguably America's preeminent business and legal center gives these concerns added weight globally. JL

Tom Simonite reports in Ars Technica:

Legislation proposed in New York updates hiring discrimination rules for the age of algorithms. The bill would require companies to disclose to candidates when they have been assessed with software. Companies that sell such tools would be required to conduct an annual “bias audit” of their products to check that their tech doesn’t discriminate, and to make the results available. Some say the proposal could allow software that perpetuates discrimination to get rubber-stamped as having passed a fairness audit. But supporters say it will compel disclosure to people that they were evaluated by a machine, and that will get the public into the conversation.

In 1964, the Civil Rights Act barred the humans who made hiring decisions from discriminating on the basis of sex or race. Now, software often contributes to those hiring decisions, helping managers screen résumés or interpret video interviews.

That worries some tech experts and civil rights groups, who cite evidence that algorithms can replicate or magnify biases shown by people. In 2018, Reuters reported that Amazon scrapped a tool that filtered résumés based on past hiring patterns because it discriminated against women.

Legislation proposed in the New York City Council seeks to update hiring discrimination rules for the age of algorithms. The bill would require companies to disclose to candidates when they have been assessed with the help of software. Companies that sell such tools would have to perform annual audits to check that their people-sorting tech doesn’t discriminate.

The proposal is part of a recent movement at all levels of government to place legal constraints on the algorithms and software that shape life-changing decisions, one that may shift into a new gear when Democrats take control of the White House and both houses of Congress.

More than a dozen US cities have banned government use of face recognition, and New York state recently passed a two-year moratorium on the technology’s use in schools. Some federal lawmakers have proposed legislation to regulate face algorithms and automated decision tools used by corporations, including for hiring. In December, 10 senators asked the Equal Employment Opportunity Commission to police bias in AI hiring tools, saying they feared the technology could deepen racial disparities in employment and hurt economic recovery from COVID-19 in marginalized communities. Also last year, a new law took effect in Illinois requiring consent before using video analysis on job candidates; a similar Maryland law restricts use of face analysis technology in hiring.

Lawmakers are more practiced in talking about regulating new algorithms and AI tools than in implementing such rules. Months after San Francisco banned face recognition in 2019, it had to amend the ordinance because it inadvertently made city-owned iPhones illegal.

The New York City proposal, introduced by Democratic council member Laurie Cumbo, would require companies using what are termed automated employment-decision tools to help screen candidates or decide terms such as compensation to disclose their use of the technology. Vendors of such software would be required to conduct a “bias audit” of their products each year and make the results available to customers.
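The bill does not spell out what a "bias audit" must measure. One common starting point in US employment law, not mandated by the bill, is the EEOC's "four-fifths rule," which flags possible adverse impact when one group's selection rate falls below 80 percent of the highest group's rate. A minimal sketch of such a check (the group names and counts below are hypothetical, purely for illustration):

```python
# Hypothetical adverse-impact check based on the EEOC four-fifths rule:
# a group's selection rate should be at least 80% of the best-performing
# group's rate; a lower impact ratio suggests possible disparate impact.

def selection_rates(outcomes):
    """outcomes maps group name -> (selected, applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_audit(outcomes, threshold=0.8):
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {
        g: {
            "rate": r,
            "impact_ratio": r / top,   # relative to the top group's rate
            "flagged": r / top < threshold,
        }
        for g, r in rates.items()
    }

# Illustrative numbers only, not from the article:
audit = four_fifths_audit({"group_a": (50, 100), "group_b": (30, 100)})
print(audit["group_b"]["flagged"])  # group_b ratio = 0.6 < 0.8, so True
```

A real audit would go well beyond this single ratio (statistical significance, intersectional groups, proxy variables), which is part of why critics argue the bill's undefined audit requirement could be too easy to satisfy.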

Strange bedfellows

The proposal faces resistance from some unusual allies, as well as unresolved questions about how it would operate. Eric Ellman, senior vice president for public policy at the Consumer Data Industry Association, which represents credit- and background-checking firms, says the bill could make hiring less fair by placing new burdens on companies that run background checks on behalf of employers. He argues that such checks can help managers overcome a reluctance to hire people from certain demographic groups.

Some civil rights groups and AI experts also oppose the bill—for different reasons. Albert Fox Cahn, founder of the Surveillance Technology Oversight Project, organized a letter from 12 groups including the NAACP and New York University’s AI Now Institute objecting to the proposed law. Cahn wants to regulate hiring tech, but he says the New York proposal could allow software that perpetuates discrimination to get rubber-stamped as having passed a fairness audit.

Cahn wants any law to define the technology covered more broadly, not let vendors decide how to audit their own technology, and allow individuals to sue to enforce the law. “We didn’t see any meaningful form of enforcement against the discrimination we’re concerned about,” he says.

Supporters

Others have concerns but still support the New York proposal. “I hope that the bill will go forward,” says Julia Stoyanovich, director of the Center for Responsible AI at New York University. “I also hope it will be revised.”

Like Cahn, Stoyanovich is concerned that the bill’s auditing requirement is not well defined. She still thinks it’s worth passing, in part because when she organized public meetings on hiring technology at Queens Public Library, many citizens were surprised to learn that automated tools were widely used. “The reason I’m in favor is that it will compel disclosure to people that they were evaluated in part by a machine as well as a human,” Stoyanovich says. “That will help get members of the public into the conversation.”

Two New York–based startups whose hiring tools would be regulated by the new rules say they welcome them. The founders of HiredScore, which tries to highlight promising candidates based on résumés and other data sources, and Pymetrics, which offers online assessments based on cognitive psychology with the help of machine learning, both supported the bill during a virtual hearing of the City Council’s Committee on Technology in November.

Frida Polli, Pymetrics’ CEO and cofounder, markets the company’s technology as providing a fairer signal about candidates than traditional measures like résumés, which she says can disadvantage people from less privileged backgrounds. The company recently had its technology audited for fairness by researchers from Northeastern University. She acknowledges that the bill’s auditing requirement could be tougher but says it’s unclear how to do that in a practical way, and it would be better to get something on the books. “The bill is moderate, but in a powerful way,” she says.

“Like the Wild West out there”

Robert Holden, chair of the City Council’s Committee on Technology, has his own concerns about the cash-strapped city government’s capacity to define how to scrutinize hiring software. He’s also been hearing from envoys from companies whose software would fall under the proposed rules, which have prompted more industry engagement than is usual for City Council business. Some have assured him the industry can be trusted to self-regulate. Holden says what he’s learned so far makes clear that more transparency is needed. “It’s almost like the Wild West out there now,” Holden says. “We really have to provide some transparency.”

Holden says the bill likely faces negotiations and rewrites, as well as possible opposition from the mayor’s office, before it can be scheduled for a final vote by the council. If passed, it would take effect in January 2022.
