A Blog by Jonathan Low

 

May 18, 2020

Post-Covid Job Applicants May Be Subject To AI Background Checks

Due to record numbers of layoffs and furloughs, and what may prove to be a record number of applicants for new jobs, companies are turning to AI to make the hiring process more productive while reducing costs.

Applicants have to hope the systems are accurate and their data (somewhat) safe. JL

Rebecca Heilweil reports in Recode:

Companies (are) automating aspects of the hiring process and cutting down on costs. Some are using artificial intelligence to scan through resumes, analyze facial expressions during video job interviews, compare criminal records, and even judge applicants’ social media behavior. If you can see information about you online, a future employer — or AI — can see it, too. And in a pandemic, where the companies still hiring are already seeing a surge in applications and eager to streamline the recruiting process, technology that makes hiring quicker and easier sounds appealing.
Unemployment has reached its highest levels since the Great Depression, but companies like Postmates and Uber have continued to hire new workers during the pandemic. If you’re interested in this kind of gig, however, there’s a good chance you’ll need to pass an AI-powered background check from a company like Checkr. This might not be as easy as it sounds.
Checkr is at the forefront of a new and potentially problematic kind of hiring, one powered by still-emerging technology. Those hoping to quickly pick up extra work complain that Checkr and others using AI to do background checks aren’t addressing errors in their criminal record reports. In these cases, a glitch in the system can cost someone a job.
But this isn’t exactly a new problem. In recent years, Checkr has faced a slew of lawsuits for making mistakes that have cost people much-desired opportunities to work, according to legal records. One complaint from a man hoping to drive for Uber alleged that he was wrongly linked to a murder conviction that actually belonged to someone with a similar name. Another person hoping to work for the ride-share giant complained that he was erroneously reported to have committed several misdemeanors — including the possession of a controlled substance — crimes that belonged to another person with the same name.
Checkr is one of many companies automating aspects of the hiring process and cutting down on costs. Some of these companies are using artificial intelligence to scan through resumes, analyze facial expressions during video job interviews, compare criminal records, and even judge applicants’ social media behavior. And in a pandemic, where the companies still hiring are likely already seeing a surge in applications and are eager to find ways to streamline the recruiting process, technology that makes hiring quicker and easier sounds appealing.
But experts have expressed skepticism about the role that AI can actually play in hiring. The technology doesn’t always work and can exacerbate bias and privacy problems. Inevitably, it also raises bigger questions of how powerful AI should become.

AI can help companies study your criminal record

When you’re being considered for a job, background check companies typically use personal information you provide to learn more about your criminal record and to verify your identity. That can involve collating all types of data, including but not limited to information from sex offender registries, global watch lists, state criminal records databases, and the Public Access to Court Electronic Records (PACER) system. Sometimes, a background check provider will need to consult a courthouse to search for more records, a process that might not be possible right now due to pandemic-related closures.
In recent years, Checkr has pioneered the use of artificial intelligence to speed up the process of analyzing these records, though other startups, like UK-based Onfido and Israel-based Intelligo, have worked or are working on similar systems. Meanwhile, more traditional background check companies are also making use of AI. GoodHire, for instance, has used machine learning to verify the identity of people completing an online background check.
Checkr has become a favorite of gig economy firms, including Uber, Instacart, Shipt, Postmates, and Lyft. On its website, Checkr argues that AI can ultimately drive down the cost of bringing on a new hire by helping process background checks in two ways. First, the technology helps verify that a given criminal record belongs to the person whose background is being checked. Second, the AI helps reconcile criminal charges that go by different names in different jurisdictions. What might be reported as “petty theft” in one locale could be reported as “petit larceny” somewhere else.
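To make those two steps concrete, here is a minimal sketch in Python of what name matching and charge normalization could look like. Everything in it (the similarity measure, the threshold, the alias table, the function names) is a hypothetical stand-in for illustration, not a description of Checkr’s actual system.

# Toy sketch of the two matching steps described above. The threshold and
# alias table are invented; real systems also compare dates of birth,
# addresses, and other identifiers, not names alone.
from difflib import SequenceMatcher

NAME_MATCH_THRESHOLD = 0.85  # hypothetical cutoff for "same person"

# Step 2 data: locale-specific charge names mapped to one canonical label.
CHARGE_ALIASES = {
    "petty theft": "THEFT_MINOR",
    "petit larceny": "THEFT_MINOR",
}

def is_probable_match(applicant_name, record_name):
    """Step 1: decide whether a court record plausibly belongs to the applicant."""
    similarity = SequenceMatcher(None, applicant_name.lower(), record_name.lower()).ratio()
    return similarity >= NAME_MATCH_THRESHOLD

def normalize_charge(charge):
    """Step 2: map a locale-specific charge name onto a canonical label."""
    return CHARGE_ALIASES.get(charge.lower(), "UNKNOWN")

# Two different people with similar names clear the threshold, which is
# exactly the failure mode alleged in the lawsuits described earlier.
print(is_probable_match("John A. Smith", "John B. Smith"))  # True
print(normalize_charge("Petit Larceny"))                    # THEFT_MINOR

A purely string-based match like this shows why common names are risky: the more people who share a name, the more likely someone else’s record clears the threshold.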
But as lawsuits against Checkr suggest, these services can make mistakes, even with the use of AI. Many of these complaints allege that the company matched people to criminal records belonging to others with the same or similar names.


“The threshold question is, did we even match the right person,” explains Aaron Rieke, the managing director of the digital rights group Upturn. “If you have a common name, that’s a non-trivial thing to do, and the last 20 years are rife with database matching problems just at that very basic level.”
Other complaints against Checkr say out-of-context or outdated records also end up getting included in its reports.
Checkr did not comment on the lawsuits specifically, but Kristen Faris, the company’s industry strategy vice president, told Recode that humans are involved in both the review and quality assurance processes to ensure the accuracy of the reports.
“In the traditional world, where you’re using offshore labor to apply this criteria, you have a much lower accuracy rate just because of the manual processes involved,” Faris said.

AI can also screen your social media and anything else that’s public on the web

Most background checks tend to focus on criminal records, but some services have started to include information about a person that’s available online, including their social media presence. Some managers already look up the social media activity of prospective hires, but companies like Good Egg sell social media background checks, while others like Intelligo can use AI to screen these platforms.
“I think when people use their real name in public fora online, the reality is that that information could be sucked into a background check process,” Rieke, the digital rights advocate, said.
This is what happened to Kai Moore earlier this year, when their employer switched payroll systems and required employee background checks to be run again. Moore expected a review of what’s typically included in this process: information about their criminal records and confirmation of their identity. But what they didn’t expect was a 300-page report from Fama Technologies on their social media history, which included documentation of their tweets, retweets, and “Likes.”
Even more worrisome was how their online activity was graded. A post Moore had “Liked” with the phrase “Big Dick Energy” was flagged for “sexism” and “bigotry.” Tweets they’d favorited about alcohol were flagged as “bad,” while one mentioning “hell” in discussing LGBTQ identity and religion was flagged for “language.”
Moore’s employer assured them that their job was not at risk, but it also noted that Fama’s algorithms had ultimately deemed them a “consider,” rather than an outright “clear,” for the position in which Moore had already been working. And to Moore, this signified the absurdity and inaccuracy of artificial intelligence.
“I think it’s really dangerous to give these kinds of algorithms so much authority,” Moore told Recode. “It’s such a terrible algorithm. It’s a keyword search.”
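Moore’s “keyword search” description points at a simple failure mode. A toy flagger along those lines behaves much like the report described above; the terms and categories below are invented for illustration and are in no way Fama’s actual rule set.

# Toy keyword flagger illustrating the failure mode Moore describes.
# The flag list and categories are hypothetical examples only.
FLAG_TERMS = {
    "dick": ["sexism", "bigotry"],
    "hell": ["language"],
    "beer": ["bad"],
}

def flag_post(text):
    """Return every category whose keyword appears anywhere in the text.
    Nothing here models context, quotation, or intent, so a "Liked" joke
    and a serious discussion of religion get flagged the same way."""
    lowered = text.lower()
    flags = []
    for term, categories in FLAG_TERMS.items():
        if term in lowered:
            flags.extend(categories)
    return flags

print(flag_post("Big Dick Energy"))                   # ['sexism', 'bigotry']
print(flag_post("a thread about hell and religion"))  # ['language']

Because the matching is blind substring search, the flagger cannot tell a slur from a pop-culture phrase or a theological discussion, which is how innocuous posts end up labeled as risks.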
Fama founder and CEO Ben Mones told Recode that the company can identify problematic behavior, like sexism and bullying, as well as the risk that someone might commit insider trading or intellectual property theft. Fama primarily analyzed Twitter activity in Moore’s case, but the technology can also pick up information about applicants from news sites and other webpages. Since Moore’s report was generated, Fama has stopped labeling these posts as “good” or “bad.” Now, it simply flags content, leaving employers to make their own judgments.
Fama isn’t the only company that’s attempted such a business model. Other companies are looking for ways to report what prospective applicants share online, a process that some background check companies say they can expedite with the help of AI. Faris, the Checkr VP, said her company has talked about offering social media screenings but has yet to see significant demand from its existing customer base.
It’s also unclear if social media companies themselves will tolerate this use of their data. Predictim, a company that used AI to score potential babysitters based on their social media, attracted enough negative attention back in 2018 that it was ultimately blocked by Facebook, Instagram, and Twitter. Predictim’s website is no longer active.
Fama, for one, has found a way around some social media companies’ policies. Twitter told Recode that it suspended Fama’s API access around the same time Moore’s story about their background check went viral. Twitter said that its API policies ban the use of the platform for “background checks or any form of extreme vetting,” as well as “surveillance.” Fama maintains that it still has some form of access to Twitter data for its services.

The use of artificial intelligence doesn’t mean your rights are forfeited

Background check companies are generally considered credit reporting agencies, and there are state and federal laws that regulate how these agencies operate. Chief among them is the Fair Credit Reporting Act, a consumer protection law passed in 1970 and enforced by the Federal Trade Commission. Just last month, the agency shared best practices for working with artificial intelligence and algorithms.
Currently, the Fair Credit Reporting Act requires potential employers to inform a person and obtain their permission before running a background check. If the employer thinks the results of the background check will factor into rejecting an applicant for a job, it has to let the applicant know and give them a chance to contest any information in the report. If that happens, the credit reporting agency enlisted by the employer has to reinvestigate its findings. There is no guarantee, however, that any corrections will be made in time for a person to remain in consideration for a particular position.
Background check companies can make significant errors, and those errors can impact whether or not someone is ultimately offered a job. According to Ariel Nelson, an attorney at the National Consumer Law Center, these firms do have a legal obligation to have “reasonable procedures to assure maximum possible accuracy of the information.” There’s still a pervasive problem of mistakes being included in background checks, Nelson explained, even when AI is not involved.
So when you find yourself applying for a new job, consider that your application could be subject to an AI-powered background check, especially if you’re looking for work in the gig economy. You do have control over some aspects of this process. You can make your social media accounts private or delete your data from these platforms entirely. Basically, if you can see information about you online while you’re not logged into a platform, a future employer — or a hired AI — can probably see it, too.
