A Blog by Jonathan Low

 

Jan 28, 2022

Why Submitting To Facial Recognition Should Not Be Required To Pay US Taxes

The IRS and other US government agencies are increasingly demanding that citizens submit to facial recognition in order to receive services to which they are entitled by law, such as veterans' benefits or Social Security, and even to pay taxes.

None of the agencies has sought citizens' consent, and the inaccuracy problems chronically associated with facial recognition, together with its privacy implications, raise concerns about abuse of this technology. JL

Joy Buolamwini reports in The Atlantic:

The IRS is pushing individuals to submit to facial recognition in exchange for being able to complete basic tax-related activities online. The IRS has retained a private firm that claims to provide “identity proofing, authentication, and verification.” Facial identity verification can deny individuals access to government benefits and cause undue scrutiny, as well as time wasted clearing one’s name of erroneous accusations. In the US last year, people repeatedly complained of facial-recognition technology that failed to recognize them, preventing them from accessing essential government services. ID.me is used by the Department of Veterans Affairs and the Social Security Administration, as well as other federal agencies.

With tax season upon us, the IRS is pushing individuals to submit to facial recognition in exchange for being able to complete a range of basic tax-related activities online. The IRS has retained a private firm—ID.me (formerly known as TroopSwap)—that claims to provide “secure identity proofing, authentication, and group affiliation verification for government and businesses across sectors.” The IRS is not the only government agency working with ID.me. The company claims to serve “27 states, multiple federal agencies, and over 500 name brand retailers.”

This is alarming for several reasons. In a 2022 white paper, the company frames its technology in misleading ways. Specifically, ID.me obfuscates the relationship between two distinct types of facial recognition—one-to-one verification (unlocking a phone with your face) and one-to-many identification (police searching for criminal suspects using security-camera footage and a mug-shot database). In the white paper, ID.me goes out of its way to suggest that it does not do facial recognition per se. It recognizes these two distinct types of facial recognition, and then immediately proceeds to expressly define “facial recognition” as facial identification only. By doing so, ID.me appears to recognize the dangers of facial-recognition technologies—which are widely criticized and have been linked to false arrests—and want to be seen as outside of this technological category. The company’s linguistic move to distance its services from the term facial recognition should raise alarms about its trustworthiness.
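To make the company's own distinction concrete, here is a minimal, hypothetical sketch in Python of how the two modes differ (it is not based on ID.me's system; the embeddings, threshold, and function names are illustrative assumptions). One-to-one verification compares a submitted face embedding against the single template enrolled for the identity a person claims; one-to-many identification searches an entire gallery for any identities that resemble the probe.

import numpy as np

def cosine_similarity(a, b):
    """Similarity between two face embeddings; higher means more alike."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_one_to_one(probe, enrolled, threshold=0.6):
    """One-to-one verification: does this face match the single template
    enrolled for the identity the user claims to be?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify_one_to_many(probe, gallery, threshold=0.6):
    """One-to-many identification: which identities in an entire gallery
    resemble this face? Returns candidate names, best match first."""
    scores = {name: cosine_similarity(probe, emb) for name, emb in gallery.items()}
    return [name for name, s in sorted(scores.items(), key=lambda kv: -kv[1])
            if s >= threshold]

Both modes rest on the same face embeddings and the same similarity threshold; the only difference is whether the probe is compared against one claimed identity or against a database of many, which is why redefining “facial recognition” to cover only the second mode is a linguistic move rather than a technical one.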

ID.me also downplayed concerns raised by university studies. As someone who, as a graduate student at MIT, carried out peer-reviewed research on commercial AI systems that analyze faces, I couldn’t help but notice that ID.me failed to provide a single citation for its claim that “university studies often fail to use precise terminology.” ID.me clearly has a huge incentive—in the form of lucrative taxpayer-funded government contracts—to sweep away any criticism of its technology, and that appears to be just what it is attempting to do.

Though ID.me asserts that “significant benefits” come from the use of one-to-one facial recognition, the company fails to adequately address its known harms or deeply engage with specific findings that indicate substantial racial bias, as documented in a 2019 U.S. Department of Commerce report. According to the department’s National Institute of Standards and Technology, an analysis of one-to-one facial-recognition algorithms had sobering results. The NIST team says it observed “higher rates of false positives for Asian and African American faces relative to images of Caucasians. The differentials often ranged from a factor of 10 to 100 times, depending on the individual algorithm. False positives might present a security concern to the system owner, as they may allow access to impostors.” In other words, the security risk from false positives is that someone else could gain access to your tax information.
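For illustration only (this is not NIST's methodology, and the similarity-score distributions below are invented, not NIST data), a short Python sketch of why false positives are a security problem and why unequal false-positive rates matter: any impostor whose score happens to clear the match threshold is accepted exactly as the genuine account holder would be, and if impostor scores skew higher for one group, that group absorbs more of those false accepts at the same threshold.

import numpy as np

rng = np.random.default_rng(0)

def false_accept_rate(impostor_scores, threshold):
    """Fraction of impostor comparisons the system wrongly accepts."""
    return float(np.mean(np.asarray(impostor_scores) >= threshold))

# Synthetic impostor similarity scores for two hypothetical demographic groups.
group_a_scores = rng.normal(loc=0.30, scale=0.10, size=100_000)
group_b_scores = rng.normal(loc=0.42, scale=0.10, size=100_000)

threshold = 0.60  # the same operating point is applied to everyone
print(false_accept_rate(group_a_scores, threshold))  # rare false accepts
print(false_accept_rate(group_b_scores, threshold))  # many times more false accepts

Each false accept here is someone other than the taxpayer being matched to the taxpayer's enrolled face, which is precisely the scenario the NIST report flags as a security concern for the system owner.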

 

The failures of one-to-one facial identity verification can deny individuals access to government benefits; cause undue scrutiny, as well as time wasted clearing one’s name of erroneous accusations of fraud or other criminal activity; and more. What happened in the United Kingdom in 2019 should be a warning: multiple reports emerged of dark-skinned individuals struggling to access government services because of facial-recognition failures. In the United States last year, people repeatedly complained of facial-recognition technology that failed to recognize them, in some cases preventing them from accessing essential government services.

Biometric identification linked to government services does not pose problems merely for people of color. Consider the story of a Colorado man who spent months unable to access his unemployment benefits, in large part because of ID.me’s technology. The company’s sign-up process is also likely to present additional problems for transgender and gender-nonconforming people, because it requires users to match an image from a government-issued ID to a selfie, and not everyone has access to a driver’s license or passport with photos that reflect their current gender presentation.

Beyond false positives or false negatives is something even more important—the right not to use biometric technology at all, regardless of its accuracy. Government pressure on citizens to share their biometric data with the government affects all of us—no matter your race, gender, or political affiliations. As I shared in my written congressional testimony to the House Oversight Committee in 2019, technologies that analyze human faces have real and dire consequences for people’s lives.

Just a few months ago, in October 2021, the White House Office of Science and Technology Policy (OSTP), whose mission is “to maximize the benefits of science and technology to advance health, prosperity, security, environmental quality, and justice for all Americans,” issued a request for information on public- and private-sector uses of biometric technologies. While the OSTP is soliciting input from leading experts and any interested persons to decide whether and how facial recognition (including one-to-one matching) and other biometric technologies should be used, multiple federal-government agencies, including the IRS, have already moved to adopt ID.me’s products and deploy them on the American public. According to Fast Company, ID.me says that it is now used by the Department of Veterans Affairs and the Social Security Administration, as well as several other federal agencies.

ID.me claims to advance equity and justice, yet it pushes for adoption of its technology before adequate public scrutiny, debate, and oversight have taken place. The company’s CEO also backtracked on claims that ID.me’s technology does not use facial recognition, but only after a leaked internal communication revealed that its engineers had been using one-to-many facial recognition for fraud detection. We should all be concerned about the misrepresentation of biometric technologies sold to and deployed by the government, as they have enormous implications for our civil rights and liberties. The U.S. government is already pushing this technology on citizens, all while the executive branch purports to be conducting a meaningful investigation into how the government should proceed. What’s the point of seeking input about the limitations and harms of this course of action if officials are proceeding to deploy it anyway?

ID.me’s tagline is “Leave no identity behind,” but what may be even more concerning is that the federal government might be leaving behind its mandate to safeguard the civil rights and liberties of its people. Federal and state governments should immediately halt their use of ID.me for access to tax services and unemployment benefits, respectively. No biometric technologies should be adopted by the government to police access to services or benefits, certainly not without cautious consideration of the dangers they pose, due diligence in outside testing, and the consent of those exposed to potential abuse, data exploitation, and other harms that affect us all.
