A Blog by Jonathan Low

 

May 8, 2019

What's Causing the Growing Backlash Against Apple's, Amazon's and Microsoft's Facial Recognition Tech?

There is an increasing realization that these systems are not as accurate as advertised and may falsely accuse, deny access to, or discriminate against the innocent individuals they are ostensibly designed to protect. JL

Sigal Samuel reports in Vox:

We’re reaching an inflection point where Apple, Amazon, and Microsoft are being forced to take complaints seriously. Although they’re trying to telegraph that they’re sensitive to the concerns, it may be too late to win trust. Public dissatisfaction has reached such a fever pitch that some jurisdictions, including the city of San Francisco (as well as Washington State, Massachusetts, and California), are now considering all-out bans on facial recognition tech, and the US Senate is considering a bipartisan bill that would regulate commercial use. Critics want to see such companies “get out of the surveillance business altogether.”
A teenager is suing Apple for $1 billion. The lawsuit hinges on the alleged use of an increasingly popular — and controversial — technology: facial recognition. The tech can identify an individual by analyzing their facial features in images, in videos, or in real time.
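To make the pipeline behind that sentence concrete, here is a minimal sketch of how a generic face-recognition comparison works, using the open-source face_recognition library (dlib-based). It is not the system Apple, Amazon, or any other vendor in this story uses; the file names and tolerance value are illustrative assumptions.

```python
# Minimal sketch of a generic face-recognition comparison. This is NOT the
# system used by Apple, Amazon, or any vendor named in the article; the file
# names and threshold below are hypothetical.
import face_recognition

# 1. Load a reference photo and a probe image (e.g., a security-camera frame).
known_image = face_recognition.load_image_file("enrolled_person.jpg")        # hypothetical file
probe_image = face_recognition.load_image_file("security_camera_frame.jpg")  # hypothetical file

# 2. Convert each detected face into a 128-dimensional embedding.
known_encodings = face_recognition.face_encodings(known_image)
probe_encodings = face_recognition.face_encodings(probe_image)

if known_encodings and probe_encodings:
    # 3. Compare embeddings: smaller distance means more similar faces.
    distance = face_recognition.face_distance(known_encodings, probe_encodings[0])[0]

    # 4. Declare a "match" only below a tolerance threshold. Where that line
    #    is drawn is what drives the false-match risk discussed in this piece.
    TOLERANCE = 0.6  # the library's default; an assumption, not any vendor's setting
    print(f"distance={distance:.3f}, match={distance <= TOLERANCE}")
else:
    print("No face found in one of the images.")
```

The key point for the rest of the article is step 4: a "match" is just a distance falling under a chosen threshold, so every deployment trades off missed matches against false accusations.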
The plaintiff, 18-year-old college student Ousmane Bah, claims the company’s facial recognition tool led to him being arrested for Apple Store thefts he didn’t commit, by mistakenly linking his name with the face of the real thief. NYPD officers came to Bah’s home last autumn to arrest him at 4 in the morning, only to discover that they apparently had the wrong guy. Bah says the whole ordeal caused him serious emotional distress. Meanwhile, Apple insists its stores do not use facial recognition tech.
Whatever the truth turns out to be in this case, Bah’s lawsuit is the latest sign of an escalating backlash against facial recognition. As the tech gets implemented in more and more domains, it has increasingly sparked controversy. Take, for example, the black tenants in Brooklyn who recently objected to their landlord’s plan to install the tech in their rent-stabilized building. Or the traveler who complained via Twitter that JetBlue had checked her into her flight using facial recognition without her consent. (The airline explained that it had used Department of Homeland Security data to do that, and apologized.)
That’s in addition to the researchers, advocates, and thousands of members of the public who have been voicing concerns about the risk of facial recognition leading to wrongful arrests. They worry that certain groups will be disproportionately affected. Facial recognition tech is pretty good at identifying white male faces, because those are the sorts of faces it’s been trained on. But too often, it misidentifies people of color and women. That bias could lead to them being disproportionately held for questioning as more law enforcement agencies put the tech to use.
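The disproportionate-impact worry above is essentially an arithmetic claim: if the system's error rate is higher for some groups, those groups absorb more false flags even with identical exposure. The back-of-the-envelope sketch below uses invented numbers purely for illustration; they are not measurements of any real system.

```python
# Purely hypothetical illustration of unequal false-match rates. The rates and
# volumes are invented for illustration, not measured from any real system.
daily_scans = {"group_a": 10_000, "group_b": 10_000}      # identical exposure (hypothetical)
false_match_rate = {"group_a": 0.001, "group_b": 0.01}    # unequal accuracy (hypothetical)

for group, scans in daily_scans.items():
    wrongly_flagged = scans * false_match_rate[group]
    print(f"{group}: ~{wrongly_flagged:.0f} people wrongly flagged per day")

# With these made-up numbers, the group the model handles worse absorbs ten
# times as many false flags -- the disproportionality critics describe.
```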
Now, we’re reaching an inflection point where major companies — not only Apple, but also Amazon and Microsoft — are being forced to take such complaints seriously. And although they’re finally trying to telegraph that they’re sensitive to the concerns, it may be too late to win back trust. Public dissatisfaction has reached such a fever pitch that some, including the city of San Francisco, are now considering all-out bans on facial recognition tech.

A vote to ban Amazon from selling Rekognition

Amazon is not exactly known for playing nice. It’s got a reputation for fighting proposed laws it doesn’t like and for aggressively defending its work with the police and government. For years, it could afford to behave that way. Yet the uproar over facial recognition is making that posture harder to sustain.
Amazon’s facial recognition tool, Rekognition, has already been sold to law enforcement and pitched to Immigration and Customs Enforcement. But leading AI researchers recently argued in an open letter that the tech is deeply flawed. (Case in point: In a test last year, Rekognition matched 28 members of Congress to criminal mug shots.) And Amazon shareholders have been clamoring for a vote on whether the company should stop selling the tool to government agencies until it passes an independent review of its civil rights impact.
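For readers curious what "matching faces against mug shots" looks like in practice, here is a hedged sketch of a face search through Rekognition's public API via boto3. The collection name, image path, region, and threshold are illustrative assumptions, not details of the ACLU's reported test or any agency's deployment; the call also assumes valid AWS credentials and a pre-indexed face collection.

```python
# Hedged sketch of a Rekognition face search via boto3. Collection name,
# image path, region, and threshold are illustrative assumptions only.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")  # region is an assumption

with open("probe_photo.jpg", "rb") as f:  # hypothetical probe image
    probe_bytes = f.read()

response = rekognition.search_faces_by_image(
    CollectionId="mugshot-collection",  # hypothetical, previously indexed collection
    Image={"Bytes": probe_bytes},
    MaxFaces=5,
    # Similarity cutoff below which candidates are discarded. Lower values
    # return more (and weaker) matches -- the kind of setting critics point to
    # when they warn about false matches in law enforcement use.
    FaceMatchThreshold=80,
)

for match in response["FaceMatches"]:
    face = match["Face"]
    print(f"candidate={face.get('ExternalImageId')}  similarity={match['Similarity']:.1f}%")
```

The threshold parameter is worth noticing: how high it is set, and who sets it, largely determines how many innocent people a search of this kind sweeps up.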
Amazon fought hard to keep the vote from happening, but the Securities and Exchange Commission ruled this month that the company has to let it go ahead. It’ll take place on May 22. Although the result will be largely symbolic — shareholder resolutions aren’t binding — the vote stands to bring negative attention to Amazon.
In the meantime, the company has been trying to soften its image by, for example, dialing back some of its aggressive promotional tactics. Kartik Hosanagar, a professor at the University of Pennsylvania’s Wharton School, said Amazon is taking “preemptive action” to make nice “before one of the [presidential] candidates makes Amazon the poster child of what they refer to as the problems with Big Tech.”

Microsoft’s mixed messages on the ethical use of facial recognition

Whereas Amazon is still a young company, founded in 1994, Microsoft has reached adulthood — it’s been around since 1975. That longer life span means it’s had more time to make mistakes, but also more time to learn from them. Kim Hart at Axios argues that’s why Microsoft has, for the most part, managed to avoid the backlash against big tech. “Microsoft, which trudged through its own antitrust battle with the Justice Department in the ’90s, has sidestepped the mistakes made by its younger, brasher Big Tech brethren,” she writes.
Yet Microsoft is by no means completely immune to the backlash. After it was reported this month that Microsoft researchers had produced three papers on AI and facial recognition with a military-run university in China, some US politicians lambasted the company for helping a regime that’s detaining a million Uighur Muslims in internment camps and surveilling millions more. Sen. Marco Rubio called it “deeply disturbing ... an act that makes them complicit in aiding the Communist Chinese government’s totalitarian censorship apparatus.”
The press also pointed out that the company has sold its facial recognition software to a US prison, and that Microsoft president Brad Smith has said it would be “cruel” to altogether stop selling the software to government agencies.
Days later, the company took pains to show that it does care about the ethical use of AI. Smith announced that it had refused to sell its facial recognition software to a California law enforcement agency that wanted to install it in officers’ cars and body cams. He said Microsoft refused on human rights grounds, knowing use of the software would cause people of color and women to be disproportionately held for questioning.

The push to ban facial recognition tech outright

This month has made clear that public pressure is working when it comes to facial recognition. Behemoth companies know they can no longer ignore the criticisms — or, as they recently did, simply say they’d welcome regulation of this technology. Critics are making clear that’s not good enough — they want to see such companies “get out of the surveillance business altogether,” as the American Civil Liberties Union told Vox.
Meanwhile, several bills are being considered to limit the use of facial recognition. San Francisco could soon become the first US city to institute an all-out ban on local government use of the tech, if its Stop Secret Surveillance Ordinance passes. Neighboring cities like Oakland and Berkeley have already passed similar but slightly weaker ordinances. (Legislation along these lines was also introduced in the California state Senate, but was quashed after police opposed it.)
Washington state and Massachusetts are weighing bans, too. And the US Senate is considering a bipartisan bill that would regulate the commercial use of facial recognition software.
Still, facial recognition tech is being put to use at an astounding rate, at both the national and city levels. In the past month, reports have emerged about how US intelligence wants to train AI systems on video footage of pedestrians who are unaware they’re being filmed, and about how the Metropolitan Transportation Authority is trying to use facial recognition to detect criminals and terrorists driving across a New York bridge, even though it failed spectacularly in tests.
Even though some states and cities are pushing back, their momentum hasn’t yet overtaken the rapid, nationwide embrace of this technology. But if this month is any indication, that could soon change.
