How Can Facial Recognition Be Banned When It's Already Everywhere?
It is enabling speed and convenience. And the history of modern technology is that those features will prove more convincing than privacy concerns. JL
Rebecca Heilweil reports in Re/code:
Amid the focus on government use of facial recognition, companies are still integrating the technology into a wide range of consumer products. Apple announced that it would be incorporating facial recognition into its accessories and that its Face ID technology would be expanded to support logging into sites. During the Covid-19 pandemic, firms have raced to put forward more contactless biometric tech, such as facial recognition-enabled access control. The convenience that many find in consumer devices with facial recognition stands in contrast to the growing pressure to regulate and even ban the technology.
There’s also pressure from Congress. Reps. Pramila Jayapal and Ayanna Pressley and Sens. Jeff Merkley and Ed Markey have proposed new legislation that would prohibit federal government use of facial recognition and encourage state and local governments to do the same. It’s one of the most sweeping proposals yet to limit the controversial biometric technology in the United States, and it has been hailed by racial justice and privacy advocates.
All of this follows a move by several major technology companies, including IBM, Amazon, and Microsoft, to pause or limit law enforcement’s access to their own facial recognition programs.
“When we think about all of these seemingly innocuous ways that our images are being captured, we have to remember we do not have the laws to protect us,” Mutale Nkonde, a fellow at Harvard Law School’s Berkman Klein Center, told Recode. “And so those images could be used against you.”
The convenience that many find in consumer devices equipped with facial recognition features stands in stark contrast to the growing pressure to regulate and even ban the technology’s use by the government. That’s a sign that officials looking to effectively regulate the tech will have to take into account its range of uses, from facial recognition that unlocks a smartphone to the dystopian-sounding databases operated by law enforcement.
After all, when Recode asked Sen. Jeff Merkley earlier this year what inspired his push to regulate the technology, he pointed out how quickly the Photos app on his iPhone could identify members of his family. He was struck by how easily law enforcement could track people with the technology, but also by how powerful it had already become on his own device.
“You can hit that person, and every picture that you’ve taken with that person in it will show up,” Merkley said at the time. “I’m just going, ‘Wow.’”
Facial recognition is becoming more widespread in consumer devices
One of the most popular uses of facial recognition is verification, which is often used for logging into electronic devices. Rather than typing in a passcode, a front-facing camera on the phone snaps a picture of the user and then deploys facial recognition algorithms to confirm their identity. It’s a convenient (though not completely foolproof) feature made popular when Apple launched Face ID with the iPhone X in 2017. Many other phone companies, including Samsung, LG, and Motorola, now provide facial recognition-based phone unlocking, and the technology is increasingly being used for easier log-ins on gaming consoles, laptops, and apps of all kinds.
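Under the hood, this kind of verification typically reduces to comparing numerical "embeddings" of two face images and checking whether they are close enough. Here is a minimal sketch of that idea; the toy vectors, function names, and threshold are illustrative assumptions, not Apple's actual Face ID pipeline (a real system derives embeddings from face images with a neural network):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(enrolled_embedding, probe_embedding, threshold=0.8):
    """1:1 verification: does the probe face match the enrolled user?"""
    return cosine_similarity(enrolled_embedding, probe_embedding) >= threshold

# Toy embeddings standing in for the output of a face-embedding model
enrolled = [0.9, 0.1, 0.4]
same_person = [0.85, 0.15, 0.42]   # close to the enrolled vector
stranger = [-0.2, 0.9, 0.1]        # far from the enrolled vector

print(verify(enrolled, same_person))  # True
print(verify(enrolled, stranger))     # False
```

The key design point is that verification only ever asks a yes/no question about one enrolled identity, which is why it needs no database of other people.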
But some consumer-focused applications of facial recognition go beyond verification, meaning they’re not just trying to identify their own users but also other people. One early example of this is Facebook’s facial recognition-based photo tagging, which scans through photos users post to the platform in order to suggest certain friends they can tag. Similar technology is also at work in apps like Google Photos and Apple Photos, both of which can automatically identify and tag subjects in a photo.
Apple is actually using the tagging feature in its Photos app to power the new facial recognition feature in HomeKit-enabled security cameras and smart doorbells. Faces that show up in the camera feed can be cross-referenced with the database from the Photos app, so that you’re notified when, for instance, a specific friend is knocking on your door. Google’s Nest cameras and other facial recognition-enabled security systems offer similar features. Face-based identification is also popping up in some smart TVs that can recognize which member of a household is watching and suggest tailored content.
Facial recognition is being used for identification and verification in a growing number of devices, but there will likely be possibilities for the technology that go beyond those two consumer applications. The company HireVue scans faces with artificial intelligence to evaluate job applicants. Some cars, like the Subaru Forester, use biometrics and cameras to track whether drivers are staying focused on the road, and several companies are exploring software that can sense emotion in a face, a feature that could be used to monitor drivers. But that can introduce new bias problems, too.
“In the context of self-driving cars, they want to see if the driver is tired. And the idea is if the driver is tired then the car will take over,” said Nkonde, who also runs the nonprofit AI for the People. “The problem is, we don’t [all] emote in the same way.”
The blurry line between facial recognition for home security and private surveillance for police
Facial recognition systems have three primary ingredients: a source image, a database, and an algorithm that’s trained to match faces across different images. These algorithms can vary widely in their accuracy and, as researchers like MIT’s Joy Buolamwini have documented, have been shown to be disproportionately inaccurate across categories like gender and race. Facial recognition systems also differ in the size of their databases — that is, how many people a system can identify — as well as in the number of cameras or images they have access to.
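In code, those three ingredients show up as a 1:N search: an embedding of the source image is compared against every entry in the database, so both the match threshold and the database's size shape who can be identified. A hedged sketch, with toy names and numbers that stand in for no particular vendor's system:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def identify(probe, database, threshold=0.8):
    """1:N identification: return the best-matching identity in the
    database, or None if no entry clears the threshold."""
    best_name, best_score = None, threshold
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Toy database mapping identities to face embeddings
database = {
    "alice": [0.9, 0.1, 0.4],
    "bob": [-0.2, 0.9, 0.1],
}

print(identify([0.85, 0.15, 0.42], database))  # "alice"
print(identify([0.1, -0.9, 0.3], database))    # None (no match)
```

The loop over `database.items()` is why scale matters: a phone-unlock system effectively has a one-entry database, while a law enforcement system may search millions of entries, multiplying both its reach and its chances of a false match.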
Face ID is an example of a facial recognition technology used for identity verification. The system checks that a user’s face matches up with the face that’s trying to open the device. For Face ID, the details of an individual user’s face have been previously registered on the device. As such, the Apple algorithm is simply answering the question of whether or not the person is the phone’s user. It is not designed to identify a large number of people. Only one user’s biometric information is involved, and importantly, Apple does not send that biometric data to the cloud; it remains on the user’s device.
When more than one person is involved, facial recognition-based identity verification is more complicated. Take Facebook’s facial recognition-based photo tagging, for instance. It scans through a user’s photos to identify their friends, so it’s not just identifying the user, which is Face ID’s only job. It’s trying to spot any of the user’s friends who have opted in to the facial recognition-based tagging feature. Facebook says it doesn’t share people’s facial templates with anyone, but it took years for the company to give users control over the feature. Facebook failed to get users’ permission before implementing the photo-tagging feature back in 2010; this year, the company agreed to pay $550 million to settle a lawsuit over violating users’ privacy. Facebook did not start asking users to opt in until 2019.
The question of consent becomes downright problematic in the context of security camera footage. Google Nest Cams, Apple HomeKit cameras, and other devices can let users create albums of familiar faces so they can get a notification when the camera’s facial recognition technology spots one of those people. According to Apple, the new HomeKit facial recognition feature lets users turn on notifications for when people tagged in their Photos app appear on camera. It also lets them set alerts for people who frequently come to their doorway, like a dog-walker, but not in their photo library app. Apple says the identification all happens locally on the devices.
Google Nest cameras provide a video feed through a smartphone app and, for a small monthly fee, offer facial recognition features. (Smith Collection/Gado/Getty Images)
The new Apple feature is similar to the familiar face detection feature that can be used with Google’s Nest doorbell and security cameras. But use of the feature, which is turned off by default, is somewhat murky. Google warns users that, depending on the laws where they live, they may need to get the consent of those they add notifications for, and some may not be able to use it at all. For instance, Google does not make the feature available in Illinois, where the state’s strict Biometric Information Privacy Act requires explicit permission for the collection of biometric data. (This law was at the center of the recent $550 million Facebook settlement.) Google says its users’ face libraries are “stored in the cloud, where it is encrypted in transit and at rest, and faces aren’t shared beyond their structure.”
Google- and Apple-powered security cameras, then, are explicitly geared toward consumers, and the databases used by their facial recognition algorithms are relatively limited.
The line between consumer tech like this and the potential for powerful police surveillance tools, however, becomes blurred with the security systems made by Ring. Ring, which is owned by Amazon, partners with police departments, and while Ring says its products do not currently use facial recognition technology, multiple reports indicate that the company sought to build facial recognition-based neighborhood watchlists. Ring has also distributed surveys to beta testers to see how they would feel about facial recognition features. The scope of these partnerships is worrisome enough that on Thursday Rep. Raja Krishnamoorthi, who chairs a House Oversight subcommittee, asked for more information about Ring’s potential facial recognition integrations, among other questions about the product’s long-standing problem with racism.
So it seems that as facial recognition systems become more ambitious — as their databases become larger and their algorithms are tasked with more difficult jobs — they become more problematic. Matthew Guariglia, a policy analyst at the Electronic Frontier Foundation, told Recode that facial recognition needs to be evaluated on a “sliding scale of harm.”
When the technology is used in your phone, it spends most of its time in your pocket, not scanning through public spaces. “A Ring camera, on the other hand, isn’t deployed just for the purpose of looking at your face,” Guariglia said. “If facial recognition was enabled, that’d be looking at the faces of every pedestrian who walked by and could be identifying them.”
So it’s hardly a surprise that officials are most aggressively pushing to limit the use of facial recognition technology by law enforcement. Police departments and similar agencies not only have access to a tremendous amount of camera footage but also incredibly large face databases. In fact, the Georgetown Center for Privacy and Technology found in 2016 that more than half of Americans are in a facial recognition database, which can include mug shots or simply profile pictures taken at the DMV.
And recently, the scope of face databases available to police has grown even larger. The controversial startup Clearview AI claims to have mined the web for billions of photos posted online and on social media to create a massive facial recognition database, which it has made available to law enforcement agencies. According to Jake Laperruque, senior counsel at the Project on Government Oversight, this represents a frightening future for facial recognition technology.
“Its effects, when it’s in government’s hands, can be really severe,” Laperruque said. “It can be really severe if it doesn’t work, and you have false IDs that suddenly become a lead that become the basis of a whole case and could cause someone to get stopped or arrested.”
He added, “And it can be really severe if it does work well and if it’s being used to catalog lists of people who are at protests or a political rally.”
Regulating facial recognition will be piecemeal
The Facial Recognition and Biometric Technology Moratorium Act recently introduced on Capitol Hill is sweeping. It would prohibit federal use of not only facial recognition but also other types of biometric technologies, such as voice recognition and gait recognition, until Congress passes another law regulating the technology. The bill follows other proposals to limit government use of the technology, including one that would require a court-issued warrant to use facial recognition and another that would limit biometrics in federally assisted housing. Some local governments, like San Francisco, have also limited their own acquisition of the technology.
But the ubiquitous nature of facial recognition means that regulating the technology will inevitably require piecemeal legislation and attention to detail so that specific use cases don’t get overlooked. San Francisco, for example, had to amend its facial recognition ordinance after it accidentally made police-department-owned iPhones illegal. When Boston passed its recent facial recognition ordinance, it created an exclusion for facial recognition used for logging into personal devices like laptops and phones.
“The mechanisms to regulate are so different,” said Brian Hofer, who helped craft San Francisco’s facial recognition ban, adding that he’s now looking at creating local laws modeled after Illinois’ Biometric Information Privacy Act that focus more on consumers. “The laws are so different it would be probably impossible to write a clean, clearly understood bill regulating both consumer and government.”
A single law regulating facial recognition technology might not be enough. Researchers from the Algorithmic Justice League, an organization that focuses on equitable artificial intelligence, have called for a more comprehensive approach. They argue that the technology should be regulated and controlled by a federal office. In a May proposal, the researchers outlined how the Food and Drug Administration could serve as a model for a new agency that would be able to adapt to a wide range of government, corporate, and private uses of the technology. This could provide a regulatory framework to protect consumers from what they buy, including devices that come with facial recognition.
Meanwhile, the growing ubiquity of facial recognition technology stands to normalize a form of surveillance. As Rochester Institute of Technology professor Evan Selinger argues, “As people adapt to routinely using any facial scanning system and it fades to the background as yet another unremarkable aspect of contemporary digitally mediated life, their desires and beliefs can become reengineered.”
And so, even if there is a ban on law enforcement using facial recognition and it’s effective to a degree, the technology is still becoming a part of everyday life. We’ll eventually have to deal with its consequences.
As a Partner and Co-Founder of Predictiv and PredictivAsia, Jon specializes in management performance and organizational effectiveness for both domestic and international clients. He is an editor and author whose works include Invisible Advantage: How Intangibles are Driving Business Performance. Learn more...