A Blog by Jonathan Low

 

Sep 26, 2019

How AI Goes About Labelling People

Based on data selected by humans. JL

Cade Metz reports in the New York Times:

Facial recognition and other A.I. technologies learn their skills by analyzing vast amounts of digital data. Drawn from old websites and academic projects, this data often contains subtle biases and other flaws that have gone unnoticed for years. “We want to show how layers of bias and racism and misogyny move from one system to the next.” The truth is that A.I. learns from humans and humans are biased creatures. “The way we classify images is a product of our worldview. Any kind of classification system is always going to reflect the values of the person doing the classifying.”
When Tabong Kima checked his Twitter feed early Wednesday morning, the hashtag of the moment was #ImageNetRoulette.
Everyone, it seemed, was uploading selfies to a website where some sort of artificial intelligence analyzed each face and described what it saw. The site, ImageNet Roulette, pegged one man as an “orphan.” Another was a “nonsmoker.” A third, wearing glasses, was a “swot, grind, nerd, wonk, dweeb.”
Across Mr. Kima’s Twitter feed, these labels — some accurate, some strange, some wildly off base — were played for laughs. So he joined in. But Mr. Kima, a 24-year-old African-American, did not like what he saw. When he uploaded his own smiling photo, the site tagged him as a “wrongdoer” and an “offender.”
“I might have a bad sense of humor,” he tweeted, “but I don’t think this is particularly funny.”
As it turned out, his response was just what the site was aiming for. ImageNet Roulette is a digital art project intended to shine a light on the quirky, unsound and offensive behavior that can creep into the artificial-intelligence technologies that are rapidly changing our everyday lives, including the facial recognition services used by internet companies, police departments and other government agencies.
Facial recognition and other A.I. technologies learn their skills by analyzing vast amounts of digital data. Drawn from old websites and academic projects, this data often contains subtle biases and other flaws that have gone unnoticed for years. ImageNet Roulette, designed by the American artist Trevor Paglen and a Microsoft researcher named Kate Crawford, aims to show the depth of this problem.
“We want to show how layers of bias and racism and misogyny move from one system to the next,” Mr. Paglen said in a phone interview from Paris. “The point is to let people see the work that is being done behind the scenes, to see how we are being processed and categorized all the time.”
Unveiled this week as part of an exhibition at the Fondazione Prada museum in Milan, the site focuses attention on a massive database of photos called ImageNet. First compiled more than a decade ago by a group of researchers at Stanford University in Silicon Valley, ImageNet played a vital role in the rise of “deep learning,” the mathematical technique that allows machines to recognize images, including faces.
Packed with over 14 million photos pulled from all over the internet, ImageNet was a way of training A.I. systems and judging their accuracy. By analyzing various kinds of images — such as flowers, dogs and cars — these systems learned to identify them.
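To make that training process concrete, here is a minimal, illustrative sketch in Python using PyTorch and torchvision (tools chosen for illustration only, not those used by the Stanford team); the directory path, folder names and hyperparameters are hypothetical. The point to notice is that the model’s idea of each category comes entirely from the labels humans attached to the images.

```python
# Minimal sketch: training an image classifier from a folder of labeled photos.
# Paths, class folders and hyperparameters are illustrative, not from the article.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Images arranged one folder per label, e.g. photos/cheerleader/*.jpg (hypothetical layout)
transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("photos/", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)  # train from scratch on the labeled data
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:  # the labels come straight from the folder names
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

Whatever names the humans gave those folders, and whichever photos they filed under them, is exactly what the model learns to reproduce.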
What was rarely discussed among those knowledgeable about A.I. was that ImageNet also contained photos of thousands of people, each sorted into categories of their own. These included straightforward tags like “cheerleaders,” “welders” and “Boy Scouts” as well as highly charged labels like “failure, loser, non-starter, unsuccessful person” and “slattern, slut, slovenly woman, trollop.”
By creating a project that applies such labels, whether seemingly innocuous or not, Mr. Paglen and Ms. Crawford are showing how opinion, bias and sometimes offensive points of view can drive the creation of artificial intelligence.
The ImageNet labels were applied by thousands of unknown people, most likely in the United States, hired by the team from Stanford. Working through the crowdsourcing service Amazon Mechanical Turk, they earned pennies for each photo they labeled, churning through hundreds of tags an hour. As they did, biases were baked into the database, though it’s impossible to know whether these biases were held by those doing the labeling.
They defined what a “loser” looked like. And a “slut.” And a “wrongdoer.”
The labels originally came from another sprawling collection of data called WordNet, a kind of conceptual dictionary for machines built by researchers at Princeton University in the 1980s. In adopting those labels, inflammatory ones included, the Stanford researchers may not have realized what they were taking on.
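For readers curious what those WordNet categories look like, the small sketch below uses Python and the NLTK interface to WordNet (an assumption made for illustration; the article does not say how the Stanford team accessed the data) to list a few of the noun categories filed under “person.” Each category groups several synonyms into one label, much like the multi-word tags quoted above.

```python
# Illustrative look at WordNet's noun categories ("synsets"),
# the vocabulary ImageNet reused as labels for people.
# Requires: pip install nltk, then nltk.download("wordnet") once.
from nltk.corpus import wordnet as wn

person = wn.synset("person.n.01")  # the top-level "person" category

# Each sub-category bundles several synonyms into a single label,
# similar in form to the multi-word tags quoted in the article.
for child in person.hyponyms()[:10]:
    print(child.name(), "->", ", ".join(child.lemma_names()))
```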
Artificial intelligence is often trained on vast data sets that even its creators haven’t quite wrapped their heads around. “This is happening all the time at a very large scale — and there are consequences,” said Liz O’Sullivan, who oversaw data labeling at the artificial intelligence start-up Clarifai and is now part of a civil rights and privacy group called the Surveillance Technology Oversight Project that aims to raise awareness of the problems with A.I. systems.
Many of the labels used in the ImageNet data set were extreme. But the same problems can creep into labels that might seem inoffensive. After all, what defines a “man” or a “woman” is open to debate.
“When labeling photos of women or girls, people may not include nonbinary people or women with short hair,” Ms. O’Sullivan said. “Then you end up with an A.I. model that only includes women with long hair.”

In recent months, researchers have shown that face-recognition services from companies like Amazon, Microsoft and IBM can be biased against women and people of color. With this project, Mr. Paglen and Ms. Crawford hoped to bring more attention to the problem — and they did. At one point this week, as the project went viral on services like Twitter, ImageNet Roulette was generating more than 100,000 labels an hour.
“It was a complete surprise to us that it took off in the way that it did,” said Ms. Crawford, who was with Mr. Paglen in Paris. “It let us really see what people think of this and really engage with them.”
For some, it was a joke. But others, like Mr. Kima, got the message. “They do a pretty good job of showing what the problem is — not that I wasn’t aware of the problem before,” he said.
Still, Mr. Paglen and Ms. Crawford believe the problem may be even deeper than people realize.
ImageNet is just one of many data sets that have been widely used and reused by tech giants, start-ups and academic labs as they trained various forms of artificial intelligence. Any flaws in these data sets have already spread far and wide.
Nowadays, many companies and researchers are working to eliminate these flaws. In response to complaints of bias, Microsoft and IBM have updated their face-recognition services. In January, around the time that Mr. Paglen and Ms. Crawford first discussed the strange labels used in ImageNet, Stanford researchers blocked the download of all faces from the data set. They now say they will delete many of the faces.
Their longstanding aim is to “address issues like data set and algorithm fairness, accountability and transparency,” the Stanford team said in a statement shared with The New York Times.
But for Mr. Paglen, a larger issue looms. The fundamental truth is that A.I. learns from humans — and humans are biased creatures. “The way we classify images is a product of our worldview,” he said. “Any kind of classification system is always going to reflect the values of the person doing the classifying.”




