A Blog by Jonathan Low

 

Dec 6, 2013

When Algorithms Grow Accustomed to Your Face

Well, of course computers can recognize our facial expressions. They know what we want to buy, where we want to eat and who our friends are, so why wouldn't they take the next step and read our moods?

The question is no longer what technology and the math that guides it can do, but what society wants to do to manage these powers. If anything.

Many people resent or fear being manipulated by forces they can't see - even as they sign over the rights to their location, contacts and email or texting in return for access to new smartphone apps. Did we really think we were getting a totally sweet deal without any cost? Or did we just figure 'they' would get whatever they wanted anyway, so why not cadge a couple of downscale bargains in the process?

Some good things could come out of this in terms of safety, security, convenience, ease of access, even wildlife preservation. And it will certainly help advertisers and other commercial users better target consumers by identifying the moods or moments when it is optimal to pitch.

But the reality is that we are buying more and more powerful devices as soon as they are made available to us. And to use them optimally we are surrendering hard-won rights and liberties without much angst or forethought. The challenge will come when - or if, depending on your view of human nature - there is a collective sense that too much has been lost. Because those who are benefiting will fight to hold their advantage, and getting these intangibles back is not going to be easy. JL

Anne Eisenberg reports in the New York Times:

Ever since Darwin, scientists have systematically analyzed facial expressions, finding that many of them are universal. People can be trained to note tiny changes in facial muscles, learning to distinguish common expressions by studying photographs and video. Now computers can be programmed to make those distinctions, too.
People often reveal their private emotions in tiny, fleeting facial expressions, visible only to a best friend — or to a skilled poker player. Humans are remarkably consistent in the way their noses wrinkle, say, or their eyebrows move as they experience certain emotions. Now, computer software is using frame-by-frame video analysis to read subtle muscular changes that flash across our faces in milliseconds, signaling emotions like happiness, sadness and disgust.
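To make the frame-by-frame idea concrete, here is a minimal sketch of the general pattern: grab webcam frames, find a face in each one, and hand the face region to a classifier. This is not Affectiva's method; it assumes OpenCV for capture and detection, and `classify_emotion` is a hypothetical placeholder for a trained expression model.

```python
# Minimal sketch of frame-by-frame expression reading.
# Assumes OpenCV (pip install opencv-python). classify_emotion is a
# hypothetical stand-in for a trained expression classifier.
import cv2

def classify_emotion(face_pixels):
    """Placeholder: a real system would run a trained model here."""
    return "neutral", 0.0

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

capture = cv2.VideoCapture(0)  # default webcam
try:
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Each frame is analyzed in memory and then discarded;
        # nothing is written to disk.
        for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
            label, confidence = classify_emotion(gray[y:y + h, x:x + w])
            print(label, confidence)
finally:
    capture.release()
```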
With face-reading software, a computer’s webcam might spot the confused expression of an online student and provide extra tutoring. Or computer-based games with built-in cameras could register how people are reacting to each move in the game and ramp up the pace if they seem bored.
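The game-pacing idea amounts to a simple feedback loop: map an estimated boredom score onto a difficulty adjustment. A toy version, with thresholds and step sizes invented purely for illustration:

```python
def adjust_pace(current_speed, boredom_score,
                threshold=0.6, step=1.25, floor=1.0, ceiling=10.0):
    """Toy feedback rule: speed the game up when the player looks bored,
    ease off otherwise. All constants here are made up for the example."""
    if boredom_score > threshold:
        return min(current_speed * step, ceiling)
    return max(current_speed / step, floor)
```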
But the rapidly developing technology is far from infallible, and it raises many questions about privacy and surveillance.
Companies in this field include Affectiva, based in Waltham, Mass., and Emotient, based in San Diego. Affectiva used webcams over two and a half years to accumulate and classify about 1.5 billion emotional reactions from people who gave permission to be recorded as they watched streaming video, said Rana el-Kaliouby, the company’s co-founder and chief science officer. These recordings served as a database to create the company’s face-reading software, which it will offer to mobile software developers starting in mid-January.
The company strongly believes that people should give their consent to be filmed, and it will approve and control all of the apps that emerge from its algorithms, Dr. Kaliouby said.
Face-reading technology may one day be paired with programs that have complementary ways of recognizing emotion, such as software that analyzes people’s voices, said Paul Saffo, a technology forecaster. If computers reach the point where they can combine facial coding, voice sensing, gesture tracking and gaze tracking, he said, a less stilted way of interacting with machines will ensue.
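The multimodal scenario Mr. Saffo describes is, at its simplest, a score-fusion problem: each modality produces its own estimate, and the system combines them. The weighted late-fusion sketch below is one common textbook approach, not anything attributed to the companies named here; the modality names, scores and weights are invented for the example.

```python
# Illustrative late fusion of per-modality emotion estimates.
# All inputs below are invented for the example.
from collections import defaultdict

def fuse(modality_scores, weights):
    """Combine per-modality emotion probabilities into one estimate."""
    combined = defaultdict(float)
    total = sum(weights[m] for m in modality_scores)
    for modality, scores in modality_scores.items():
        for emotion, p in scores.items():
            combined[emotion] += weights[modality] * p / total
    return max(combined, key=combined.get), dict(combined)

readings = {
    "face":  {"happy": 0.7, "bored": 0.3},
    "voice": {"happy": 0.4, "bored": 0.6},
    "gaze":  {"happy": 0.5, "bored": 0.5},
}
weights = {"face": 0.5, "voice": 0.3, "gaze": 0.2}
print(fuse(readings, weights))  # -> ('happy', {'happy': 0.57, 'bored': 0.43})
```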
For some, this type of technology raises an Orwellian specter. And Affectiva is aware that its face-reading software could stir privacy concerns. But Dr. Kaliouby said that none of the coming apps using its software could record video of people’s faces.
“The software uses its algorithms to read your expressions,” she said, “but it doesn’t store the frames.”
So far, the company’s algorithms have been used mainly to monitor people’s expressions as a way to test ads, movie trailers and television shows in advance. (It is much cheaper to use a program to analyze faces than to hire people who have been trained in face-reading.)
Affectiva’s clients include Unilever, Mars and Coca-Cola. The advertising research agency Millward Brown says it has used Affectiva’s technology to test about 3,000 ads for clients.
Face-reading software is unlikely to infer precise emotions 100 percent of the time, said Tadas Baltrusaitis, a Ph.D. candidate at the University of Cambridge who has written papers on the automatic analysis of facial expressions. The algorithms have improved, but “they are not perfect, and probably never will be,” he said.
Apps that can respond to facial cues may find wide use in education, gaming, medicine and advertising, said Winslow Burleson, an assistant professor of human-computer interaction at Arizona State University. “Once we can package this facial analysis in small devices and connect to the cloud,” he said, “we can provide just-in-time information that will help individuals, moment to moment throughout their lives.”
People with autism, who can have a hard time reading facial expressions, may be among the beneficiaries, Dr. Burleson said. By wearing Google Glass or other Internet-connected goggles with cameras, they could get clues to the reactions of the people with whom they were talking — clues that could come via an earpiece as the program translates facial expressions.
But facial-coding technology raises privacy concerns as more and more of society’s interactions are videotaped, said Ginger McCall, a lawyer and privacy advocate in Washington.
“The unguarded expressions that flit across our faces aren’t always the ones we want other people to readily identify,” Ms. McCall said — for example, during a job interview. “We rely to some extent on the transience of those facial expressions.”
She added: “Private companies are developing this technology now. But you can be sure government agencies, especially in security, are taking an interest, too.”
Ms. McCall cited several government reports, including a National Defense Research Institute report this year that discusses the technology and its possible applications in airport security screening.
She said the programs could be acceptable for some uses, such as dating services, as long as people agreed in advance to have webcams watch and analyze the emotions reflected in their faces. “But without consent,” Ms. McCall said, “they are problematic — and this is a technology that could easily be implemented without a person’s knowledge.”
