A Blog by Jonathan Low

 

Feb 23, 2020

What Does It Mean To Be Human In the Age of AI?

It may be that just as the difference between being offline and online has largely disappeared, so the difference between the 'reality' created by AI and what used to be called 'the real thing' may no longer matter as much. JL

Leslie Katz comments in CNET:

"Technology is changing our world, with AI both a new frontier of possibility but also a development fraught with anxiety." The conversation about how AI operates in the world needs to move from a polarizing one that pitches technophobes and technophiles against each other to one that lets us take a step back and look at how AI operates in the world today. Where are the opportunities? Where are the current problems? "What about algorithmic mistakes, faulty logic? Predictive behavior and algorithms can construct and alter real-life behavior."
More than 3,000 black-and-white mugshots stare out from a wall-size canvas. They are faces of people who've been accused of crimes, and in some cases, incarcerated. They are also the faces of people whose likenesses were used, without their consent, to train facial recognition software before social media became a primary source of visual data for algorithm training. 
This is artist Trevor Paglen's haunting installation "They Took the Faces From the Accused and the Dead," on display at San Francisco's de Young Museum starting Saturday. It's part of a provocative new exhibit that explores, through the lens of international artists, the ever-expanding space where humans and artificial intelligence meet. 
The exhibit's title, "Uncanny Valley: Being Human in the Age of AI," suggests viewers might be in for some revenge-seeking Westworld-style robots, but the only bot on display is social robot head Bina48, chatting on video with artist Stephanie Dinkins in an exploration of the human-robot divide. Like Paglen's piece, most other works focus on the invisible forms of AI, like algorithmic data mining and machine learning, that are reshaping our reality. 



Artist Stephanie Dinkins chats with robot head Bina48 about racism, faith, loneliness and other heady topics in a video installation at the de Young Museum in San Francisco. 
Fine Arts Museums of San Francisco

If it's hard to picture the data economy made into a compelling visual experience, think an AI-generated Taylor Swift and a CGI lizard that spouts poetry generated by a neural network trained on recordings of Doors frontman Jim Morrison. 



Simon Denny brought to life an unrealized Amazon patent for a cage to transport workers. 
Fine Arts Museums of San Francisco

There's a spiky red digital serpentine creature named Bob who morphs in appearance, behavior and personality according to his online interactions with visitors, like a Tamagotchi digital pet of yore. And an artist's rendition of a transport system based on an unrealized Amazon patent for a cage that could ferry workers atop a robotic trolley. 
Heady stuff, for sure. But it's intriguing to see the conversation about AI's promises and pitfalls extend past academia into psychedelic video projections and interactive avatars. In an era when machines are becoming increasingly effective at mimicking human behavior and understanding, the de Young says the exhibit is the first in the US to explore through art the impact of AI on the human experience. Art, of course, is one of many arenas where artificial intelligence is becoming a frequent collaborator.
"Technology is changing our world, with artificial intelligence both a new frontier of possibility but also a development fraught with anxiety," says Thomas P. Campbell, director and CEO of the Fine Arts Museums of San Francisco. 



An AI-generated Taylor Swift appears in Christopher Kulendran Thomas' video Being Human, created in collaboration with Annika Kuhlmann. It poses questions about what it means to be human and authentic at a time when machines are becoming better and better at synthesizing human intellect and understanding. 
Fine Arts Museums of San Francisco

Paglen's giant grid of faces, culled from the American National Standards Institute's archives, is eerie. Making it even eerier is an aesthetic the artist says intentionally evokes 19th-century experiments like one by a professor who believed physical appearance could reveal criminal tendencies. Could the photos we share online every day be used to create algorithms that lead to profiling and put people in danger?
In a nearby room, a short film by Lynn Hershman Leeson touches on related questions. In it, actor Tessa Thompson (star of the futuristic Westworld) describes PredPol software, which uses analytics based on current and historical crime data to help law enforcement predict the likely times and locations of future crimes. The company says the software has dramatically reduced crime, but it has also raised concerns about bias.



Westworld star Tessa Thompson appears inside the same kind of red digital square PredPol predictive policing software puts on maps to show where crimes are likely to take place.   
Video screenshot by Leslie Katz/CNET

"What about algorithmic mistakes, faulty logic?" Thompson asks in a foreboding voice, looking directly into the camera. "Predictive behavior and algorithms can actually construct and alter real-life behavior." 
Pictured inside a red digital square like the one PredPol puts on maps to indicate a likely crime zone, Thompson warns about complacency when it comes to data collection and online privacy. "The red square puts us inside of a coded prison," she says. "The Red Square has also been a place of revolution. We decide which we will become: prisoners or revolutionaries." 
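As a rough illustration of the kind of prediction the film describes (and emphatically not PredPol's actual, proprietary method), here is a minimal Python sketch that bins historical incident records into city-block and time-of-day cells and flags the cells with the most past activity. The grid, the sample records and the six-hour windows are all invented for illustration.

```python
from collections import Counter

# Hypothetical historical incident records: (block_x, block_y, hour_of_day).
# Invented for illustration; not real data.
historical_incidents = [
    (3, 7, 22), (3, 7, 23), (3, 7, 21), (3, 8, 22),
    (5, 2, 14), (5, 2, 15), (3, 7, 22), (8, 1, 3),
]

def hotspot_cells(incidents, top_n=3):
    """Count incidents per (block, 6-hour window) cell and return the
    cells with the most past activity -- a crude stand-in for the
    place-and-time 'red squares' described in the film."""
    counts = Counter(
        (x, y, hour // 6)  # 6-hour windows: 0-5, 6-11, 12-17, 18-23
        for x, y, hour in incidents
    )
    return counts.most_common(top_n)

for (x, y, window), n in hotspot_cells(historical_incidents):
    print(f"block ({x},{y}), hours {window * 6:02d}-{window * 6 + 5:02d}: {n} past incidents")
```

Even this toy version makes Thompson's warning concrete: the "prediction" is nothing more than a reflection of where past incidents were recorded, so biased or incomplete historical data flows straight into the red squares the software draws.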
For more uneasiness, stand in front of Hershman Leeson's interactive installation "Shadow Stalker" and you'll see a projection of your body-shaped shadow overlaid on a Google map showing the area around the museum. 
Input your email address, and personal details retrieved from internet databases immediately start to pop up -- your age, old home addresses, the names of relatives. (Don't worry, the museum's legal department has made sure no bank account or other such information will show.) Still, the personal information that flashes for all to see is a sobering reminder of how readily and widely available the data collected on us, some of it without our knowledge, has become. 
But with AI helping to solve critical problems in transportation, retail and health care (spotting breast cancer missed by human eyes, for example), not all works touch on its potentially threatening aspects. The exhibit also spotlights Forensic Architecture, an independent research agency based at the University of London. It uses machine-learning methods to analyze citizen-gathered evidence, like phone photos and footage, in open-source investigations of civil and human rights violations, such as a suspected chemical weapons attack in Syria. 



Pierre Huyghe's sculpture of a woman with a live bee colony for a head. This type of bee is less prone to swarm than other varieties, the museum says.  
Fine Arts Museums of San Francisco

One of the goals of the exhibit, curator Claudia Schmuckli tells me, is "to present a more nuanced picture of how AI operates in the world, to move from this polarizing conversation that pitches technophobes and technophiles against each other, and to really allow us to take a step back and look at how does AI operate in the world today? Where are the opportunities? Where are the current problems?" 
Uncanny Valley: Being Human in the Age of AI runs through Oct. 25, expanding through the first floor of the museum into its sculpture garden. There, visitors will find a sculpture of a crouching, nude woman with a live bee colony for a head. Curated materials from the exhibit suggest Pierre Huyghe's creation is a metaphor for neural networks modeled on the human brain. But it could just as easily represent how many people feel trying to make sense of the increasing complexities of being human in an AI-driven world.
