A Blog by Jonathan Low

 

Jun 19, 2019

Why Apple, Google and Facebook Are Raiding Animal Research Labs For AI Talent

Animal research, particularly in neuroscience, may offer insights into how brains function that could inform faster, more useful artificial intelligence. JL

Sarah McBride and Ashlee Vance report in Bloomberg:

Software development attracts tech companies to neuroscientists just as strongly as their insights about animal cognition. The modern brain researcher has to know how to code and work with volumes of information, much as an AI staffer at Google would to improve an advertising algorithm or the lane-merging abilities of a self-driving car. Animal-centric neuroscientists are accustomed to working with unconventional ideas. Studying this area has led to insights into how neural circuits function, informing how humans move, feel, and emote. Neuroscientists are drawn to the private sector to do more exciting, weirder work.
Jaguar is a mouse. He lives at Harvard’s Rowland Institute, where, from time to time, he plays video games on a rig that looks like it belongs in A Clockwork Orange. Metal bars position him inside a small platform in front of a metal lever; his mission is to find a virtual box’s edges by feel. To do this, he reaches with his right paw to grab the joystick, which can rotate 360 degrees, and maneuvers it until he feels feedback from the machine. When he reaches the right target area—say, an edge of the box—a tube rewards him with a dribble of sugar water.
To track Jaguar’s brain activity, researchers have genetically altered him so his neurons emit fluorescent light when they fire. This light is visible through a glass plate fused to part of his skull with dental cement. A microscope affixed above the plate records images of his brain lighting up as he plays. “Within one session, you can teach them new rules and literally watch thousands of neurons learn this process and see how they change,” says Mackenzie Mathis, the neuroscientist leading the experiments.
In decades past, Mathis’s insights would have served only to advance what we know about mice and brain function. Today, however, she’s one of a growing number of specialized animal researchers assisting in the development of artificial intelligence software and brain-computer interfaces. She wants to discover how mice learn, in part because it could inform how we teach computers to learn. Watching mice react to unexpected situations in video games, for instance, could someday let her pass on similar skills to robots.
Other neuroscientists are studying zebra finches’ songcraft. Some are becoming expert in the electrical conductivity of sheep skulls. Still more are opting for the classics of high school biology: fruit flies, whose neural setup is relatively simple to behold, or worms, who wring considerable juice from their few neurons. Over the past few years, technology companies have been raiding universities to hire away such people. Apple, Facebook, Google, and Twitter all hired doctoral candidates from one of Mathis’s recent fellowship programs, she says. “The Ph.D. students would have jobs before they got their degrees.”
Animals have long played important roles in advancing corporate science, of course, particularly for medical treatments. But the leap required to translate insights from the zebra finch’s sound-processing anatomy into Siri’s voice-recognition software—or mouse gaming into a future when Amazon.com Inc. runs all-android warehouses—is of an entirely different order. With whole new industries at stake, the race to unlock the secrets of the animal mind is getting weird.
In 1958, Cornell neurobiologist Frank Rosenblatt unveiled the perceptron, one of the earliest attempts to mimic inside a computer the architecture of a brain. Its processing elements, which he called neurons, coordinated to figure out, say, whether a particular photo depicted a man or a woman—a primitive stab at image recognition. The lingo used to describe the perceptron stuck, and Facebook, Google, and other companies continue to describe their vast AI computing systems as “neural nets” with millions of neurons working in unison.
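The learning rule Rosenblatt proposed is simple enough to sketch in a few lines. Here is a minimal illustration in Python with NumPy (a modern rendering, not his original hardware): each “neuron” computes a weighted sum of its inputs, passes it through a hard threshold, and adjusts its weights after every mistake.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Rosenblatt-style perceptron: a weighted sum of inputs passed
    through a hard threshold, with weights nudged after each mistake."""
    w = np.zeros(X.shape[1])  # one weight per input "synapse"
    b = 0.0
    for _ in range(epochs):
        for xi, label in zip(X, y):            # labels are 0 or 1
            pred = 1 if xi @ w + b > 0 else 0  # hard-threshold activation
            error = label - pred               # -1, 0, or +1
            w += lr * error * xi               # strengthen or weaken connections
            b += lr * error
    return w, b
```

On toy, linearly separable data the rule converges; for Rosenblatt's man-or-woman task, X would hold pixel intensities. The “learning” is just arithmetic on weights.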
The shorthand vastly exaggerates the overlap between the realms of computation and cognition even today. It’s tough to replicate something you don’t really understand. The true workings of the brain—for instance, how a group of neurons stores a memory—remain elusive to neuroscience, so the neurons’ digital counterparts can’t help but be flawed imitations. They’re rudimentary processing engines trained to perform reams of statistical calculations and identify patterns, with the imprimatur of a biological name.
Mice playing video games are helping researchers puzzle out the secrets of neural networks.
Photographer: Cassandra Klos for Bloomberg Businessweek
Still, with the technology industry chasing what’s known as artificial general intelligence, or AGI, the walls between the two realms have grown more porous. The implicit goal is a functionally sentient machine that can figure out things by itself, instead of relying on humans to train it, and that independently wants things. To the relief of some ethicists, we’re a long way from AGI, but many computer scientists and neuroscientists are betting that brains will show us the way.
Separately, several companies are battling to build brain-computer interfaces that could help prostheses behave like natural limbs or allow people to download knowledge into their minds. Elon Musk’s Neuralink Corp. is one such company; another is Kernel, run by tech multimillionaire Bryan Johnson. Neuroscientists are advising these startups on everything from how to blast information through skulls to how to keep electrodes from causing infections in test subjects.
The scientific principles common to both endeavors are evident at Mathis’s Harvard lab. “Here’s our mouse palace,” she says, opening the door to a room filled with dozens of mice in plastic cages. The animals scamper around, cocking their heads and twitching their whiskers as they inspect visitors. Their clean quarters emit only a mild whiff of rodent. A red light fills the habitat to make sure the creatures, nocturnal by nature, stay awake during the day, ready to contribute to science.
That science includes the virtual-box game and a much harder one that looks like a primitive form of Mario Kart. For the latter, a mouse straddles two custom, motorized circular plates, its paws nestled into grooves on either side. A screen displays a green pathway with a blue rectangle at the end. As the mouse begins to run in place, trying to approach the blue rectangle, it must steer carefully to stay on the virtual pathway. Like humans, the mice take on a glassy-eyed cast as they play. The sessions last about a half-hour before they lose interest.
The microscopes peering into their brains record an incredible amount of information. “We can cover most all of their sensory, motor cortex, and decision-making areas at the same time,” Mathis says. The researchers sometimes change the games’ rules and controls—for instance, by making joystick pulls result in zigzag motions instead of straight ones—then look for differences in how the neurons light up. Mathis has also been working to shut off subsets of neurons, such as the nodes associated with learning, to check how the remaining ones react. One early insight: When it comes to decoding motion, the sensory cortex seems to play a larger role, alongside the motor cortex, than previously thought. “These neurons are doing a lot more than engaging in one specific thing,” she says.
Mackenzie and Alex Mathis.
Photographer: Cassandra Klos for Bloomberg Businessweek
One of her primary motivations is to learn more about how animals rapidly adjust to changes in their physical environment. When you pick up an object of unknown weight, for example, your brain and body quickly compute what kind of force is needed to deal with it. Robots can’t currently do that, but one infused with the neuronal learning patterns of a mouse potentially could. Mice are an unusually strong candidate to help bridge the gap, Mathis says. Their brains are complex enough to demonstrate high-level decision-making but simple enough for the researchers to deduce the connections given enough time.
We’ve only relatively recently developed computers powerful enough to capture, process, and analyze the volume of data produced by a subset of the average mouse brain’s roughly 75 million neurons. And it’s only within the last couple of years that AI software has advanced far enough to automate much of the research. Mathis and her husband, Alex Mathis, a fellow neuroscientist, have developed open source software called DeepLabCut to track their subjects’ movements. The application uses image recognition to follow a mouse’s tiny digits as it plays a game and track its reaction to the sugar-water reward.
Scientists used to do this type of work manually, jotting down every sip of water in their notebooks. The software now performs in minutes tasks that once required weeks’ or months’ worth of attentive human labor. “There’s a paper on primates from 2015 where they track quite a few body parts, like knuckles and limbs and one arm, and the monkey has different tasks, like reaching for things and holding them,” Alex says. “The first author of the paper wrote me and said his Ph.D. could have been two years shorter.” More than 200 research centers now use DeepLabCut to follow all manner of animals.
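DeepLabCut is distributed as a Python package, and its documented workflow is brief. Here is a hedged sketch of the typical steps; the project name and video path below are invented for illustration.

```python
import deeplabcut

# Start a project from one or more videos (paths here are hypothetical).
config = deeplabcut.create_new_project(
    "mouse-joystick", "rowland", ["videos/jaguar_session01.mp4"]
)

# Pull out a sample of frames and hand-label the body parts of interest
# (paw, digits, joystick) in the GUI; this is the only manual step left.
deeplabcut.extract_frames(config)
deeplabcut.label_frames(config)

# Train an image-recognition network on the labeled frames ...
deeplabcut.create_training_dataset(config)
deeplabcut.train_network(config)

# ... then track every labeled body part across the full videos.
deeplabcut.analyze_videos(config, ["videos/jaguar_session01.mp4"])
```

After a few hundred labeled frames, the trained network takes over the frame-by-frame tracking that researchers once did by hand.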
This type of software development and analysis attracts tech companies to neuroscientists just as strongly as their insights about animal cognition. The modern brain researcher has to know how to code and work with incredible volumes of information, much as an AI staffer at Google would to improve an advertising algorithm or the lane-merging abilities of a self-driving car. Animal-centric neuroscientists are also accustomed to working with unconventional ideas. “You tend to get creative people that are a little bit cowboy,” Mackenzie says. “People who are willing to bet their career on trying to study a black box.”
Tim Otchy doesn’t do mice. He’s a bird man. A research assistant professor at Boston University, Otchy sports a tattoo of a zebra finch on his right forearm. It shows the short, squat bird with a bright orange beak sitting on a branch and gazing pensively at the sky. “I do really like birds,” he says, sitting in an office filled with books—The Cellular Slime Molds, Nonlinear Dynamics and Chaos, and Principles of Brain Evolution, to name a few.
Birds’ semantic understanding of their songs, if properly understood, could be applied to voice-recognition software.
Photographer: Cody O’Loughlin for Bloomberg Businessweek
While Otchy was majoring in mechanical engineering at the Georgia Institute of Technology in the late 1990s, he also worked for a company that specialized in automating factory systems. His job was to teach robots to identify things, whether gadgets or auto parts, and sort them as they came down a conveyor belt. “It was just astounding to me how difficult it was,” he says. “These were tasks that children do.” His frustrations left him determined to uncover the inner workings of perception, decision-making, and learning. He left the factory line and, eventually, made his way to neuroscience and the zebra finch.
Songbirds such as the zebra finch have an unusual skill set. Whereas most creatures know instinctively how to make noises, songbirds learn to imitate what they hear, then vary the tunes, demonstrating some semantic understanding of their songs. Decades of research have pinpointed the structure in the finch’s brain, what’s known as the song nucleus, responsible for this behavior. Studying this area has led to rich insights into how neural circuits function, in turn informing other research around how humans move, feel, and emote. Figuring out how the birds imitate one another could help explain how we do the same thing, which could prove important in, say, teaching language skills to a machine.
Otchy works with about 300 birds at a BU aviary. For one experiment, a researcher will outfit a zebra finch with a backpack containing batteries that power a host of electronics attached to its skull. The bird is then placed in a sound booth about the size of a microwave, where it sings for days while Otchy and his team peer into its brain via mechanisms similar to the ones Mathis uses for her mice. As researchers have learned more about the zebra finch’s sound processing centers, they’ve sought to answer increasingly precise questions about its brain. “We don’t know how the information of how to ride a bicycle, or fly a helicopter, or speak Japanese, is stored in the brain,” Otchy says. “One day, we will have that knowledge.”
Otchy in his lab at Boston University.
Photographer: Cody O’Loughlin for Bloomberg Businessweek
He came to run this research center, the Gardner Lab, after its namesake, Tim Gardner, took a leave of absence to work at Neuralink, which seeks to augment the human brain with a superfast computer processor. The departure created considerable buzz among neuroscientists and among students excited by Musk’s vision. (Gardner, who didn’t respond to requests for comment, is moving the lab to the University of Oregon; he’ll stay on at Neuralink part time.) “It’s a fantasy at this point, but I find the idea that we could, one day in the distant future, really write information directly into the brain … amazing,” Otchy says. “I would love to be able to contribute in even a small way to figuring out how.”
Birdsong researchers are among the hottest hires in a wide range of AI fields. After his dissertation at the University of California at Berkeley and a stint at Apple Inc., Channing Moore joined Google’s sound-understanding group, where he creates sound-recognition systems as sophisticated as the company’s image-recognition software, capable of distinguishing a siren from a crying baby. At Intel Corp., another Berkeley Ph.D., Tyler Lee, is drawing on his zebra finch research to improve voice processing—the type of technology that ends up in voice-command software such as Siri. “We’re trying to ask very similar questions,” he says. “How can I take auditory input, process it in a way that I can understand what a person is saying, what is the noise they’re in, what’s the environment they’re in?”
Berkeley professor Frederic Theunissen, who runs the lab where Moore and Lee studied, says many potential applications arise from the focused research he oversees. “It’s a special set of skills you gain if you’re interested in automatic speech recognition, voice recognition, and so forth,” Theunissen says. Voiceprint-based security systems for phones and other devices are one example. Another is noise reduction in phone calls and videos. That application came out of Moore’s work with the noise-resistant birds. The neurons of the zebra finch are capable of isolating another finch’s song from the surrounding cacophony.
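The article doesn’t spell out how Moore’s system works, but a classic engineering analogue of what the finch does is spectral subtraction: estimate the background noise’s frequency signature and strip it from the mixture. A toy Python sketch, under the assumption that the opening half-second of a recording contains noise alone:

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(audio, fs, noise_seconds=0.5):
    """Crude denoiser: estimate the noise floor from the first
    `noise_seconds` of the clip and subtract it from every frame."""
    f, t, Z = stft(audio, fs=fs, nperseg=1024)
    mag, phase = np.abs(Z), np.angle(Z)

    # Average magnitude over the leading frames, assumed noise-only.
    # Hop length is nperseg // 2 = 512 samples by default.
    noise_frames = max(1, int(noise_seconds * fs / 512))
    noise_floor = mag[:, :noise_frames].mean(axis=1, keepdims=True)

    # Subtract the floor, clamp at zero, and keep the original phase.
    cleaned = np.maximum(mag - noise_floor, 0.0) * np.exp(1j * phase)
    _, out = istft(cleaned, fs=fs, nperseg=1024)
    return out
```

Production systems learn far richer noise models, but the principle of pulling one song out of the surrounding cacophony is the same.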
Academics have been trying to declare it the age of neuroscience since the Reagan era, but in the early years of this century, the prospects for a young neuroscience graduate were low, and so were their numbers. Fifteen years ago, American universities counted fewer than 1,500 neuroscience undergrads and handed out fewer than 400 doctorates, according to the U.S. Department of Education. And even with such modest numbers, schools didn’t have enough full-time work or grant money to go around.
When Drew Robson graduated from Princeton with a math degree in 2005, his undergrad counselor gave him a memorable piece of advice: Whatever you do, don’t pursue neuroscience. Robson ignored it and went on to found the Rowland Institute’s RoLi Lab with Jennifer Li, his partner and Princeton sweetheart. They’ve seen the field grow to the point that U.S. schools now award about 5,000 neuroscience bachelor’s degrees and 600 doctorates a year. “We’ve had this explosion of tools in the last 10 years,” Robson says.
Team RoLi studies zebra fish, members of the minnow family whose bodies are transparent when they’re young, which allows researchers to observe their neurons without skull-plate surgeries and dental glue. A special mobile microscope Robson and Li developed helps them record which neurons are active while the fish swim. To capture different facets of zebra fish behavior, they might vary the current—leading an animal to turn away or swim harder in the same direction.
Hundreds of genetic varieties of zebra fish are used for research at Drew Robson and Jennifer Li’s Harvard lab.
Photographer: Cassandra Klos for Bloomberg Businessweek
Like many of their peers, Robson and Li are well-versed in the relationship between brain science and AI technology. Last year the couple bought a Tesla, and they take professional delight in watching the car’s self-driving systems evolve. As it dodges other vehicles, it recalls strategies their zebra fish use to achieve goals, such as quickly switching from hunter mode to fast-swimmer mode when they spot a predator. Their deep knowledge of such behaviors could someday inform Tesla Inc.’s neural nets, as the company tries to advance its self-driving technology beyond basic object recognition to humanlike decision-making.
“That’s many orders of magnitude more data,” Li says. “If you were to use biology, you can essentially cheat and look at what the solution should be without having to reinvent the wheel.” Robson says he wouldn’t mind trying to help Tesla solve those kinds of problems someday.
The fluid borders between public and private enterprise in neuroscience have opened the question of who’ll control prospective mergers between humans and machines. The universities that long performed the most ambitious research are now rivaled by tech companies with access to larger computers and datasets. A fresh Ph.D. can expect to earn about $50,000 a year at a typical university, whereas private companies offer salaries well into six figures, with a vastly higher ceiling beyond. Chris Fry, another zebra fincher, was earning $10.3 million a year as senior vice president for engineering at Twitter within a decade and a half of leaving Theunissen’s lab. “There is a massive exodus of talent from academia right now,” says Mackenzie Mathis, the mouse researcher. “It’s a choice to stay in academia.”
Beyond the pay, many neuroscientists are drawn to the private sector because it tends to give them a chance to do more exciting, even weirder work—not to mention a break from writing grant applications. Yet decamping for Silicon Valley can also mean cutting off promising lines of research or leaving colleagues adrift. When Gardner went to work for Neuralink, one of his Ph.D. students switched schools, only to see his next eminent adviser take a leave of absence to work on his own startup.
Li and Robson are heading to the government-funded Max Planck Institute for Biological Cybernetics in Tübingen, Germany, starting in September. The fish couple stay on the public side because they like the freedom and flexibility of what Robson calls the “playground setting.” Yes, the animal experiments can do unnatural things to harmless, helpless creatures. They can also encourage a humanizing perspective—something we might want to see AI exhibit.
Four years ago, before they’d finished their trackable microscope, Li and Robson were using an adhesive gelatin to keep young zebra fish swimming in place for a couple of hours, to measure how their neurons lit up. One morning the two arrived at the lab to find a big surprise: A larva they’d left swimming was still going 18 hours later, far beyond what they’d expected. “This animal was a champion,” Robson says. “Perfect,” Li adds. “His behavior was perfect.” Because of the rigors of the experiment, the researchers couldn’t save their hero for posterity, but they did the next best thing: Li and Robson installed his mom in a special aquarium as their pet. They named her Fred, after Amy Acker’s whip-smart character from the TV show Angel.
Robson and Li say the development of AI and brain-computer interfaces is going to force humans to become more humane. After all, if one of our goals is to imbue thinking machines with our own morals, we’ll have to grapple more than we’re used to with what morality is. Questions like: Who deserves the power of enhanced thought? Should a self-driving car choose to save a passenger over a pedestrian? And how smart do machines have to get before they’re considered part of that equation? “That’s a fundamentally very moral question—how do you value life?” says Li, who studied philosophy as an undergrad.
“It forces us to be rigorous in what our morality really boils down to,” Robson says. “You have to commit to something …”
