A Blog by Jonathan Low


May 4, 2017

The Race To Build An Artificial Intelligence Chip... For Everything

The cost and complexity of contemporary digital operations are driving the search for AI chips designed specifically to deliver the speed, quality, and complex calculations those operations demand.

They are appearing in data centers, but are headed for personal devices. JL

Cade Metz reports in Wired:

Neural networks can run faster and consume less power when paired with chips designed to handle the massive calculations AI systems require. Google says that in rolling out its TPU chip, it saved the cost of building 15 data centers. Google and Facebook push neural networks onto phones and VR headsets so they can eliminate the delay that comes when shuttling images to distant data centers. They need AI chips that can run on personal devices, too.
Yann LeCun once built an AI chip called ANNA. But he was 25 years ahead of his time.
The year was 1992, and LeCun was a researcher at Bell Labs, the iconic R&D lab outside New York City. He and several other researchers designed this chip to run deep neural networks—complex mathematical systems that can learn tasks on their own by analyzing vast amounts of data—but ANNA never reached the mass market. Neural networks were pretty good at recognizing letters and numbers scrawled onto personal checks and envelopes, but they didn’t work all that well when performing other tasks, at least not in any practical sense.
Today, however, neural networks are rapidly transforming the internet’s biggest players, including Google, Facebook, and Microsoft. LeCun now oversees the central artificial intelligence lab inside Facebook, where neural networks identify faces and objects in photos, translate from one language to another, and so much more. Twenty-five years later, LeCun says, the market very much needs chips like ANNA. And these chips will soon arrive in large numbers.
Google recently built its own AI chip, called the TPU, and this is widely deployed inside the massive data centers that underpin the company’s online empire. There, packed into machines by the thousands, the TPU helps with everything from identifying commands spoken into Android smartphones to choosing results on the Google search engine. But this is just the start of a much bigger wave. As CNBC revealed last week, several of the original engineers behind the Google TPU are now working to build similar chips at a stealth startup called Groq, and the big-name commercial chip makers, including Intel, IBM, and Qualcomm, are pushing in the same direction.
Companies like Google, Facebook, and Microsoft can still run their neural networks on standard computer chips, known as CPUs. But since CPUs are designed as all-purpose processors, this is terribly inefficient. Neural networks can run faster and consume less power when paired with chips specifically designed to handle the massive array of mathematical calculations these AI systems require. Google says that in rolling out its TPU chip, it saved the cost of building about 15 extra data centers. Now, as companies like Google and Facebook push neural networks onto phones and VR headsets—so they can eliminate the delay that comes when shuttling images to distant data centers—they need AI chips that can run on personal devices, too. “There is a lot of headroom there for even more specialized chips that are even more efficient,” LeCun says.
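To make the efficiency argument concrete, here is a minimal sketch of the workload in question, assuming PyTorch as the framework (the article does not name one): a neural network layer is essentially a large matrix multiplication, and the same model code simply gets moved onto whatever accelerator is available. The speedup comes from the hardware, not from changing the model.

```python
import torch
import torch.nn as nn

# A tiny feed-forward network: each layer is essentially a large
# matrix multiply plus a nonlinearity -- exactly the arithmetic
# that GPUs and TPU-style chips are built to accelerate.
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.ReLU(),
    nn.Linear(4096, 1000),
)

# Pick whatever accelerator is present; the model code is unchanged.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

batch = torch.randn(256, 1024, device=device)  # a batch of 256 inputs
with torch.no_grad():
    scores = model(batch)  # one forward pass: a few big matrix multiplies
print(scores.shape)        # torch.Size([256, 1000])
```

On a general-purpose CPU those multiplies are spread across a handful of cores; on a chip built for the purpose they map onto thousands of parallel arithmetic units, which is where the power and speed savings come from.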
In other words, the market for AI chips is potentially enormous. That’s why so many companies are jumping into the mix.

Tech Specialists

After acquiring a startup called Nervana, Intel is now building a chip specifically for machine learning. IBM is too, creating a hardware architecture that mirrors the design of a neural network. And more recently, Qualcomm has started building chips specifically for executing neural networks, according to LeCun, who is familiar with Qualcomm’s plans because Facebook is helping the chip maker develop technologies related to machine learning. Qualcomm vice president of technology Jeff Gehlhaar confirms the project. “We’re very far along in our prototyping and development,” he says.
Meanwhile, Nvidia is apparently pushing into the same area. Just last month, the Silicon Valley chip maker hired Clément Farabet, who explored this kind of chip architecture while studying under LeCun at NYU and went on to found a notable deep learning startup called Madbits, which was acquired by Twitter in 2014.
Nvidia is already a dominant force in the world of AI. Before companies like Google and Facebook can use a neural network to, say, translate from one language to another, they must first train it for this particular task, feeding it an enormous collection of existing translations. Nvidia makes the GPU chips that are typically used to accelerate this training stage. “For training, GPUs basically have cornered the market, particularly Nvidia GPUs,” LeCun says. But Farabet’s arrival may indicate that much like Qualcomm, Nvidia is also exploring chips that can execute neural networks once they’re trained.
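As an illustration of the training-versus-execution split LeCun describes, here is a hedged sketch of the training stage, again assuming PyTorch and a CUDA-capable Nvidia GPU; the model and data are stand-ins, not anything Google or Facebook actually use.

```python
import torch
import torch.nn as nn

# Training is the stage where Nvidia GPUs dominate today;
# fall back to CPU only so the sketch runs anywhere.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Stand-in data: in practice this would be an enormous labeled corpus.
inputs = torch.randn(512, 784, device=device)
labels = torch.randint(0, 10, (512,), device=device)

for step in range(100):                 # repeated forward/backward passes
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()                     # gradient computation: the heavy part
    optimizer.step()
```

The repeated backward passes over huge datasets are what make training so much heavier than execution, which is why the two stages increasingly call for different silicon.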
GPUs—or graphics processing units—were not designed for AI. They were designed for rendering graphics. But about five years ago, companies like Google and Facebook started using them for neural network training, just because they were the best option for the task, and LeCun believes they will continue to play this role. Coders and companies are now so familiar with GPUs, he says, and they have all the tools needed to use them. “[GPUs] are going to be very hard to unseat,” he says, “because you need an entire ecosystem.” But he also believes that a new breed of AI chips will significantly change the way the big internet companies execute neural networks, both in the data center and on consumer devices—everything from phones to smart lawn mowers and vacuum cleaners.
As Google’s TPUs have shown, dedicated AI chips can bring a whole new level of efficiency to data centers, especially as demand for image recognition services increases. In executing neural networks, they can burn less electrical power and generate less heat. “If you don’t want to boil a small lake, you might need specialized hardware,” LeCun quips.
Meanwhile, as virtual and augmented reality become more pervasive, phones and headsets will need similar chips. As Facebook explained last week in unveiling its new augmented reality tools, this kind of technology requires neural networks that can recognize the world around you. But augmented reality systems can’t afford to run this AI back in the data center. Sending all that imagery over the internet takes too long, ruining the effect of the simulated reality. As Facebook chief technology officer Mike Schroepfer explains, Facebook is already starting to lean on GPUs and other chips, called digital signal processors, for some tasks. But in the long run, devices will surely include an entirely new breed of chip. The need is there. And chipmakers are racing to fill it.
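The on-device story is largely about doing the same arithmetic at lower precision so it burns less power and fits in a phone's thermal budget. Here is a rough sketch of that idea using PyTorch's dynamic quantization as a stand-in; the article does not say which technique Facebook, Qualcomm, or the chip makers actually use.

```python
import torch
import torch.nn as nn

# A small inference-only model, e.g. the recognition step behind an AR effect.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 128))
model.eval()  # no training happens on the device

# Convert the linear layers to 8-bit integer arithmetic: smaller weights,
# cheaper multiplies -- the same trade specialized mobile chips make in hardware.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

frame_features = torch.randn(1, 512)   # e.g. features from one camera frame
with torch.no_grad():
    out = quantized(frame_features)    # runs locally, no round trip to a data center
```

Running the network locally, at whatever precision the hardware favors, is what removes the network round trip that would otherwise break the illusion of augmented reality.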
