A Blog by Jonathan Low

 

Nov 18, 2016

Artificially Intelligent Chips Are the Cloud's Next Frontier. And Microsoft (!) Is the Leader

In a co-evolving technological universe, the software and algorithms required to sort and interpret the vast amounts of data now being generated need equally 'intelligent' hardware to perform their tasks. 

This poses both a challenge and an opportunity for the enterprises that produce them. And the fact that Microsoft has taken the lead in this space, at least for now, suggests that tech companies' 'muscle memory' from the early days, when devices were destiny, retains considerable value. JL

Cade Metz reports in Wired:

Deep neural networks learn discrete tasks by analyzing vast amounts of data, and lean heavily on GPUs and other specialized chips. As deep learning continues to spread across the tech industry—driving everything from image and speech recognition to machine translation to security—companies and developers will require cloud computing services that provide this new breed of hardware.
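To make the hardware dependence concrete, here is a minimal sketch of what "leaning on GPUs" looks like in practice. PyTorch is assumed purely for illustration (the article names no framework), and the toy model and fake data are hypothetical; the point is that the same training code runs on a CPU, only far more slowly.

```python
# A minimal sketch, assuming PyTorch: train a tiny classifier, moved to a
# GPU when one is available. The heavy lifting (matrix multiplies in the
# forward and backward passes) is what GPUs accelerate.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A toy image classifier standing in for the "image recognition" workloads
# mentioned above.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
).to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Fake data in place of a real dataset: 64 grayscale 28x28 "images".
images = torch.randn(64, 1, 28, 28).to(device)
labels = torch.randint(0, 10, (64,)).to(device)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()   # gradient computation: the GPU-bound step
    optimizer.step()
```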
To build OpenAI—a new artificial intelligence lab that seeks to openly share its research with the world at large—Elon Musk and Sam Altman recruited several top researchers from inside Google and Facebook. But if this unusual project is going to push AI research to new heights, it will need more than talent. It will need enormous amounts of computing power.
Google and Facebook have the resources needed to build the massive computing clusters that drive modern AI research, including vast networks of machines packed with GPU processors and other specialized chips. Google has even gone so far as to build its own AI processor. But although OpenAI says it’s backed by more than a billion dollars in funding, the company is taking a different route. It’s using cloud computing services offered by Microsoft and perhaps other tech giants. “We have a very high need for compute load, and Microsoft can help us support that,” says Altman, the president of the tech incubator Y Combinator and co-chairman of OpenAI alongside Musk, the founder of the electric car company Tesla.
“Anyone who wants a trained neural net model to handle real enterprise workloads either uses multiple GPUs or twiddles their thumbs for days,” says Chris Nicholson, founder of Skymind, a San Francisco startup that helps other companies build deep learning applications. “So every company that needs AI to improve the accuracy of its predictions and data recognition [will run] on them. The market is large now and will be huge.” Skymind’s own operations run on GPU-backed cloud computing services offered by Microsoft and Amazon.
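Nicholson's "multiple GPUs" point can be sketched in a few lines. Again PyTorch is an assumption rather than anything named in the article, and `nn.DataParallel` is just one simple mechanism for splitting work across GPUs; the model and batch sizes below are illustrative.

```python
# A hedged sketch of multi-GPU training: replicate a model across whatever
# GPUs are present so each batch is split among them.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

if torch.cuda.device_count() > 1:
    # Each forward pass scatters the batch across all visible GPUs and
    # gathers the outputs, roughly dividing wall-clock time per step.
    model = nn.DataParallel(model)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

batch = torch.randn(256, 512).to(device)
outputs = model(batch)  # shape: (256, 10), computed across all GPUs
```

With one GPU (or none) the wrapper is skipped and the code still runs, which is exactly the trade-off Nicholson describes: the work gets done either way, but without enough GPUs you wait.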
A research outfit like OpenAI, which is trying to push the boundaries of artificial intelligence, requires more specialized computing power than the average shop. Deep learning research is often a matter of extreme trial and error across enormous farms of GPUs. But even if you’re training existing AI algorithms on your own data, you still need help from chips like this.
At the same time, as Altman points out, the hardware used to train and execute deep neural networks is changing. Google's TPU is an example of that. Inside its own operation, Microsoft is moving to FPGAs, a breed of programmable chip. Chip makers like IBM and Nervana, now owned by Intel, are developing similar chips devoted to AI applications. As Altman explains, GPUs were not designed for AI. They were designed for rendering graphics. "They just happen to be what we have," he says.

Altman says that although OpenAI won't use Azure exclusively, it is moving a majority of its work to the Microsoft cloud computing service. OpenAI chose Azure, he explains, in part because Microsoft CEO Satya Nadella and company gave the startup an idea of where their cloud "roadmap" is headed. But it's unclear what that roadmap looks like. He also acknowledges that OpenAI picked Azure because Microsoft has provided his high-profile operation with some sort of price break on the service.
According to Altman and Harry Shum, head of Microsoft's new AI and Research group, OpenAI's use of Azure is part of a larger partnership between the two companies. In the future, Altman and Shum tell WIRED, the two companies may also collaborate on research. "We're exploring a couple of specific projects," Altman says. "I'm assuming something will happen there." That too will require some serious hardware.
