Ben Dickson reports in Venture Beat:
As society turns to artificial intelligence to solve problems across ever more domains, we’re seeing an arms race to create specialized hardware that can run deep learning models at higher speeds and lower power consumption.
Some recent breakthroughs in this race include new chip architectures that perform computations in ways that are fundamentally different from what we’ve seen before. Looking at their capabilities gives us an idea of the kinds of AI applications we could see emerging over the next couple of years.
Neuromorphic chips
Neural networks are key to deep learning. They are composed of thousands or millions of small programs that each perform a simple calculation; together, those programs accomplish complicated tasks such as detecting objects in images or converting speech to text.
But traditional computers are not optimized for neural network operations. Instead, they are built around one or several powerful central processing units (CPUs). Neuromorphic computers use an alternative chip architecture to physically represent neural networks: they are composed of many physical artificial neurons that directly correspond to their software counterparts. This makes them especially fast at training and running neural networks.
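To make the "simple calculations" concrete, here is a minimal sketch in plain NumPy (illustrative only, not code for any particular neuromorphic platform) of what each artificial neuron does: a weighted sum of its inputs followed by a nonlinear activation. A conventional CPU simulates these neurons with instructions; a neuromorphic chip dedicates physical circuitry to each one.

import numpy as np

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus a bias,
    passed through a ReLU activation."""
    return max(0.0, np.dot(inputs, weights) + bias)

def dense_layer(inputs, weight_matrix, biases):
    """A layer is just many neurons applied to the same inputs;
    in software this is a matrix-vector product followed by the
    activation function."""
    return np.maximum(0.0, weight_matrix @ inputs + biases)

# Tiny example: 4 inputs feeding a layer of 3 neurons.
x = np.array([0.5, -1.2, 3.0, 0.7])
W = np.random.randn(3, 4)   # one row of weights per neuron
b = np.zeros(3)
print(dense_layer(x, W, b))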
The concept behind neuromorphic computing has existed since the 1980s, but it did not get much attention because neural networks were mostly dismissed as too inefficient. With renewed interest in deep learning and neural networks in the past few years, research on neuromorphic chips has also received new attention.
In July, a group of Chinese researchers introduced Tianjic, a single neuromorphic chip that could solve a multitude of problems, including object detection, navigation, and voice recognition. The researchers showed the chip’s functionality by incorporating it into a self-driving bicycle that responded to voice commands. “Our study is expected to stimulate AGI [artificial general intelligence] development by paving the way to more generalized hardware platforms,” the researchers observed in a paper published in Nature.
While there’s no direct evidence that neuromorphic chips are the right path to creating artificial general intelligence, they will certainly help create more efficient AI hardware.
Neuromorphic computing has also drawn the attention of large tech companies. Earlier this year, Intel introduced Pohoiki Beach, a computer packed with 64 Intel Loihi neuromorphic chips, capable of simulating a total of 8 million artificial neurons. Loihi processes information up to 1,000 times faster and 10,000 times more efficiently than traditional processors, according to Intel.
Optical computing
Neural networks and deep learning computations require huge amounts of compute resources and electricity. The carbon footprint of AI has become an environmental concern. The energy consumption of neural nets also limits their deployment in environments where there’s limited power, such as battery-powered devices.
And as Moore’s Law continues to slow down, traditional electronic chips are struggling to keep up with the growing demands of the AI industry.
Several companies and research labs have turned to optical computing to find solutions to the speed and electricity challenges of the AI industry. Optical computing replaces electrons with photons, and uses optical signals instead of digital electronics to perform computation.
Optical computing devices don’t generate heat the way electrical signals traveling through copper do, which reduces their energy consumption considerably. Optical computers are also especially suitable for fast matrix multiplication, one of the key operations in neural networks.
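As a rough sketch of why matrix multiplication dominates, the forward pass of a single fully connected layer over a batch of inputs is one large matrix product. The layer sizes below are made up purely for illustration; the multiply-accumulate count hints at the arithmetic an optical accelerator would take over from electronics.

import numpy as np

# Hypothetical layer sizes, chosen only for illustration.
batch_size, n_in, n_out = 64, 1024, 1024

X = np.random.randn(batch_size, n_in)   # a batch of input vectors
W = np.random.randn(n_in, n_out)        # the layer's weight matrix

Y = X @ W  # the core operation optical accelerators target

# Each output element needs n_in multiply-accumulate operations.
macs = batch_size * n_in * n_out
print(f"{macs:,} multiply-accumulates for one layer, one batch")
# -> 67,108,864 multiply-accumulates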
The past months have seen the emergence of several working prototypes of optical AI chips. Boston-based Lightelligence has developed an optical AI accelerator that is compatible with current electronic hardware and can improve performance of AI models by one or two orders of magnitude by optimizing some of the heavy neural network computations. Lightelligence’s engineers say advances in optical computing will also reduce the costs of manufacturing AI chips.
More recently, a group of researchers at Hong Kong University of Science and Technology developed an all-optical neural network. For the moment, the researchers have developed a proof-of-concept model simulating a fully connected, two-layer neural network with 16 inputs and two outputs. Large-scale optical neural networks can run compute-intensive applications ranging from image recognition to scientific research at the speed of light and with lower energy consumption.
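For a sense of scale, the sketch below builds a software network with the same interface the researchers describe: 16 inputs and two outputs across two fully connected layers. The hidden-layer width is a made-up placeholder, since it isn't given here, and ReLU stands in for whatever nonlinearity the optical system implements.

import numpy as np

n_inputs, n_hidden, n_outputs = 16, 16, 2  # hidden width is assumed

# Two fully connected layers, matching the proof-of-concept's topology.
W1, b1 = np.random.randn(n_hidden, n_inputs), np.zeros(n_hidden)
W2, b2 = np.random.randn(n_outputs, n_hidden), np.zeros(n_outputs)

def forward(x):
    h = np.maximum(0.0, W1 @ x + b1)  # hidden layer
    return W2 @ h + b2                # two-output layer

print(forward(np.random.randn(n_inputs)))
print("parameters:", W1.size + b1.size + W2.size + b2.size)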
Huge chips
Sometimes, the solution is to scale larger. In August, Cerebras Systems, a Silicon Valley startup that came out of stealth in May, unveiled a massive AI chip that packs 1.2 trillion transistors. At 46,225 square millimeters, the Cerebras chip is more than 50x larger than the largest Nvidia graphics processor and contains 50x more transistors.
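A quick back-of-the-envelope check shows how those ratios work out. The GPU figures below are assumptions, roughly the numbers commonly cited for Nvidia's GV100 die, not values taken from the article.

# Back-of-the-envelope comparison; the GPU figures are assumed
# reference points (roughly those cited for Nvidia's GV100 die).
cerebras_area_mm2 = 46_225
largest_gpu_area_mm2 = 815

print(f"area ratio: {cerebras_area_mm2 / largest_gpu_area_mm2:.0f}x")
# -> roughly 57x, consistent with "more than 50x larger"

cerebras_transistors = 1.2e12
largest_gpu_transistors = 21.1e9  # also an assumed figure
print(f"transistor ratio: {cerebras_transistors / largest_gpu_transistors:.0f}x")
# -> roughly 57x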
Big chips speed up data processing and can train AI models at faster rates. Cerebras’s unique architecture also reduces energy consumption in comparison with GPUs and traditional CPUs.
The size of the chip will limit its use in space-constrained settings, of course, though its makers have mostly designed it for research and other domains where real estate is not a serious issue.
Cerebras recently secured its first contract with the U.S. Department of Energy. The DoE will be using the chip to accelerate deep learning research in science, engineering, and health.
Given the variety of industries and domains that are finding applications for deep learning, there’s little chance that a single architecture will dominate the market. But what’s certain is that the AI chips of the future will be very different from the classic CPUs that have been sitting in our computers and servers for decades.