A Blog by Jonathan Low

 

Aug 21, 2019

To Power AI, Startup Creates Giant Macro Chip

Big data and big demand fuel the need for bigger, more powerful chips to process all that information. JL

Cade Metz reports in the New York Times:

As big as a dinner plate — about 100 times the size of a typical chip — it would barely fit in your lap. The engineers behind the chip believe it can be used in giant data centers and help accelerate the progress of artificial intelligence in everything from self-driving cars to talking digital assistants like Amazon’s Alexa. New A.I. systems rely on neural networks, complex mathematical systems that can learn tasks by analyzing vast amounts of data. Chips like this one will play a key role in the race to create artificial intelligence. They could feed the creation of commercial products and government technologies, including surveillance systems and autonomous weapons.
The largest computer chips usually fit in the palm of your hand. Some could rest on the tip of your finger. Conventional wisdom says anything bigger would be a problem.
Now a Silicon Valley start-up, Cerebras, is challenging that notion. On Monday, the company unveiled what it claims is the largest computer chip ever built. As big as a dinner plate — about 100 times the size of a typical chip — it would barely fit in your lap.
The engineers behind the chip believe it can be used in giant data centers and help accelerate the progress of artificial intelligence in everything from self-driving cars to talking digital assistants like Amazon’s Alexa.
Many companies are building new chips for A.I., including traditional chip makers like Intel and Qualcomm and other start-ups in the United States, Britain and China.

Some experts believe these chips will play a key role in the race to create artificial intelligence, potentially shifting the balance of power among tech companies and even nations. They could feed the creation of commercial products and government technologies, including surveillance systems and autonomous weapons.
Google has already built such a chip and uses it in a wide range of A.I. projects, including the Google Assistant, which recognizes voice commands on Android phones, and Google Translate, which translates one language into another.
“There is monstrous growth in this field,” said Cerebras’s chief executive and founder, Andrew Feldman, a chip industry veteran who previously sold a company to the chip giant AMD.
New A.I. systems rely on neural networks. Loosely based on the network of neurons in the human brain, these complex mathematical systems can learn tasks by analyzing vast amounts of data. By pinpointing patterns in thousands of cat photos, for instance, a neural network can learn to recognize a cat.
That requires a particular kind of computing power. Today, most companies analyze data with help from graphics processing units, or G.P.U.s. These chips were originally designed to render images for games and other software, but they are also good at running the math that drives a neural network.
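For readers who want to see what “learning from data” means in practice, here is a toy sketch in Python (a generic illustration, not anything from Cerebras or the article): a single-layer network adjusts its weights until they match a hidden pattern in the data, and the heavy lifting is exactly the kind of matrix arithmetic that G.P.U.s accelerate.

    # A toy "neural network": one layer trained by gradient descent.
    # Purely illustrative; real systems have millions of weights.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: 200 examples with 4 features each, labeled by a hidden rule.
    X = rng.normal(size=(200, 4))
    true_w = np.array([1.5, -2.0, 0.5, 1.0])
    y = (X @ true_w > 0).astype(float)        # the "pattern" to be learned

    w = np.zeros(4)                           # the network's weights
    for step in range(500):
        p = 1 / (1 + np.exp(-(X @ w)))        # predictions: matrix math
        grad = X.T @ (p - y) / len(y)         # error signal, per weight
        w -= 0.5 * grad                       # nudge weights toward the data

    print("learned weights:", np.round(w, 2)) # points the same way as true_w

At real scale, that same loop runs over millions of weights and billions of examples, which is why raw matrix throughput, the G.P.U.’s specialty, dominates.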
About six years ago, as tech giants like Google, Facebook and Microsoft doubled down on artificial intelligence, they started buying enormous numbers of G.P.U.s from the Silicon Valley chip maker Nvidia. In the year leading up to the summer of 2016, Nvidia sold $143 million in G.P.U.s. That was more than double the year before.
But the companies wanted even more processing power. Google built a chip specifically for neural networks — the tensor processing unit, or T.P.U. — and several other chip makers chased the same goal.
A.I. systems operate with many chips working together. The trouble is that moving big chunks of data between chips can be slow, and can limit how quickly chips analyze that information.
“Connecting all these chips together actually slows them down — and consumes a lot of energy,” said Subramanian Iyer, a professor at the University of California, Los Angeles, who specializes in chip design for artificial intelligence.
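A rough back-of-envelope calculation shows why this matters. The bandwidth figures below are illustrative assumptions, not measurements of any real system:

    # Sketch of the data-movement problem. Bandwidth numbers are
    # assumed for illustration, not measured figures.
    GB = 1e9  # bytes

    activations = 4 * GB       # data one chip must hand to the next
    on_chip_bw = 1000 * GB     # assumed on-chip bandwidth, bytes/second
    off_chip_bw = 30 * GB      # assumed chip-to-chip link, bytes/second

    print(f"on-chip hop:  {activations / on_chip_bw * 1e3:5.1f} ms")
    print(f"off-chip hop: {activations / off_chip_bw * 1e3:5.1f} ms")

Under these made-up but plausible ratios, the off-chip hop is dozens of times slower. Keeping everything on one enormous die avoids the slow hop entirely, which is the bet Cerebras is making.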
Hardware makers are exploring many different options. Some are trying to broaden the pipes that run between chips. Cerebras, a three-year-old company backed by more than $200 million in funding, has taken a novel approach. The idea is to keep all the data on a giant chip so a system can operate faster.
Working with one big chip is very hard to do. Computer chips are typically built onto round silicon wafers that are about 12 inches in diameter. Each wafer usually contains about 100 chips.

Many of these chips, when removed from the wafer, are thrown out and never used. Etching circuits into the silicon is such a complex process that manufacturers cannot eliminate defects. Some circuits just don’t work. This is part of the reason that chip makers keep their chips small — less room for error, so they don’t have to throw as many of them away.
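The economics behind that caution can be seen with a textbook yield model. In the simple Poisson approximation below, the chance that a die has zero defects falls off exponentially with its area; the defect density is an assumed number for illustration, not a real fab figure:

    import math

    # Classic Poisson yield model: P(zero defects) = exp(-area * density).
    defects_per_cm2 = 0.1  # assumed defect density, illustration only

    for name, area in [("typical die", 1.0), ("large die", 8.0),
                       ("whole wafer", 460.0)]:
        good = math.exp(-area * defects_per_cm2)
        print(f"{name:12s} {area:6.1f} cm^2 -> {good:7.1%} defect-free")

Under any realistic defect density, a perfectly clean wafer-sized die is essentially impossible, which is why Cerebras designs around the defects instead, as described below.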
Cerebras said it had built a chip the size of an entire wafer.
Others have tried this, most notably a start-up called Trilogy, founded in 1980 by the well-known IBM chip engineer Gene Amdahl. Though it was backed by over $230 million in funding, Trilogy ultimately decided the task was too difficult, and it folded after five years.
Nearly 35 years later, Cerebras plans to start shipping hardware to a small number of customers next month. Mr. Feldman said the chip could train A.I. systems between 100 and 1,000 times faster than existing hardware.
He and his engineers have divided their giant chip into smaller sections, or cores, with the understanding that some cores will not work. The chip is designed to route information around these defective areas.
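Conceptually, that routing step resembles finding a path through a grid while avoiding cells marked as bad. The sketch below, over a hypothetical 5-by-5 grid of cores, illustrates the general idea only; it is not Cerebras’s actual design:

    from collections import deque

    GOOD, BAD = ".", "X"
    grid = [list(".....") for _ in range(5)]    # 5x5 array of cores
    grid[1][2] = grid[2][2] = grid[3][2] = BAD  # a column of dead cores

    def route(start, goal):
        """Breadth-first search for a shortest defect-free path."""
        queue, seen = deque([(start, [start])]), {start}
        while queue:
            (r, c), path = queue.popleft()
            if (r, c) == goal:
                return path
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= nr < 5 and 0 <= nc < 5
                        and (nr, nc) not in seen and grid[nr][nc] == GOOD):
                    seen.add((nr, nc))
                    queue.append(((nr, nc), path + [(nr, nc)]))
        return None  # no defect-free route exists

    print(route((2, 0), (2, 4)))  # detours above or below the dead column

As long as enough cores survive and the interconnect can detour around the rest, the wafer as a whole still functions, trading a little capacity for a usable chip.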
Significant questions hang over the company’s hardware. Mr. Feldman’s performance claims have not been independently verified, and he did not reveal how much the chip will cost.
The price will depend on how efficiently Cerebras and its manufacturing partner, the Taiwan-based chip maker TSMC, can build the chip.
The process is “a lot more labor intensive,” said Brad Paulsen, a senior vice president with TSMC. A chip this large also consumes large amounts of power, which means that keeping it cool will be difficult — and expensive. In other words, building the chip is only part of the task.
“This is a challenge for us,” Mr. Paulsen said. “And it is a challenge for them.”
Cerebras plans to sell the chip as part of a much larger machine that includes elaborate equipment for cooling the silicon with chilled liquid. It is nothing like what the big tech companies and government agencies are used to working with.
“It is not that people have not been able to build this kind of a chip,” said Rakesh Kumar, a professor at the University of Illinois who is also exploring large chips for A.I. “The problem is that they have not been able to build one that is commercially feasible.”
