A Blog by Jonathan Low

 

Nov 2, 2016

Looking Beyond Silicon To Squeeze More Out of Chips

Silicon - and Moore's Law - may have limits, but new materials and new ideas about how to apply them could spur further gains in the cost-to-power equation. JL

John Markoff reports in the New York Times:

The A.I. system is emblematic of something even more significant for the microelectronics industry as it inches closer to the physical limits of semiconductors made with silicon: It uses 1/32 of the memory and operates 58 times as fast as rival programs. Better algorithms and new kinds of hardware circuits could help scientists continue to make computers that can do more and at a lower cost.
Ali Farhadi holds a puny $5 computer, called a Raspberry Pi, comfortably in his palm and exults that his team of researchers has managed to squeeze into it a powerful program that can recognize thousands of objects.
Dr. Farhadi, a computer scientist at the Allen Institute for Artificial Intelligence here, calls his advance “artificial intelligence at your fingertips.” The experimental program could drastically lower the cost of artificial intelligence and improve privacy because you wouldn’t need to share information over the internet.
But the A.I. system is emblematic of something even more significant for the microelectronics industry as it inches closer to the physical limits of semiconductors made with silicon: It uses 1/32 of the memory and operates 58 times as fast as rival programs.
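The article does not say how those savings are achieved, but the 1/32 memory figure is exactly what you would expect from storing each network weight as a single bit instead of a 32-bit floating-point number. The sketch below is a minimal illustration of that idea in Python with NumPy; the layer shape and the binarization scheme are assumptions for illustration, not a description of the Allen Institute system.

```python
import numpy as np

# A hypothetical network layer's weights as ordinary 32-bit floats.
rng = np.random.default_rng(0)
weights = rng.standard_normal((256, 512)).astype(np.float32)

# Binarize: keep only the sign of each weight, packed one bit per weight.
signs = weights >= 0                      # boolean array, one entry per weight
packed = np.packbits(signs)               # 1 bit per weight in memory

float_bytes = weights.nbytes              # 256 * 512 * 4 bytes
binary_bytes = packed.nbytes              # 256 * 512 / 8 bytes
print(float_bytes / binary_bytes)         # -> 32.0, the "1/32 of the memory" figure

# A dot product against sign-only weights needs no multiplications,
# only additions and subtractions, which is where speedups come from.
x = rng.standard_normal(512).astype(np.float32)
w_bin = np.where(signs, 1.0, -1.0).astype(np.float32)
y_approx = w_bin @ x                      # rough, sign-only stand-in for weights @ x
```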
There is a growing sense of urgency feeding this sort of research into alternative computing methods. For decades, computer designers have been able to count on cheaper and faster chips every two years. As transistors have shrunk in size, at regular intervals, computing has become both more powerful and cheaper at an accelerating rate — a concept known as Moore’s Law.
Two years ago, with manufacturing costs exploding and severe technical challenges growing, the cost of individual transistors stopped falling. That has ended — at least temporarily — the ability of computer makers to easily make new chips that are faster and cheaper.
But if silicon has its limits, ingenuity may not. Better algorithms and new kinds of hardware circuits could help scientists continue to make computers that can do more and at a lower cost.
“It’s been a fun ride,” said Thomas M. Conte, an electrical engineer at the Georgia Institute of Technology. “Today you’re entering this patchwork world where you are going to find a better solution for a particular problem, and that’s how we’re going to advance in the future.”
This summer, for example, Intel acquired Nervana Systems, a small maker of specialized hardware designed to run A.I. programs more efficiently.
Earlier this month, researchers at Argonne National Laboratory, Rice University and the University of Illinois at Urbana-Champaign published research demonstrating how a programming technique for an Intel microprocessor chip uses significantly less power to accomplish the same work.
The new approach is significant, according to supercomputer designers, because the high energy requirements of the fastest computers have become the most daunting challenge as scientists try to move from today’s petaflop — a quadrillion computations per second — machines to exaflop computers, which could perform a quintillion computations per second.
Such computers are considered necessary to solve fundamental scientific problems like predicting the risk of climate change to the future of humanity.
Because of the slowdown in Moore’s Law, the arrival of exascale computing has repeatedly been pushed back. Though it was originally expected in 2018, projections now push the next generation out as far as 2023.
The Argonne paper notes that a future supercomputer capable of an exaflop will multiply energy costs by a factor of a thousand. To reduce those energy demands, the researchers demonstrated how they used a conventional Intel chip and turned off half of its circuitry devoted to what engineers call mathematical precision. Then they “reinvested” the savings to improve the quality of the computed result.
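The factor of a thousand follows directly from the definitions above: an exaflop machine performs 1,000 times as many operations per second as a petaflop machine, so at the same energy per operation it draws 1,000 times the power. A quick sanity check in Python (the picojoules-per-operation figure is a made-up placeholder; only the ratio matters):

```python
PETA = 1e15                # operations per second, a petaflop machine
EXA = 1e18                 # operations per second, an exaflop machine
joules_per_op = 10e-12     # hypothetical 10 picojoules per operation, illustration only

print(PETA * joules_per_op)                              # -> 10,000 watts
print(EXA * joules_per_op)                               # -> 10,000,000 watts
print((EXA * joules_per_op) / (PETA * joules_per_op))    # -> 1000.0, whatever the per-op cost
```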
“Mathematical precision is like a knob you can turn,” said Krishna V. Palem, a Rice University computer scientist. “The question is what you do with the saved energy.”
The researchers experimented with using the various modes of the microprocessor in a manner similar to a gearshift in a car, automatically shifting from higher to lower precision and back as needed to solve a problem.
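The article does not reproduce the researchers' code, and the sketch below is not their method; it is a generic illustration of the gearshift idea using mixed-precision iterative refinement, a standard trick in numerical computing: do the bulk of the arithmetic in cheap 32-bit "low gear," then check and correct the answer with a little 64-bit "high gear" work. The matrix size and tolerance here are arbitrary assumptions.

```python
import numpy as np

def solve_mixed_precision(A, b, tol=1e-8, max_iters=20):
    """Solve A x = b, doing the heavy lifting in float32 ("low gear")
    and only the residual check and correction in float64 ("high gear")."""
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(max_iters):
        r = b - A @ x                                  # residual in full float64 precision
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break                                      # good enough: stop refining
        # Correction step reuses the cheap float32 solve, applied to the small residual.
        # (A real implementation would factor A32 once instead of re-solving each time.)
        dx = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += dx
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((500, 500))
b = rng.standard_normal(500)
x = solve_mixed_precision(A, b)
print(np.linalg.norm(A @ x - b))                       # small residual despite the float32 solves
```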
“There is a lot to be done by thinking more carefully on how you can save energy,” said Marc Snir, a veteran supercomputer designer and a computer scientist at the University of Illinois at Urbana-Champaign.
The Argonne researchers are exploring ideas put forward by Dr. Palem, who in 2003 first proposed an idea he described as “inexact computing.” He suggested trading off precision to make dramatic gains in computing efficiency. Originally, he explored the idea of inexactness as a way to make use of imperfect chips where portions of the transistors were not working because of manufacturing flaws.
More recently, he has turned to using his ideas to gain significant energy savings from today’s common processors.
Dr. Palem said that the group was planning to extend the Argonne research to more efficiently run mathematical models that relate to climate change.
With colleagues from Rice University and Seoul National University, he recently demonstrated how inexactness could be applied to the challenge of pinpointing an indoor location, since GPS usually doesn’t work inside buildings. The Rice researchers employed a technique known as a “hash function,” which involves representing a large chunk of data, like a digital photo, with a much smaller numerical value. They rely on that compact representation of the image to nail down the location.
While the Allen Institute researchers identified objects using extremely efficient versions of programs known as neural networks, the Rice scientists matched the surrounding scene captured by a smartphone camera against a library of imagery stored on the phone itself. The approach compresses all the bits that make up those photos and does location calculations on a simple hand-held computer — something that would normally require pinging a data network over the internet.
Like the Allen researchers, the Rice University scientists think that an algorithm efficient enough to run entirely on the phone can also preserve privacy, since nothing is sent over the internet. What’s more, they said in a recent paper, they were able to do it “500 times cheaper, both in energy and computation cost” than existing methods.
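Neither the article nor the quoted paper excerpt specifies the hash, so the sketch below substitutes a simple perceptual "average hash" to show the shape of the idea: shrink each photo to an 8 x 8 grid, keep one bit per cell (brighter or darker than the image's average), and identify the current scene by finding the stored photo whose bits differ the least. The image "library" here is random data standing in for the phone's stored imagery; the place names are made up.

```python
import numpy as np

def average_hash(img, grid=8):
    """Reduce a grayscale image (2-D array) to a grid*grid-bit fingerprint."""
    h, w = img.shape
    # Block-average down to a grid x grid thumbnail (stands in for real resizing).
    small = img[:h - h % grid, :w - w % grid] \
        .reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
    bits = (small > small.mean()).flatten()      # one bit per cell
    return np.packbits(bits)                     # 64 bits for an 8x8 grid

def hamming(a, b):
    """Number of differing bits between two packed hashes."""
    return int(np.unpackbits(a ^ b).sum())

# A stand-in "library" of reference photos stored on the phone.
rng = np.random.default_rng(2)
library = {name: rng.random((480, 640)) for name in ["lobby", "stairwell", "cafe"]}
hashes = {name: average_hash(img) for name, img in library.items()}

# A new camera frame: the cafe scene plus a bit of sensor noise.
query = library["cafe"] + 0.05 * rng.standard_normal((480, 640))
q_hash = average_hash(query)

best = min(hashes, key=lambda name: hamming(q_hash, hashes[name]))
print(best)    # -> "cafe": location inferred without sending anything off the device
```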
The lesson is that as engineering progress slows, advances will increasingly come from human creativity, computer scientists said.
In a Stanford University lecture last month, Alan Huang, an electrical engineer, showed how — by reconfiguring internet links in the shape of a doughnut rather than the two-dimensional mesh that is used now — it would be possible to cut internet delays in half, drastically speeding the delivery of digital video, while cutting the amount of computer equipment needed to deliver that data.
“You don’t need a quantum computer to do this,” he said, referring to a concept for a supercomputer. “You just need high school math.”
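That math is easy to reproduce. The sketch below is not Dr. Huang's analysis, just a brute-force hop count on a small grid showing why wraparound links help: the longest path across a flat mesh roughly halves when the edges are joined into a doughnut (torus), and the average path shrinks as well. The grid size is arbitrary, and real internet delay depends on much more than hop count.

```python
from itertools import product

N = 16  # nodes per side; any reasonably large grid shows the same pattern

def mesh_hops(a, b):
    """Hops between a=(x1, y1) and b=(x2, y2) on a flat 2-D mesh."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def torus_hops(a, b):
    """Same grid with wraparound links in both dimensions (the doughnut)."""
    dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
    return min(dx, N - dx) + min(dy, N - dy)

nodes = list(product(range(N), repeat=2))
pairs = [(a, b) for a in nodes for b in nodes if a != b]

for name, hops in (("mesh", mesh_hops), ("torus", torus_hops)):
    dists = [hops(a, b) for a, b in pairs]
    print(name, "worst case:", max(dists), "average:", round(sum(dists) / len(dists), 1))
# The worst-case path on the torus is about half the mesh's (16 vs 30 hops here),
# and the average path drops as well.
```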
