A Blog by Jonathan Low

 

Jan 30, 2019

How Intel Figured Out the Way To Compute In 3 Dimensions - And Why It Matters

Because it may break open the bottleneck that has been holding back the scaling of new technologies like artificial intelligence. JL

Greg Satell reports in Digital Tonto:

Intel rose to dominance because of its confidence in Moore’s law. We’ve gotten innovation for free, as every few years new chips allow us to do more than we could before. (But) it takes time for information to travel from one chip to another. As chips have become faster, they need to wait longer to get the information they need. (And) we want to have more specialized chips for advanced applications. The more different chips, the more computing time we lose. If you can’t increase performance by speed, then reduce the distance between chips. That’s 3D stacking. If you don’t explore, you won’t discover. If you don’t discover you won’t invent, and if you don’t invent you will be disrupted.
In 1965, Intel co-founder Gordon Moore published a remarkably prescient paper, which predicted that the number of transistors on a computer chip would double about every two years. For half a century, this process of doubling has proved so consistent that today it is commonly known as Moore’s Law and has driven the digital revolution.
For most of the past 50 years, Intel has led the industry by doubling transistors faster than its rivals, investing billions into massive fabs to produce the next generation of chips before anyone else. More recently, however, the giant has stumbled, losing ground to firms such as AMD, Qualcomm and Nvidia.
Yet the company recently announced a breakthrough with its Foveros technology that has the potential to put it back on top. No longer just cramming more transistors onto a silicon wafer, it has solved a decades old physics problem to stack chips on top of each other. This has the potential to not only change Intel’s fortunes, but to reshape the industry for years to come.

The Race To The Bottom

Ever since Moore published his famous paper, chip manufacturers have continually come up with new techniques to keep cramming more transistors into the same space. That’s what made every new generation of chips faster and how we’ve been able to shrink computers from giant mainframes to smartphones we can fit in our pockets.
To put this pace of advancement in perspective, consider that the average smartphone today is more powerful than the computers that put a man on the moon. We’ve basically gotten innovation for free: every few years, new chips come off the line that allow us to do far more than we ever could before.
Yet there are theoretical limits to every technology. Atoms are only so small, and that limits how many transistors we can put on a chip before quantum effects begin to inhibit their function. We’re beginning to approach that limit now with the current generation of 7nm chips. We may be able to shrink them down to 5nm or 3nm, but probably not much further than that.
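For a rough feel for the scale involved (keeping in mind that modern node names like “7nm” are marketing labels rather than literal feature sizes), here is a back-of-the-envelope sketch using silicon’s lattice constant, the repeat spacing of atoms in the crystal:

    # Back-of-the-envelope: how many silicon lattice cells fit across
    # a feature of a given width. The lattice constant is a known
    # physical value; the feature widths are just the node names.
    SILICON_LATTICE_NM = 0.543

    for feature_nm in (7, 5, 3):
        cells = feature_nm / SILICON_LATTICE_NM
        print(f"a {feature_nm}nm feature spans ~{cells:.0f} lattice cells of silicon")

    # a 7nm feature spans ~13 lattice cells of silicon
    # a 5nm feature spans ~9 lattice cells of silicon
    # a 3nm feature spans ~6 lattice cells of silicon

At a handful of atoms across, there is simply nowhere left to shrink.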
Yet there is another physical impediment to improving chip performance besides the size of atoms: the speed of light. That is where Intel’s new breakthrough comes in.

The Von Neumann Bottleneck

In 1945, the legendary mathematician John von Neumann came up with a new model for the computer. Previously, computing machines had fixed programs, which allowed them to perform only a single function. For example, a desk calculator can do arithmetic for you, but it can’t run a database. Von Neumann suggested storing programs in memory to add flexibility.
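To make the idea concrete, here is a minimal, purely illustrative Python sketch (none of this comes from von Neumann’s paper): the program is just data sitting in the same memory it operates on, so changing what the machine does means changing memory, not rewiring hardware.

    # A hypothetical toy machine. Its "program" is stored as ordinary
    # data alongside the value it manipulates.
    memory = {
        "program": [("load", 2), ("add", 3), ("mul", 4)],  # instructions as data
        "acc": 0,                                          # accumulator register
    }

    def run(memory):
        """Fetch each stored instruction in turn and execute it."""
        for op, arg in memory["program"]:
            if op == "load":
                memory["acc"] = arg
            elif op == "add":
                memory["acc"] += arg
            elif op == "mul":
                memory["acc"] *= arg
        return memory["acc"]

    print(run(memory))  # (2 + 3) * 4 = 20

Swap in a different instruction list and the same machine computes something entirely different, which is exactly the flexibility von Neumann was after.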
That idea, known as the von Neumann architecture, has long been the standard for how computers are built. It consists of a set of chips, including a central processing unit, a control unit, and memory chips, as well as other chips that provide long-term data storage, graphics capability and so on.
These provide a computer with full functionality, but they come with a built-in problem: it takes time for information to travel from one chip to another. At first this wasn’t a big deal, but as chips have become faster, they need to wait longer and longer, in terms of computing cycles, to get the information they need to do their work.
A related problem is that as technology advances, we want more specialized chips for advanced applications like artificial intelligence. The more different kinds of chips there are, the more information has to move around and the more computing time we lose. Faster chips don’t really solve this problem; we just lose more cycles.
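A quick, illustrative calculation makes the point (the latency and clock figures are assumed round numbers, not measurements): the trip to memory takes the same wall-clock time regardless of the processor, so a faster chip simply burns more of its cycles waiting.

    # Assumed round figure for one off-chip main-memory access.
    DRAM_LATENCY_NS = 80

    for clock_ghz in (1, 2, 3, 5):
        cycle_time_ns = 1 / clock_ghz             # duration of one clock cycle
        stall_cycles = DRAM_LATENCY_NS / cycle_time_ns
        print(f"{clock_ghz} GHz chip: ~{stall_cycles:.0f} cycles spent waiting")

    # 1 GHz chip: ~80 cycles spent waiting
    # 2 GHz chip: ~160 cycles spent waiting
    # 3 GHz chip: ~240 cycles spent waiting
    # 5 GHz chip: ~400 cycles spent waiting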

Integrating The Integrated Circuit

If you can’t increase performance by speeding up the chip, then the obvious thing to do is to reduce the distance between chips. That’s the idea behind the architecture called 3D stacking, which integrates different chips much as the original integrated circuit combined various components on a single chip.
Stacking chips on top of each other would not only vastly reduce the time circuits need to wait for instructions from each other, increasing speed significantly; it would also decrease power usage, because the communication paths are far shorter. It seems like the perfect solution to a thorny problem.
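Some rough, assumed numbers show why distance is the lever. Electrical signals travel at a sizable fraction of the speed of light, so the only way to cut travel time (and, to first order, the energy spent driving the wires) is to shorten the path:

    # Signals on copper travel at very roughly half the speed of light;
    # 15 cm/ns is an assumed round figure, as are both distances.
    SIGNAL_SPEED_CM_PER_NS = 15

    def travel_time_ns(distance_cm):
        return distance_cm / SIGNAL_SPEED_CM_PER_NS

    board_trace_cm = 5        # chip-to-chip across a circuit board
    stacked_via_cm = 50e-4    # ~50 micrometers through a vertical connection

    print(f"across a board:  {travel_time_ns(board_trace_cm):.3f} ns")
    print(f"through a stack: {travel_time_ns(stacked_via_cm):.6f} ns")
    print(f"the stacked path is ~{board_trace_cm / stacked_via_cm:,.0f}x shorter")

    # across a board:  0.333 ns
    # through a stack: 0.000333 ns
    # the stacked path is ~1,000x shorter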
However, this approach comes with its own problems. Because the chips are stacked on top of one another, heat generated in the lower layers has a hard time escaping, and the stack tends to overheat. So while the technique has been used for memory chips, no one had been able to make it work for processor chips.
Until now, that is. This is essentially what Intel has achieved with its Foveros technology. The company has announced that it cracked the problem and that the technology will become available in the latter half of 2019. It is a tremendous achievement and has the potential to put Intel back on top.

What Great Companies Do Differently

Intel rose to dominance largely because of its confidence in Moore’s law. Throughout its history, it invested massive amounts of money into new fabs capable of making the next generation of chips. It was a risky strategy, but it allowed the company to consistently beat its competitors, over a period spanning decades, in getting faster chips to market.
More recently, however, the company has faltered. It has fallen behind in getting new 10nm and 7nm chips to market, forfeiting its performance advantage. To make matters worse, computer scientists found that graphics processing units (GPUs), a market dominated by rival Nvidia, run artificial intelligence algorithms better than the chips Intel makes.
For many firms, having a successful strategy go awry would be a death knell. Yet rather than simply trying to execute faster or cut costs, Intel kept exploring. “We’ve been working on this packaging technology for nearly 20 years,” Raja Koduri, a Senior Vice President at the company, told Reuters.
Compare that to General Electric, which had its strategy in the power industry similarly disrupted. Unlike Intel, however, it wasn’t able to invent its way out of trouble. The once-innovative firm had largely stopped inventing truly new technology after it came out with CT scanners in the 1970s. Today, the company is in dire straits.
Make no mistake. If you don’t explore, you won’t discover. If you don’t discover, you won’t invent. And if you don’t invent, you will eventually be disrupted. Intel had no guarantees that its investment in 3D technology would ever pan out. Yet it kept exploring, in this and many other areas, so that it could continue to compete long after its core technology became obsolete.
That’s what great companies do differently.