A Blog by Jonathan Low

 

Oct 21, 2019

Will IoT and the End of Moore's Law Contribute to Climate Change?

Unless new innovations in technology dramatically improve productivity and lower energy use, the answer to the question is yes. JL


Dean Takahashi reports in VentureBeat:

150 billion chips shipped, 50 billion coming in the next two years, a trillion IoT devices by 2035: that eventually leads to a data explosion. But I'm concerned that Moore's Law is slowing down at the same time this happens. Does that mean that we have to build a ton of new things, a ton of new datacenters, all kinds of new computers, without the efficiency gains of Moore's Law? And if that's true, what is the ultimate impact on climate change?
At Arm's recent TechCon 2019 conference in San Jose, California, the chip licensing company said its partners have shipped more than 150 billion chips to date. And thanks to the rise of the internet of things (IoT), or making everyday objects smart and connected, another 50 billion chips are expected to ship in the next two years. By 2035, Arm estimates there will be a trillion IoT devices.
These IoT devices will connect to the cloud and require a lot of computing in datacenters. But will those datacenters contribute to climate change? After all, the efficiencies we get from Moore’s Law are slowing down. For decades, the advances of technology, as predicted by Intel chair emeritus Gordon Moore, made chips faster, smaller, and cheaper by doubling the number of transistors on a chip every couple of years. But that progress is slowing down, as chip companies have acknowledged.
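For a sense of scale, the cadence matters because the law compounds: doubling every two years is roughly a 32x gain per decade, while stretching the cadence to three years cuts that to about 10x. Here is a back-of-envelope sketch of the difference, where the one-billion-transistor starting point and the doubling periods are illustrative assumptions rather than figures from the article:

```python
def transistors(n0: float, years: float, doubling_years: float) -> float:
    """Projected transistor count after `years`, doubling every `doubling_years`."""
    return n0 * 2 ** (years / doubling_years)

n0 = 1e9  # a hypothetical 1-billion-transistor chip today
print(f"2-year cadence, 10 years out: {transistors(n0, 10, 2):.1e}")  # ~3.2e+10 (32x)
print(f"3-year cadence, 10 years out: {transistors(n0, 10, 3):.1e}")  # ~1.0e+10 (10x)
# In this toy model, a slower cadence forfeits about 3x of the decade's
# efficiency gain, which is the gap the datacenter buildout would have to absorb.
```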
Of course, the rise of IoT devices could also make the world more energy-efficient, as these devices will monitor energy waste and curtail it. I have no idea what’s going to happen here, but I had an interesting conversation about it with Drew Henry, senior vice president at Arm, in an interview at TechCon 2019.
VentureBeat: I had an interesting calculation I wanted to see if you had done or could do. You're pretty good at all these numbers, as far as 150 billion chips shipped, 50 billion coming in the next two years, a trillion IoT devices by 2035. That eventually leads to a data explosion.
But I'm concerned that Moore's Law is slowing down at the same time that this happens. Does that mean that we have to build a ton of new things, a ton of new datacenters, all kinds of new computers, without the efficiency gains of Moore's Law? And then if that's true, what is the ultimate impact on climate change? Does this melt the planet down? How much more power gets involved, even though these things are more energy-efficient than they used to be?
Drew Henry: Right, what happens with it all running? What’s amazing to me now is, given how creative engineers are — there are new sensors evolving that are just sipping power out of the ether … these capacitive-driven little tiny sensors. There are some that just sip RF power that’s generally in the ether.
You remember Phil Carmack, from Nvidia? Phil runs all the silicon at Google now, but he did a little stint between Nvidia and there, where he worked for a small startup that was doing just that, [thinking] about how to put a capacitive load into a little sensor and sip power out of it so you could fire it up when you needed to and it would send a small amount of data out, and then it could go back to sleep. But my point is there are these emerging technologies that are reusing power. That’s really interesting stuff.
When I started thinking about this problem — I’ve kind of left it to the fact that, you know what? Engineers are pretty creative. There are new ideas where you can see early glimpses of them, but then also there are new ideas that will just be discovered as people try to figure this thing out.
There's a great blog written by James Hamilton on the subject of datacenters. He's the infrastructure guru at Amazon. For years, at every Amazon re:Invent, he would do a Monday night talk about their infrastructure and all that. This guy sails around the world on a boat and runs datacenters from his boat. But he writes about the future, and he's written some really interesting things about the future of datacenters and the need for datacenters to be able to handle the amount of data that's being driven. He references a little bit of that. He's pretty responsive on his blog, by the way. If you post the question there, I bet he'd respond to it.
But it is true that the amount of computing that we’re going to have to do is going to increase, and it’s going to have to become more efficient.
VentureBeat: And we talk about this on a day when our power grid is going crazy. One answer I hear is that IoT is going to save a lot of energy, because there’s so much waste that already happens in the world. But the notion of somebody inventing some kind of new Bitcoin analog, that would keep all the datacenters busy again … a lot more demand for graphics cards, that’s not exactly helping the planet.
Are there other calculations that you’re doing around — clearly you’re thinking a lot about what has to happen. Given some of the things you talked about with the edge and datacenters, do you think things are on track to make all these visions come to life in a fairly short time?
Henry: On track, generally I think yes. You can begin to see little tiny early indicators of it in interesting new things. Autonomous vehicles, everyone talks about that, but I like just walking into the Amazon Go stores and seeing how that whole experience is. Have you been in one? The cashierless ones. You walk in and scan [your phone] right there.
It’s interesting to see those types of applications happening now, because those are these really super early indicators of how things are going to evolve. You take the labor out of managing the cash register and put the labor out on the floor. It becomes much more involved in interacting with customers and less involved in checking them out.
I did an analysis last year of a billion drop cams. A billion drop cams generate like 400 to 600 exabytes of data a month, more data than transits the internet. I was interested in that because I believe the video sensor is such an information-rich device. It generates a lot of information. For instance, the Amazon retail stores, these Go stores, they’re just equipped with video sensors sitting there monitoring what goes on in the store, which enables you to do this kind of cashierless stuff. Those are the kinds of things we’re involved with. There are these early indicators of stuff that’s happening. Being able to go in and change the way that purchasing experience happens, where online meets brick and mortar, those are pretty interesting early indicators.
The reason I say that is because you equip a bunch of video sensors in those stores, and then you absolutely have to have an edge compute device that does the merging of that data. You need to do real-time analytics, real-time decision-making with it. But then you’re backhauling all that data to the cloud to be able to make longer-term decisions based on what’s a better way to operate the whole system. It’s a good early indicator of the way these systems are going to work.
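Henry's drop-cam figure is easy to sanity-check: 400 to 600 exabytes a month spread over a billion cameras is roughly 400 to 600 GB per camera per month, a sustained stream of about 1.5 Mbps, which is plausible for compressed video. A quick back-of-envelope check (the camera count and monthly volume are from the interview; the derived bitrate is simple arithmetic):

```python
cameras = 1_000_000_000           # one billion drop cams (from the interview)
monthly_bytes = 500 * 10**18      # midpoint of the 400-600 exabytes/month estimate
seconds_per_month = 30 * 24 * 3600

per_camera_bytes = monthly_bytes / cameras            # ~500 GB per camera per month
sustained_mbps = per_camera_bytes * 8 / seconds_per_month / 10**6

print(f"{per_camera_bytes / 10**9:.0f} GB/camera/month, ~{sustained_mbps:.1f} Mbps")
# -> 500 GB/camera/month at ~1.5 Mbps: about one compressed HD stream per camera
```

The store pipeline he describes follows a common edge pattern: make the latency-sensitive call next to the sensor, keep only event summaries, and backhaul those for long-term analytics. Below is a minimal sketch of that split, in which the function names and queue-based hand-off are illustrative assumptions, not Amazon's actual design:

```python
import queue

events: queue.Queue = queue.Queue()   # summaries bound for the cloud

def analyze(frame: bytes) -> str:
    # Stand-in for on-device vision inference (hypothetical).
    return f"shopper-event({len(frame)} bytes seen)"

def edge_loop(frames) -> None:
    """Real-time path: decide per frame locally; raw video never leaves the store."""
    for frame in frames:
        events.put(analyze(frame))    # keep the summary, drop the pixels

def backhaul() -> None:
    """Batch path: ship aggregated events upstream for longer-term decisions."""
    batch = []
    while not events.empty():
        batch.append(events.get())
    print("upload to cloud:", batch)  # stand-in for the actual upload

edge_loop([b"frame-1", b"frame-22"])
backhaul()
```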
VentureBeat: Is 5G helping there?
Henry: It does, in a couple of ways. 5G for smartphones is really interesting, what it's going to be able to do, but I think it's far less interesting compared to what will happen in the IoT world. In the IoT world, you have the option with 5G to move from operator-controlled to privately controlled networks. You don't have to have an operator manage your 5G network for a private environment. On a factory floor, you don't want to wire the entire thing for the sensors you've got. You want the benefit of the high bandwidth, the connectivity, and the low latency you get out of it, so it becomes close to wired speed.
Nokia's CEO talks about this all the time. Their view is that we'll take these 5G systems and run them in industrial IoT applications, and the operator doesn't need to be involved because it's not an operator-controlled world. It's unlicensed spectrum. You're applying that same model to what's effectively a much more meshed, Wi-Fi-type network today. It's a completely different class of wireless connectivity.
I think that’s really interesting. 5G, to me, becomes much more enabling of those types of things as a use case in the industrial IoT space than it is about making cell phones better. That’s interesting as well, but I don’t think it’s all that enabling.
VentureBeat: I wonder if 5G — you mentioned it’s 10 to 100 times faster. That’s a wide range. I wonder if it just gets filled up, or if it provides you with some extra bandwidth to do a lot more on the computing front — more cloud computing, more edge computing, or a combination of the two.
Henry: It does from the standpoint of shipping data back and forth. It doesn't really change the speed of compute, because that's controlled by the compute systems you have. But the ability to ingest a whole bunch of data really fast, process it, and return an answer, that's where I think it's going to be interesting.
VentureBeat: And more so through private 5G, essentially?
Henry: Like I said, going back to the industrial IoT example, yes. But from a standpoint of how your smartphone evolves into new types of applications that are 5G applications, that will take advantage of those capabilities, that’ll emerge. It’s hard to predict, because a decade ago who could predict any of this?
VentureBeat: The supercomputer in the car, that still might be necessary? Or can you cram enough cloud computing in so that it’s not?
Henry: It's necessary today because we have a view of how you have to make decisions today. You're merging so much data to make these decisions. Remember, I was at Nvidia not that long ago, when that whole DARPA autonomous car thing happened. Remember that? It's not long between then and where we are now. Compared to what we understood back then about the kind of data you needed to merge to make decisions, and the computing you needed to do it, what we do now is pretty advanced.
The point is that the supercomputer in a car has become less of a buzzword because we’re becoming much more efficient at understanding what data you need to be able to do the kind of autonomy we need to be able to do. Certainly at some point — your smartphone is the supercomputer of whatever past decade. But things like the introduction of Bfloat that I was talking about today — half the amount of data structure that you have to deal with, and yet generally speaking you’re able to make about the same kind of decisions on it.
There’s a lot of invention happening right now to make it even more efficient. You don’t need to have this Lawrence Livermore kind of supercomputer sitting in a car doing decision-making, because you’ve filtered out what information is most important to make your decisions. You’ve figured out better ways of doing compute.
This Bfloat thing that we did is really cool. One of the principal engineers who did it is Nigel Stephens. And this is a funny story: one of the guys I worked with at Silicon Graphics was the principal guy behind FP16. He now works at Arm, and he's been working on this Bfloat thing with us. It's just one of those funny connections that happen in our industry.
But the big thing we've realized, as Nigel Stephens wrote in the blog about Bfloat last week, is that you could do a lot of this training and inferencing on CPUs. You can do it with really small amounts of silicon area. And then, of course, that leads to less power consumed doing it. These are the interesting inventions that are starting to happen as people gain a deeper understanding of the algorithms you need to run to do this kind of stuff. That's one of the early examples of how we're becoming more efficient at the way we do this stuff. We don't have to be so brute-force.
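For context on why halving the data helps: bfloat16 (the "Bfloat" Henry mentions) keeps float32's sign bit and full 8-bit exponent but truncates the mantissa from 23 bits to 7, so each value takes half the bytes while retaining float32's dynamic range, and moving bytes, not arithmetic, is where much of the power goes. A minimal software sketch of the conversion follows; round-to-nearest-even is one common choice here, and hardware implementations may differ:

```python
import struct

def float32_to_bfloat16(x: float) -> int:
    """Return the 16-bit bfloat16 encoding of a float32 value."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    # Keep sign (1 bit) + exponent (8 bits) + top 7 mantissa bits,
    # rounding to nearest even on the 16 bits being dropped.
    bias = 0x7FFF + ((bits >> 16) & 1)
    return ((bits + bias) >> 16) & 0xFFFF

def bfloat16_to_float32(b: int) -> float:
    """Widening back is exact: zero-fill the dropped mantissa bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))
    return x

print(bfloat16_to_float32(float32_to_bfloat16(3.14159)))  # 3.140625, within ~0.03%
```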
VentureBeat: How is Arm in the datacenter doing?
Henry: I'm so happy. You have the public announcements, and then there's a lot more in progress that's not public today. I'll leave the public announcements as the indicator, but it's happening in both ways. Amazon has deployed [Arm] throughout. They're always the first mover. They've deployed the Arm-based Graviton CPUs that their Annapurna group built. Those are now deployed in their A1 instances, as well as in the infrastructure that does things like function-as-a-service, the stuff where you don't buy the compute cycle, you buy an execution of some kind of application. We're thrilled with that deployment.
VentureBeat: What is the niche for it? Or is it really going head-on with Xeon?
Henry: Our focus has been on internet infrastructure, which is about networking technologies, storage technologies, compute technologies, and security technologies. It's devices at the edge, switches, offload devices that sit inside every server, and server application processors. The sum total of that is something like 300 million processor units a year. Our share of that has grown from 27% last year to 30% this year.
On the server side, which is the thing most people fixate on (though from a volume standpoint it's actually really small), two interesting phenomena are happening. The first is that workloads are moving off the application processor. This is the network, storage, and security processing. We now have millions of systems deployed where that processing, which used to run as an application load on a Xeon, runs on Arm instead. If you count it as workload TAM, Arm has taken a pretty substantial portion of that TAM.
Now, in the classic application processor tier, the one doing the web frontend and Nginx processing, database processing with Redis, or other types of application processing, Graviton is the first indicator of hyperscale adoption of Arm. And then, of course, merchant silicon providers like Ampere and Marvell have entered the marketplace, along with the work happening in China with HiSilicon and others. These companies are now building it, and they're addressing what I think is a growing compute market, spanning the edge stuff I talked about today and the compute side.
The adoption is quite good. As I said last year, pay close attention to the next six months, because that's when you're going to see a lot of interesting announcements.
