
Q&A: Computing with an infinite power supply

A computing trend that has been quietly proving itself since the 1940s will have a profound impact on wireless data gathering.
Written by Christie Nicholson, Contributor

Today, in the palm of our hands, we can hold a smartphone with computing power similar to that of machines that required large computer rooms back in the 1960s. That we can own such tiny yet powerful devices at all is remarkable: less than four decades ago, only the largest governments could afford such computers. It is Moore's Law, the prediction that the number of components on a computer chip will double approximately every two years, that explains this exponential change in the computing industry. It might seem impossible to truly grasp the consequences of such rapid change, but consider this: such exponential change is akin to our average lifespan of 75 years shrinking, over four decades, to a mere 4 minutes. Just imagine the ramifications!
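To get a feel for how quickly that kind of doubling compounds, here is a rough back-of-the-envelope sketch in Python; the 40-year window is just an illustrative assumption, not a figure from the article.

# Rough illustration of how a Moore's Law-style doubling compounds.
# The 2-year doubling period and 40-year window are illustrative assumptions.
doubling_period_years = 2.0
window_years = 40.0

growth_factor = 2 ** (window_years / doubling_period_years)
print(f"Components per chip grow roughly {growth_factor:,.0f}x over {window_years:.0f} years")
# -> roughly 1,048,576x, i.e. about a million-fold increase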

But as amazing as this trend is, another equally profound trend has been quietly changing computing, and it puts us on the verge of a revolution in the power sources that fuel our digital devices. Battery power gets most of the attention, but what we fail to see is the incredible shift toward exponentially greater energy efficiency in computing itself. And this will lead to a surge of wireless data gathering unlike anything we've experienced before.

To explain this trend and its mind-boggling consequences, SmartPlanet turned to energy expert and visiting professor at Stanford University, Jonathan Koomey, author of Cold Cash, Cool Climate and Turning Numbers into Knowledge: Mastering the Art of Problem Solving.

SmartPlanet: You’ve spoken and written about “The computing trend that will change everything.” Can you briefly describe that trend?

Jonathan Koomey: Everyone knows about Moore’s Law. Back in 1965 when Gordon Moore looked at the changes in the density of computing chips, he found that every year or so there was a doubling of the number of components you could fit on a chip. In 1975 he went back and found that things had changed a bit. The density of computer chips was doubling then every couple of years. And that trend has held more or less constant for the last thirty plus years. Now, in concert with that increase in density we also had an increase in the performance of computers. And it turns out since the mid-1970s the performance of computers has doubled about every year and a half, so a little bit faster than the density of chips.

SP: OK so computing performance is thought of as separate from density of computing chips, even though they are somewhat related.

JK: Yes, when you ask what Moore's Law is, you often get back, "Oh, it's the doubling of performance every year and a half." And that turns out to be true in the PC era, but Moore never talked about performance; he talked about the density of components on a chip. What most folks don't know is that the energy efficiency of computers has also doubled every year and a half. And that trend actually started back in the 1940s.

SP: It started that far back.

JK: It’s something that has gone on throughout the history of digital computing since the dawn of the computer age.

SP: Can you give us some examples of how profound this trend actually is?

JK: Sure. Take your modern-day MacBook Air, which is a very efficient computer nowadays, and a very light one, only a few pounds at the most. If that MacBook Air operated at the energy efficiency of computers from twenty years ago, its fully charged battery would last all of 2.5 seconds. Today its fully charged battery lasts 10 hours. That's 10,000 times better.

And the world's fastest supercomputer, the Fujitsu K, operates at what is called 10.5 petaflops; a petaflop is a huge number of operations per second, 10^15 operations per second. It currently uses 12.7 megawatts, which is enough to power a mid-sized town. But assuming these trends continue, in twenty years a computer with the same calculating power will consume about as much electricity as a toaster oven. That's the rate of improvement we're talking about, a truly remarkable rate of increase in the efficiency of computing.
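To put rough numbers on those two examples, here is a quick back-of-the-envelope sketch in Python. It simply compounds the doubling-every-year-and-a-half trend Koomey describes; the 10-hour battery life and 12.7 MW figures come from the interview, and the comparison to the quoted 2.5-second figure is approximate.

# Back-of-the-envelope check of the two examples above, assuming energy
# efficiency doubles every 1.5 years (the trend described in the interview).
doubling_period_years = 1.5
years = 20.0

efficiency_gain = 2 ** (years / doubling_period_years)
print(f"Efficiency gain over {years:.0f} years: ~{efficiency_gain:,.0f}x")  # ~10,000x

# MacBook Air: today's 10 hours of battery life, divided by the 20-year gain.
battery_hours_today = 10.0
seconds_at_old_efficiency = battery_hours_today * 3600 / efficiency_gain
print(f"Battery life at 20-year-old efficiency: ~{seconds_at_old_efficiency:.1f} seconds")
# ~3.5 seconds, the same ballpark as the 2.5 seconds quoted above

# Fujitsu K: 12.7 MW today, divided by a further 20 years of efficiency gains.
k_computer_watts = 12.7e6
future_watts = k_computer_watts / efficiency_gain
print(f"Same performance in 20 years: ~{future_watts:,.0f} W (toaster-oven territory)")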

SP: Why is battery power considered our biggest remaining challenge for improving everything from the personal computer to robotics?

JK: The importance of this trend is that the energy efficiency of computing devices doubles every year and a half. That means that a year and a half from now, a computer with the exact same performance will need only half the battery power it needs today. And over the span of a decade that's about a hundredfold improvement in the energy efficiency of the computer. So the need for battery power goes down by a factor of a hundred every ten years.
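(A quick check of that arithmetic, not part of Koomey's answer: doubling every 1.5 years compounds over ten years to 2^(10 / 1.5) = 2^6.7, or roughly a factor of 100.)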

The most interesting cases are sensors that can scavenge power from stray light, heat, motion, or even radio and TV signals. Such sensors can operate indefinitely.

SP: You mention sensors that can harvest energy from stray television and radio signals and transmit data from a weather station to an indoor display every five seconds. How are these sensors harvesting background energy?

JK: Okay, so if you have a wire that is your radio antenna, the radio waves induce currents in the wire. Those currents are very small, but the way they ebb and flow contains information about the signal you are receiving, and that allows you to hear the voice coming over the radio waves. You need an amplifier because these signals are very faint, but the amplifier then makes the sound audible. Now, those currents are tiny, but if your computer uses a very tiny amount of electricity, those currents are enough to power the device. The amount the device needs is so small that it can just run on those little currents.
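To get a sense of how much power is actually floating around in those stray signals, here is an illustrative free-space estimate in Python using the standard Friis transmission equation. The transmitter power, distance, frequency, and antenna gains are hypothetical numbers chosen for illustration, not figures from the interview.

import math

# Illustrative estimate of RF power available for harvesting (free-space Friis).
# All of the numbers below are assumptions, not measured values.
tx_power_w = 10_000.0   # hypothetical broadcast transmitter: 10 kW effective radiated power
distance_m = 4_000.0    # hypothetical distance from the transmitter: 4 km
frequency_hz = 600e6    # hypothetical UHF TV frequency: 600 MHz
tx_gain = 1.0           # transmit gain already folded into the ERP figure
rx_gain = 1.0           # small, roughly isotropic harvesting antenna

wavelength_m = 3e8 / frequency_hz
# Friis: P_rx = P_tx * G_tx * G_rx * (wavelength / (4 * pi * d))^2
received_w = tx_power_w * tx_gain * rx_gain * (wavelength_m / (4 * math.pi * distance_m)) ** 2

print(f"Received power: ~{received_w * 1e6:.1f} microwatts")
# On the order of a microwatt: tiny, but enough for an ultra-low-power sensor
# that sleeps most of the time and wakes briefly to sense and transmit.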

SP: But this doesn't necessarily mean that sensors need to harvest energy. We could still use batteries, which will last a lot longer simply because the computing devices won't need as much battery power as they do today, right?

JK: Yes, absolutely. Some data centers use mobile sensors with lithium batteries that last for twenty or thirty years. Having a much more efficient computer, or sensor, helps you even if you decide to stick with batteries. To me the revolutionary applications are those where you completely separate the sensors from the power source and operate them indefinitely. If you can do that in a small, inexpensive package, you can manufacture millions of these sensors and spread them around. UC Berkeley and Intel came up with the concept of what they called smart dust: basically tiny, tiny sensors that are very cheap and use very little power. Ultimately the vision is to create millions of these sensors and use them in ways we've never been able to before. And of course you could always have sensors that use batteries, that's fine, but avoiding the battery altogether helps you reduce size, weight, and cost.
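For the battery-powered case, lifetime is essentially capacity divided by average current draw. A minimal sketch, using hypothetical but typical numbers for a primary lithium cell and a heavily duty-cycled sensor (these are assumptions, not figures from the interview):

# Rough battery-lifetime estimate for a duty-cycled wireless sensor.
# Capacity and average current are illustrative assumptions.
battery_capacity_mah = 2400.0   # e.g. an AA-size primary lithium cell
avg_current_ua = 10.0           # average draw of a sensor that sleeps most of the time

lifetime_hours = battery_capacity_mah * 1000.0 / avg_current_ua   # mAh -> uAh, then / uA
lifetime_years = lifetime_hours / (24 * 365)
print(f"Estimated lifetime: ~{lifetime_years:.0f} years")
# ~27 years, ignoring self-discharge: consistent with the
# "twenty or thirty years" figure mentioned above.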

SP: OK, so we know that computing is becoming exponentially more energy efficient. What about data transmission? If sending data is the limiting factor, then what's the point of all that efficiency?

JK: Efficiency is still improving for wireless transmission as well; it's just not improving quite as fast. The important thing is that we're pretty far from the theoretical limits there. And a lot of it relates to our own cleverness in using software to transmit information efficiently.

SP: Could you give us an example of such cleverness?

JK: Most of these sensors transmit their data wirelessly. One of the innovations is avoiding the need for one huge wireless network. For instance, in your house you have a wireless router, and that router is the single point that connects to the Internet. Everything in your house connects to that router. So the innovation in these sensors was having the sensors communicate with each other instead of communicating directly with one wireless router. They can send information in little hops, from sensor one to sensor two to sensor three to sensor four, to bring it back to the main router.

So the amount of power a little sensor needs to send the information is a lot less, because it only has to reach the sensor next to it instead of transmitting all the way back to the main router. That's an innovation in our cleverness, a change in how we think about sending information that allowed us to become a whole lot more efficient in shipping it over a wireless network. It's innovations like that, ones we haven't even thought of yet, that can have a huge impact on data transmission efficiency. And since we're far from the theoretical limits, we just have to get a lot more clever, and we should be able to be a whole lot more efficient in our transmission of data.
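A toy model makes the multi-hop point concrete. In a simple first-order radio energy model, transmit energy grows with distance raised to a path-loss exponent, so several short hops can cost less radio energy than one long hop, at the price of some fixed per-hop electronics overhead. The constants below are illustrative assumptions, not measured values.

# Toy comparison of single-hop vs. multi-hop transmit energy for one bit,
# using a simple first-order radio energy model. All constants are illustrative.
E_ELEC_J_PER_BIT = 50e-9      # energy spent in the radio electronics per bit, per hop
AMP_J_PER_BIT_M2 = 100e-12    # amplifier energy per bit per square meter
PATH_LOSS_EXP = 2             # free-space-like path loss; indoors it is often 3-4

def energy_per_bit(distance_m: float, hops: int) -> float:
    """Total energy to move one bit across distance_m using equal-length hops."""
    hop_distance = distance_m / hops
    per_hop = E_ELEC_J_PER_BIT + AMP_J_PER_BIT_M2 * hop_distance ** PATH_LOSS_EXP
    return hops * per_hop

distance = 100.0  # hypothetical distance from sensor to router, in meters
for hops in (1, 2, 4):
    print(f"{hops} hop(s): {energy_per_bit(distance, hops) * 1e9:.1f} nJ per bit")
# With these numbers: 1 hop ~1050 nJ, 2 hops ~600 nJ, 4 hops ~450 nJ.
# Shorter hops save amplifier energy even after paying the per-hop overhead.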

SP: What will this ultimately mean for the rest of us?

JK: The increase in the efficiency of computing is what allows these cheap wireless sensors to exist. And with the advent of these sensors you'll be able to put many, many more sensors everywhere. That means we'll have very fine-grained information about how a building is operating, the details of temperature and airflow in each room. If someone is uncomfortable, you can identify why they're uncomfortable and then modify the system to deliver the energy services that help make that person comfortable.

SP: This is part of the whole big data movement?

JK: A lot of people talk about big data. My friend Erik Brynjolfsson at MIT likes to talk instead about nano-data. By nano-data he means data that describes in detail characteristics of a person, or a transaction, or an information transfer. And those detailed data turn out to be really important when you’re trying to deliver value to customers. If you know a lot about the characteristics of the current service that people are getting from some product you can then design products that are even better. But to do this you need data at a really disaggregated level, and the key to getting such data is having really inexpensive sensors.

Here's a good example that might help people think about the power of this trend. Around the year 1600 people created the first microscopes. That gave us visibility into the world of microscopic organisms in a way that wasn't possible before. What this new technology does is give us visibility into human institutions and technological systems in a way that we never had before. We think we've seen big data now; just wait until there are billions and trillions of individual sensor nodes everywhere. And that means we're going to have to be innovative in developing software to help us turn these data into meaningful information.

SP: Well, that's just it. What are we going to do with all the collected data? Our limitation could be ourselves.

JK: Right, so people are going to need tools to help understand this explosion of data. And that’s where I think a lot of these opportunities are going to be. Lots of people can create mobile sensors, but there’s a relatively small number of people who are good at turning numbers into knowledge. And those are skills we need a whole lot more of.

This post was originally published on Smartplanet.com
