I chatted with Steve Lohr of the NY Times yesterday about the implications of the last six decades of progress in computing efficiency, and his blog post today reflects our conversation nicely. He also mentioned my new book, Cold Cash, Cool Climate: Science-Based Advice for Ecological Entrepreneurs (to be released on February 15, 2012), which gives some examples of why those trends are so powerful and important.
The new gadgetry at the International Consumer Electronics Show this week owes a lot to the crisp articulation of ever-increasing computer performance known as Moore’s Law. First proclaimed in 1965 by Intel’s co-founder Gordon Moore, it says that the number of transistors that can be put on a microchip doubles about every two years.
But a new descriptive formulation that focuses on energy use seems especially apt these days. So much of the excitement and product innovation today centers on battery-powered, mobile computing — smartphones, tablets, and a host of devices based on digital sensors, like personal health monitors that track vital signs and calorie-burn rates. And the impact of low-power sensor-based computing is evident well beyond the consumer market.
The trend in energy efficiency that has opened the door to the increasing spread of mobile computing is being called Koomey’s Law. It states that the amount of power needed to perform a computing task will fall by half every one and a half years.
The description of improving energy efficiency was the conclusion of an analysis published last year in the IEEE Annals of the History of Computing, with the title “Implications of Historical Trends in the Electrical Efficiency of Computing.” (An early draft [PDF] of the paper is here.) Jonathan G. Koomey, a consulting professor at Stanford University, was the lead author. His collaborators were three other scientists — Stephen Berard of Microsoft, Marla Sanchez of Carnegie Mellon University, and Henry Wong of Intel. (Mr. Koomey did not use the term “Koomey’s Law,” but others have.)
Like Moore’s Law, the significance of Koomey’s Law is more as an influential observation than a scientific discovery. Both are concepts that credibly measure what has happened and what is possible with investment and effort.
Last Halloween (October 31, 2011) I gave a talk on the long-term trends in the efficiency of computing at Stanford, and I’m finally getting around to posting the link.
The EPA today announced stricter rules on mercury emissions from power plants, which is an important development for those interested in greenhouse gas emissions. That’s because many of the older coal plants have no pollution controls and have social costs much higher than the value of the electricity they generate. It’s long past time for these plants to retire. And it turns out that there’s plenty of spare natural gas-fired generation capacity to pick up the slack, so CO2 emissions from these plants will go down a lot.
“About 15% of existing US coal plants (about 50 GW out of 300 GW total) are old, inefficient, polluting plants that were grandfathered under the Clean Air Act, so they have few or no pollution controls.[1] More than half of US coal plants are 35 years of age or older.[2] The total social cost of running many of these plants is higher than the cost of alternative ways of supplying that electricity (even without counting the damages from greenhouse gas emissions),[3] so they represent an obsolete capital stock from society’s perspective. The most effective action we as a society can take would be to enforce existing environmental regulations, develop new ones (as the US EPA is now considering for mercury, mining, and other environmental issues), and charge these plants the full social cost of the damages they inflict upon us, which would double the cost per kWh of existing coal-fired plants even using low estimates of pollution costs. This will force lots of old polluting coal plants to retire, many others to reduce their hours of operation, generate lots of economic benefits in reduced health costs, give a boost to coal’s competitors, and reduce greenhouse gas emissions, so it’s a win all the way around.”
[2] See Figure 5-6 in Lovins, Amory B., Mathias Bell, Lionel Bony, Albert Chan, Stephen Doig, Nathan J. Glasgow, Lena Hansen, Virginia Lacy, Eric Maurer, Jesse Morris, James Newcomb, Greg Rucks, and Caroline Traube. 2011. Reinventing Fire: Bold Business Solutions for the New Energy Era. White River Junction, VT: Chelsea Green Publishing, p. 175.
[3] For details, see Muller, Nicholas Z., Robert Mendelsohn, and William Nordhaus. 2011. “Environmental Accounting for Pollution in the United States Economy.” American Economic Review. vol. 101, no. 5. August. pp. 1649–1675, and Epstein, Paul R., Jonathan J. Buonocore, Kevin Eckerle, Michael Hendryx, Benjamin M. Stout III, Richard Heinberg, Richard W. Clapp, Beverly May, Nancy L. Reinhart, Melissa M. Ahern, Samir K. Doshi, and Leslie Glustrom. 2011. “Full cost accounting for the life cycle of coal.” Annals of the New York Academy of Sciences. vol. 1219, no. 1. February 17. pp. 73–98. [http://dx.doi.org/10.1111/j.1749-6632.2010.05890.x]
The performance of electronic computers has shown remarkable and steady growth over the past 60 years, a finding that is not surprising to anyone with even a passing familiarity with computing technology. What most folks don’t know, however, is that the electrical efficiency of computing (the number of computations that can be completed per kilowatt-hour of electricity) has doubled about every one and a half years since the dawn of the computer age (See Figure 6-1).[1] The existence of laptop computers, cellular phones, and personal digital assistants was enabled by these trends, which presage continuing rapid reductions in the power consumed by battery-powered computing devices, accompanied by new and varied applications for mobile computing, sensors, wireless communications and controls.
The most important future effect of these trends is that the power needed to perform a task requiring a fixed number of computations will fall by half every 1.5 years, enabling mobile devices performing such tasks to become smaller and less power consuming, and making many more mobile computing applications feasible. Alternatively, the performance of some mobile devices will continue to double every 1.5 years while maintaining the same battery life (assuming battery capacity doesn’t improve). These two scenarios define the range of possibilities. Some applications (like laptop computers) will likely tend towards the latter scenario, while others (like mobile sensors and controls) will take advantage of increased efficiency to become less power hungry and more ubiquitous.
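To make the arithmetic concrete, here’s a minimal sketch in Python of the two bounding scenarios (illustrative only; it assumes a clean 1.5-year doubling time, while real-world progress is of course lumpier):

```python
# Illustrative sketch of the two bounding scenarios, assuming a clean
# doubling of computations per kWh every 1.5 years (the historical average).

DOUBLING_TIME_YEARS = 1.5

def efficiency_gain(years):
    """Multiplier on computations per kWh after `years` have elapsed."""
    return 2 ** (years / DOUBLING_TIME_YEARS)

def power_for_fixed_task(initial_watts, years):
    """Scenario 1: power needed for a fixed computing task falls over time."""
    return initial_watts / efficiency_gain(years)

def performance_at_fixed_power(initial_performance, years):
    """Scenario 2: performance grows at constant power (same battery life)."""
    return initial_performance * efficiency_gain(years)

# After a decade, a fixed task needs about 1% of the original power...
print(power_for_fixed_task(1.0, 10))        # ~0.0098 W
# ...or the same 1 W power budget delivers ~100x the performance.
print(performance_at_fixed_power(1.0, 10))  # ~101.6x
```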
These technologies will allow us to better match energy services demanded with energy services supplied, and vastly increase our ability to collect and use data in real time. They will also help us minimize the energy use and emissions from accomplishing human goals, a technical capability that we sorely need if we are to combat climate change in any serious way. The future environmental implications of these trends are profound and only just now beginning to be understood.[2]
As one of many examples of what is becoming possible using ultra low power computing, consider the wireless no-battery sensors created by Joshua R. Smith of Intel and the University of Washington.[3] These sensors scavenge energy from stray television and radio signals, and they use so little power (60 microwatts in this example) that they don’t need any other power source. Stray light, motion, or heat can also be converted to meet slightly higher power needs, perhaps measured in milliwatts.
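To get a feel for how little power 60 microwatts is, here’s a rough power-budget sketch (the coin-cell figures are my own illustrative assumptions, not specs from the actual devices):

```python
# Rough power-budget arithmetic for an ultra low power sensor.
# The 60 microwatt draw is from the example above; the coin-cell
# capacity is a typical CR2032 figure, used here as an assumption.

sensor_power_watts = 60e-6  # 60 microwatts, per the example above
coin_cell_wh = 0.225 * 3.0  # ~225 mAh at 3 V = ~0.675 Wh

hours = coin_cell_wh / sensor_power_watts
print(f"A single coin cell would last ~{hours / (24 * 365):.1f} years")  # ~1.3

# At these power levels, scavenged ambient RF (tens of microwatts in
# favorable conditions) can cover the entire budget, so no battery is needed.
```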
The contours of this exciting design space are only beginning to be explored. Imagine wireless temperature, humidity, or pollution sensors that are powered by ambient energy flows, send information over wireless networks, and are so cheap and small that thousands can be installed where needed. Imagine sensors scattered throughout a factory so pollutant or materials leaks can be pinpointed rapidly and precisely. Imagine sensors spread over vast areas of glacial ice, measuring motion, temperature, and ambient solar insolation at very fine geographical resolution. Imagine tiny sensors inside products that tell consumers if temperatures while in transport and storage have been within a safe range. Imagine a solar powered outdoor trash can/compactor that notifies the dispatcher when it is full, thus saving truck trips (no need to imagine this one, it’s real[4]). In short, these trends in computing will help us lower greenhouse gas emissions and allow vastly more efficient use of resources.
[1] Koomey, Jonathan G., Stephen Berard, Marla Sanchez, and Henry Wong. 2011. “Implications of Historical Trends in the Electrical Efficiency of Computing.” IEEE Annals of the History of Computing. vol. 33, no. 3. July-September. pp. 46–54. [http://doi.ieeecomputersociety.org/10.1109/MAHC.2010.28]
[2] Greene, Kate. 2011. “A New and Improved Moore’s Law.” In Technology Review. September 12. [http://www.technologyreview.com/computing/38548/?p1=A1]
“A deeper law than Moore’s?” In The Economist. October 10, 2011. [http://www.economist.com/blogs/dailychart/2011/10/computing-power]
[3] Eisenberg, Anne. 2010. “Bye-Bye Batteries: Radio Waves as a Low-Power Source.” The New York Times. New York, NY. July 18. p. BU3. [http://www.nytimes.com/2010/07/18/business/18novel.html]
Steve Lohr wrote a great article for the NY Times today titled “The Internet Gets Physical”, where he explores what he thinks is the next big thing (and I think he’s right). The article states:
“…the protean Internet technologies of computing and communications are rapidly spreading beyond the lucrative consumer bailiwick. Low-cost sensors, clever software and advancing computer firepower are opening the door to new uses in energy conservation, transportation, health care and food distribution. The consumer Internet can be seen as the warm-up act for these technologies.”
Internet watchers are just now waking up to this new potential, which is driven by trends in the efficiency of computing that we identified in our recent paper in the IEEE Annals of the History of Computing (Koomey et al. 2011). The electrical efficiency of computing (the number of computations that can be completed per kilowatt-hour of electricity) has doubled about every one and a half years since the dawn of the computer age, so that the power needed to perform a task requiring a fixed number of computations will fall by half every 1.5 years. Devices performing such tasks can thus become smaller and less power consuming, making many more mobile computing applications feasible.
These technologies will allow us to better match energy services demanded with energy services supplied, and vastly increase our ability to collect and use data in real time. They will also help us minimize the energy use and emissions from accomplishing human goals, a technical capability that we sorely need if we are to combat climate change in any serious way. The future environmental implications of these trends are profound and only just now beginning to be understood (Greene 2011, The Economist 2011).
If you know of specific examples of innovations in low power computing, sensors, and controls, I’m eager to hear about them, as I’m starting to think about how to describe these trends for a broader audience. So send me email!
Climate Progress points to a Yale University study on adoption of photovoltaics (PVs) in residences, in which the time lag between installations falls as the number of installations increases. The authors call this a “peer” effect, in which a greater concentration of PV panels makes it even more likely that neighbors will also install PVs.
This is a specific example of what in the economics literature is called “increasing returns to scale”. There are many different forms of this effect, including economies of scale, network externalities, learning by doing, and zero marginal costs for reproducing information (by using information technology). For those interested in carbon mitigation opportunities, this effect is critical, but it is omitted by assumption from virtually all computable general equilibrium models, because including it would result in path dependence and multiple possible end-points for a given starting point. The real world is full of such effects, and that’s one reason why conventional economic assessments of the costs of reducing carbon emissions almost invariably overestimate the costs of taking action. I will have a lot more to say about increasing returns in upcoming posts.
The NYTimes.com article yesterday on the deluge of data from DNA sequencing raised a couple of interesting issues for me.
Here’s one important item I noticed:
“The cost of sequencing a human genome — all three billion bases of DNA in a set of human chromosomes — plunged to $10,500 last July from $8.9 million in July 2007, according to the National Human Genome Research Institute.
“That is a decline by a factor of more than 800 over four years. By contrast, computing costs would have dropped by perhaps a factor of four in that time span.”
This example highlights an important point: the cost to perform computations is driven by more than Moore’s law. It’s also a function of our cleverness in designing efficient algorithms and characterizing problems in the most effective ways, and that kind of cleverness can lead to much more rapid improvements in our ability to do useful computations than just the trends in raw computing horsepower would indicate.
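Here’s a quick back-of-the-envelope check on those factors, using the round numbers quoted above (the two-year doubling time for computing cost-performance is the assumption implied by the article’s “factor of four”):

```python
# Back-of-the-envelope check on the cost-decline factors quoted above.

sequencing_2007 = 8.9e6    # dollars per genome, July 2007
sequencing_2011 = 10500.0  # dollars per genome, July 2011
years = 4.0

seq_factor = sequencing_2007 / sequencing_2011
print(f"Sequencing cost fell ~{seq_factor:.0f}x")  # ~848x, "more than 800"

# Computing cost-performance over the same span, assuming a two-year
# doubling time (the assumption implied by the article's "factor of four"):
compute_factor = 2 ** (years / 2.0)
print(f"Computing improved ~{compute_factor:.0f}x")  # 4x
```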
Now on to my second point. The big constraint in DNA research is fast becoming our ability to make sense of the voluminous data being generated by the new sequencing machines, and that takes human thinking; it’s not just a computational task. Just as in many other areas that are likely to see an explosion in data generation (caused by the revolution in ultra low power mobile information technology), there will be big opportunities for those who can combine careful critical thinking with information technology to sort through vast piles of data and help people generate actionable information. This is also one of the conclusions of the recently released ebook by Brynjolfsson and McAfee titled “Race Against the Machine”, which I highly recommend.
The Canadian Broadcasting Corporation just posted my interview for their “Spark” radio show, which is “an ongoing conversation about technology and culture, hosted by Nora Young”. It talks about our work on trends in the energy efficiency of computing over the past six decades, which I wrote about here and here.
Unfortunately, the tagline from the announcer at the end incorrectly indicates that I called the long-term trends (a doubling of energy efficiency every year and a half) “Koomey’s law”, instead of noting that it was MIT’s Technology Review that popularized the term in their recent article. The first person to use the term publicly was Max Henrion of Lumina Decision Systems, at a talk he gave at the Uptime Institute Symposium in 2010. I’ve asked the producer to correct that in the web version. Ah well…
This interview followed a news piece reporting on why many companies are considering building data centers in Colorado Springs. The interviewer asked some good questions and the discussion illuminates some important aspects of data centers and electricity use. It’s a good non-technical introduction to these issues. Listen to it here.
“A New and Improved Moore’s Law: Under ‘Koomey’s law,’ it’s efficiency, not power, that doubles every year and a half or so”
"Researchers have, for the first time, shown that the energy efficiency of computers doubles roughly every 18 months.
The conclusion, backed up by six decades of data, mirrors Moore’s law, the observation from Intel founder Gordon Moore that computer processing power doubles about every 18 months. But the power-consumption trend might have even greater relevance than Moore’s law as battery-powered devices—phones, tablets, and sensors—proliferate.”
The reference for the actual article is below.
Koomey, Jonathan G., Stephen Berard, Marla Sanchez, and Henry Wong. 2011. “Implications of Historical Trends in the Electrical Efficiency of Computing.” IEEE Annals of the History of Computing. vol. 33, no. 3. July-September. pp. 46–54. [http://doi.ieeecomputersociety.org/10.1109/MAHC.2010.28]
Subscription is required, but I can send a pre-publication version if you email me.
This article describes long-term trends in the electrical efficiency of computation that enabled the creation of laptops and other mobile computing devices. If these trends continue (and we have every reason to believe that they will) they presage continued rapid improvements in battery powered computers, sensors, and controls.
The electrical efficiency of computation (measured in computations per kilowatt-hour, or kWh) grew about as fast as performance for desktop computers starting in 1975, doubling every 1.5 years, a pace of change comparable to that of the whole period from 1946 to the present. Computations per kWh grew even more rapidly during the vacuum tube computing era and during the transition from tubes to transistors, but more slowly during the era of discrete transistors. In 1985, Richard Feynman identified a possible theoretical improvement of a factor of one hundred billion (10^11) in the electricity used per computation. Since that time, computations per kWh have increased by less than five orders of magnitude, leaving significant headroom for continued improvements.
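A back-of-the-envelope calculation shows how much room is left (approximate figures from the paragraph above):

```python
import math

# Approximate headroom remaining, using the figures cited above.
theoretical_factor = 1e11  # Feynman's 1985 estimate of possible improvement
realized_factor = 1e5      # rough (upper-bound) gain in computations/kWh since 1985

headroom = theoretical_factor / realized_factor  # at least ~1e6x remaining
doublings_left = math.log2(headroom)             # ~20 doublings
years_at_trend = doublings_left * 1.5            # ~30 years

print(f"~{headroom:.0e}x headroom, ~{doublings_left:.0f} doublings, "
      f"~{years_at_trend:.0f} years at the historical pace")
```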
My interview on Intel’s Chip Chat was just posted. It’s about 7 minutes long, and we talked about my recent study on electricity used by data centers. I think it’s a good introduction to the topic, so please forward to interested colleagues as you see fit.
This week Google announced more details on their efficiency, electricity use, and carbon emissions. This is a big deal because it will give much of the rest of the industry efficiency targets to which they can aspire. Most of the other cloud computing companies have figured out clever tricks to improve their efficiency, but it’s the “in-house” data centers (the ones owned and operated by companies whose primary business is not computing) that have the most to learn from these announcements (and from Facebook’s announcement of the Open Compute Project a while back).
Google’s announcement is also important because it puts real numbers on how much electricity is actually used for a search or the download of a YouTube video. Some of you may recall the little dustup about whether a Google search uses as much energy as boiling a pot of tea (it doesn’t, as Evan Mills and I documented here). But the biggest story is not about direct electricity use; it’s about the efficiency improvements in other energy uses enabled by the electricity used by data centers and other information technology equipment. As I describe in my recent report, the world’s data centers account for roughly 1.3% of global electricity use, but they help us use the other 98.7% of that electricity (as well as most of the rest of our energy) a whole lot more efficiently.
Google’s announcements also confirm the points I made in my post on why cloud computing is more efficient, and the large savings from cloud computing estimated by WSP Environment and Energy when analyzing salesforce.com’s operations. Cloud computing will continue to pressure “in-house” data center operations because costs in the cloud are so much lower, a result driven significantly by much greater energy efficiency and equipment utilization.
There have been some helpful summaries exploring these announcements. The article at Data Center Dynamics was particularly interesting because it gives detail on the techniques Google uses to achieve high efficiency. Katie Fehrenbacher at GigaOm also did a nice job in one of her articles of putting the announcements in a larger context, as well as giving some details in another article about the relative efficiency of Gmail compared to “in-house” email hosting.
Vivek Kundra, formerly the Chief Information Officer in the Obama Administration, argued today in a NYT op-ed that the economic benefits of cloud computing for government agencies will encourage more and more of them to adopt this innovation instead of running their own IT facilities (except in special cases). As I explained here, these economic benefits are driven in large part by the greater energy efficiency of cloud computing facilities, and they are large enough to encourage people to work out the non-trivial security, legal, and other issues with shifting computing to the cloud.
When people think of Lawrence Berkeley National Laboratory, where I worked for more than two decades, they often think of huge supercomputers and really smart computer scientists, which are two hallmarks of that institution. LBNL has no shortage of people who know computing, but even that pinnacle of computing innovation decided in the last few years to shift its email, calendar, and other routine computing services to the cloud. That to me says that even very technically sophisticated institutions have good reasons for shifting some of their computing services to the cloud, and those reasons will only become more numerous and compelling as the years progress.
“The debt was $10.626 trillion on the day Mr. Obama took office. The latest calculation from Treasury shows the debt has now hit $14.639 trillion.
It’s the most rapid increase in the debt under any U.S. president.
The national debt increased $4.9 trillion during the eight-year presidency of George W. Bush. The debt now is rising at a pace to surpass that amount during Mr. Obama’s four-year term.”
For technical reasons, the real number as calculated by CBS News should have been just the debt owed by the US government to outside creditors, excluding the debt the government owes to itself for Social Security, and that’s about $3.7T.
But that’s not the big problem in these claims.
First, as TPM points out, the relevant metric is the size of the debt relative to the size of the GDP, not the absolute nominal amount of debt.
Second, there’s the question of causality. All statisticians know that assigning causality is often difficult, but that’s really the crux of the matter. If you can assign causality accurately you’ve got the gold standard of proof. And in that regard this report is deeply flawed.
The report implies that Obama’s policies are to blame for this increase in debt, and certain of his policies did contribute, but they are small compared to the bigger structural problems he inherited. For example, one can fairly assign Obama responsibility for the stimulus package in 2009, the bailout of the auto companies, the end-of-2010 tax deal (which extended the Bush tax cuts), and the costs of the war in Afghanistan in 2009 and 2010 (which he argued in support of during the campaign, and which he continued and expanded).
But Obama inherited an economy in freefall, and it turned out to be significantly worse than what the Congressional Budget Office initially projected in early 2009. He also inherited the Iraq war, which he strenuously argued against and which almost certainly wouldn’t have happened had he been president (he wound it down as quickly as he could, but those things take time to do responsibly). He inherited TARP (which has largely been paid back) and the AIG bailout (which will still cost us). He inherited Medicare Part D, which was not paid for. He also inherited the Bush tax cuts, which were slated to expire at the end of 2010 (and also weren’t paid for). Surely President Obama shouldn’t be assigned responsibility for the deficits induced by the tax cuts (except post-2010, when he extended the cuts in a deal with Republicans, but even then, he got some things in return for that deal that need to be weighed against those costs). I also suspect that some of this increase in debt reflects honest accounting of the costs of the Iraq and Afghan wars, which were treated by the previous administration as “off budget” (perhaps someone with more knowledge of the budget process can illuminate that for us).
And remember that when Obama came into office in January 2009, the government was operating under budgets approved by the previous Congress and administration, so it’s not reasonable to hold Obama responsible for the debt incurred in the first month or two of his administration. He’s also clearly not responsible for the unemployment figures in February 2009, which were the worst in 25 years. At some point (probably starting in May or June of 2009) he can reasonably be assigned responsibility, but it’s not reasonable to argue that events in the first couple of months of his term were caused by his policies, because those policies hadn’t been implemented yet. This is the same argument Ronald Reagan made when he came into office; it was correct then and it is correct now, as any fair-minded observer must admit.
So whatever your political persuasion might be, there is a reasonable and fair-minded way to look at this issue, and by any measure, the idea that President Obama’s policies added to the debt in an unprecedented way is simply false (for a text and graphical presentation of that result, see this Wikipedia entry on the national debt and a graph from the NYT based on Congressional Budget Office data). Those who imply otherwise are arguing against the laws of arithmetic, and they just embarrass themselves by doing so. One can reasonably argue against some of the President’s policies, but disputing basic math and confusing the elementary assignment of causality is something I’d expect from middle school students, not our elected representatives (but of course, I’m an optimist).
Finally, there is a separate question of what the debt is being used to do. Not all debt is created equal. Debt used to finance investments (like improved infrastructure, research and development (R&D), or education) is very different from debt used to fund consumption (like wars or consumer spending). More debt to fund infrastructure that will yield returns to the economy in faster shipping or travel times is clearly worth taking on when the private sector won’t fund these things. The same goes for educating the populace to do more skilled jobs and for funding basic R&D. The justification for such investments is well established in economics, because they yield “public goods”, which the private sector will underinvest in (because no single actor can exclude others from benefiting from those investments).
Distinguishing between the costs and benefits of debt used for different purposes may be too much to ask (alas), but at least let’s agree that the laws of arithmetic are valid and that facts are facts. My granddad used to say that if his employees sent him budgets where 2 + 2 didn’t equal 4, he’d send them back until things added up, and that’s what we should do with this CBS report and the people trying to make political hay with it.
Addendum October 11, 2011: I just saw this nice chart summarizing the contribution of the various Bush administration policies to the national debt, which reinforces the arguments I made above.
Addendum October 21, 2011: Talking Points Memo posted a wonderful chart of the sources of US government revenue over the past 60 years, which I repost below. It shows significant declines in corporate and excise taxes, and an increase in payroll taxes. The explanation of the graph is here.
This is an outrage to data lovers everywhere: the Statistical Abstract of the US (as well as other statistical compendia created by the Census Bureau) is on the chopping block and will be eliminated after the 2012 edition is published. Robert J. Samuelson, economics columnist for the Washington Post, writes about this travesty here. This is just ridiculous. The Stat. Ab. is an essential data book for US researchers, and it can’t be replaced by other sources. Write to Congress, folks!