A NY Times article appeared today on the effect of the new lighting efficiency regulations, and it describes the complexity facing consumers as new technologies enter the market. What the article made me realize is that the advent of the new efficiency standards is a real-world example of the “Porter hypothesis” in action. Michael Porter at Harvard Business School has postulated that strict environmental (or energy efficiency) regulations can, under certain conditions, lead to increased innovation. This hypothesis is contrary to what I call the simpleminded Econ 101 view of the world, in which regulations are invariably less efficient than pricing mechanisms at inducing innovation. In any case, these particular regulations have clearly induced innovation in an industry that has been famously slow to change. Having old, depreciated plants churning out incandescent bulbs is extremely profitable, so the lighting companies had little incentive to innovate except in certain niche markets driven by other efficiency policies (like utility programs promoting efficient lighting in California).
I’ll say more about the high-level implications of such regulations in another post, but I wanted to describe our own experience with new lighting technologies in our new house. The house has a lot of ceiling cans, and the previous owner had used CFLs in most of them. The decorative parts of the cans were original equipment, about 12 years old, so they looked dingy and needed to be replaced. Our contractor said that would cost about $20 per can. He brought us a new LED downlight and said it cost $50 and could quickly be installed in every can. The LEDs use 11W and give off as much light as a 60W incandescent, but they seemed much brighter to us than a typical bulb because they are directional. They come on very rapidly and work fine on dimmers (for some reason they don’t come on quite as quickly when dimmed, but at full brightness they turn on about as fast as incandescents). We tried one in our old house to see whether the color rendition differed much from an incandescent bulb, and we couldn’t see any difference. My wife even gave them her seal of approval (she hates CFLs and would never have allowed them except in a few fixtures). Finally, the long life of the LEDs (35,000 hours, or roughly 32 years at 3 hours/day of use) was really important to us because the new house has high ceilings, and with 48 ceiling cans that’s a lot of trips up and down a ladder to replace burned-out incandescents. I had no interest in wasting my time doing that, so the LEDs were just the ticket. And we would have had to spend $20 per can anyway to replace the decorative parts, so the economics were a lot better than in some other applications.
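For readers who want to check the arithmetic, here’s a minimal back-of-the-envelope sketch. The wattages, rated life, and number of cans come from the paragraph above; the hours of use per day, electricity price, and incandescent bulb life are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope LED vs. incandescent comparison for the ceiling cans.
# Figures from the post: 11 W LED replacing a 60 W incandescent, 35,000 hour
# rated life, 48 cans. Hours of use, electricity price, and incandescent bulb
# life are illustrative assumptions, not measurements.

HOURS_PER_DAY = 3.0          # assumed daily use
PRICE_PER_KWH = 0.15         # assumed electricity price, $/kWh
LED_W, INC_W = 11, 60        # watts per lamp
LED_LIFE_HOURS = 35_000      # rated LED life
INC_LIFE_HOURS = 1_000       # typical incandescent life (assumption)
CANS = 48

annual_hours = HOURS_PER_DAY * 365
led_years = LED_LIFE_HOURS / annual_hours
annual_kwh_saved_per_can = (INC_W - LED_W) * annual_hours / 1000
annual_savings = annual_kwh_saved_per_can * PRICE_PER_KWH * CANS
bulb_changes_avoided = CANS * annual_hours / INC_LIFE_HOURS  # per year

print(f"LED life at {HOURS_PER_DAY:.0f} h/day: about {led_years:.0f} years")
print(f"Energy savings, whole house: about ${annual_savings:.0f}/year")
print(f"Incandescent replacements avoided: about {bulb_changes_avoided:.0f} bulbs/year")
```

Under these assumptions the house avoids roughly fifty bulb changes a year, which is the ladder-trip argument in numbers.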
Here’s a link to the latest version of the lights we chose (updated July 2013). Definitely stick with major manufacturers for new technologies like LEDs. We had one out of 48 LEDs go bad on us, but that one was promptly returned for an exchange. We were told that the newer LEDs have a square LED active surface, whereas most of the older ones use individual high intensity LEDs that are shaped like the ones on your stereo (but are of course much more luminous). Go for the square LED active surface.
Finally, a note to people who are upset by the changes induced by the new regulations: you’ll still get to buy incandescents, they’ll just be a lot more efficient. Eventually LEDs will sweep the other technologies away; I’d expect that to happen in 5 to 10 years, but we’ll see. The main lesson from past rapid adoption of efficient technologies is that they need to be better than what they replace to gain wide acceptance. That’s why the good light quality of LEDs, combined with their much longer lifetime and lower heat output, makes them such a powerful competitor to standard bulbs (as well as to CFLs). And as with most electronic devices, prices pretty much always go down, and we’re nowhere near the theoretical limits of efficiency for lighting, so more progress lies ahead. Exciting times!
I just released my new study on data center electricity use in 2010. I did the research as an exclusive for the New York Times, and John Markoff at the Times wrote an article on it that will appear in the print paper August 1, 2011. You can download the new report here.
Key findings:
• Assuming that the midpoint between the Upper and Lower bound cases accurately reflects the history, electricity used by data centers worldwide increased by about 56% from 2005 to 2010 instead of doubling (as it did from 2000 to 2005), while in the US it increased by about 36% instead of doubling.
• Electricity used in global data centers in 2010 likely accounted for between 1.1% and 1.5% of total electricity use. For the US that number was between 1.7% and 2.2%.
• Electricity used in US data centers in 2010 was significantly lower than predicted by the EPA’s 2007 report to Congress on data centers. That result reflects the lower electricity growth rates found in this study compared to earlier estimates, driven mainly by a smaller installed base of servers than was earlier predicted rather than by the efficiency improvements anticipated in the report to Congress.
• While Google is a high profile user of computer servers, less than 1% of electricity used by data centers worldwide was attributable to that company’s data center operations. To my knowledge, this is the first time that Google has revealed specific details about their total data center electricity use (they gave me an upper bound, not an exact number, but something is better than nothing!).
In summary, the rapid rates of growth in data center electricity use that prevailed from 2000 to 2005 slowed significantly from 2005 to 2010, yielding total electricity use by data centers in 2010 of about 1.3% of all electricity use for the world, and 2% of all electricity use for the US.
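To translate those five-year totals into annual rates, here’s a quick sketch. The 56% and 36% figures are from the findings above; treating the growth as compound is my simplification.

```python
# Convert five-year growth in data center electricity use to compound annual
# rates. The 56% (world) and 36% (US) figures are from the study's midpoint
# case; a literal doubling is shown for comparison with 2000-2005.

def cagr(total_growth_fraction, years=5):
    """Compound annual growth rate implied by total growth over `years`."""
    return (1 + total_growth_fraction) ** (1 / years) - 1

for label, growth in [("World 2005-2010", 0.56),
                      ("US 2005-2010", 0.36),
                      ("Doubling (2000-2005 pattern)", 1.00)]:
    print(f"{label}: {cagr(growth):.1%} per year")
```

That works out to roughly 9% per year worldwide and 6% per year in the US, versus about 15% per year for a doubling.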
The new study is a follow-on to my 2008 article:
Koomey, Jonathan. 2008. “Worldwide electricity used in data centers.” Environmental Research Letters. vol. 3, no. 034008. September 23. <http://stacks.iop.org/1748-9326/3/034008>
Suggested citation: Koomey, Jonathan. 2011. Growth in Data Center Electricity Use 2005 to 2010. Oakland, CA: Analytics Press. August 1. <http://www.analyticspress.com/datacenters.html>
I can think of four reasons why cloud computing is (with few exceptions) significantly more energy efficient than using in-house data centers:
1) Economies of scale: It’s cheaper for the bigger cloud computing providers to make efficiency improvements because they can spread the costs over a larger server base and can afford to have more dedicated people focused on efficiency. For example, there are usually significant fixed costs in implementing simple techniques to improve Power Usage Effectiveness (PUE), like the cost of doing an equipment inventory and an assessment of data center airflow (the same is true for institutional changes like charging users per kW instead of per square foot of floor area). Whenever costs are substantially fixed (i.e., only weakly related to the size of the facility), bigger operations have an advantage because they can spread those costs over more transactions, equipment, or floor area. There’s also a substantial advantage to having “in-house” expertise devoted to efficiency, instead of staff split between different jobs–technology changes so rapidly that it’s hard for people not devoted to efficiency to keep up as well as those who are.
2) Diversity and aggregation: More users, more diverse users, and users in more places mean computing loads are spread over the day, allowing for increased equipment utilization. Typical in-house data centers have server utilizations of 5-15% (and sometimes much less), whereas cloud facilities run by major vendors are more in the 30-40% range (a rough numerical sketch of what this difference means for energy use follows this list).
3) Flexibility: Cloud installations use virtualization and other techniques to separate the software from the characteristics of physical servers (some call this “abstraction of physical from virtual layers”). This sounds like a great thing for software and total costs, but why is it an energy issue? Using this technique means that you can redesign servers to optimize them and drop certain energy costly features. For example, if software can route around physical servers that die, you no longer need to have two power supplies in each server–the death of any one particular server doesn’t matter to the delivery of IT services. In essence, this technique redefines the concept of reliability from one that is based on the reliability of a particular piece of hardware to one that is based on the reliability of the delivery of the IT services of interest, and this is a much more sensible approach.
4) Ease of sidestepping organizational issues instead of having to address them head on (which is hard and slow): For example, the problem of IT driving server purchases but facilities paying the electric bill is still a big issue for most in-house facilities, but it has largely been solved for the cloud providers (they generally have one data center budget and clear responsibilities assigned to one person with decision making authority). Economies of scale are more powerful in this scenario, because you’ve gotten rid of the impediments to taking action and can allow those economies to work their magic. Finally, it’s much easier and cheaper for people stuck with the in-house organizations to use a credit card to buy cloud services instead of waiting around for their internal IT organization to get its act together.
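As promised in point 2, here is a rough sketch of how the utilization and overhead differences compound. The utilization ranges are the ones cited above; the PUE values (about 2.0 for a typical in-house facility and about 1.15 for a leading cloud provider) and the assumption that server power is roughly independent of load are simplifications for illustration, not measurements.

```python
# Rough comparison of energy per unit of useful computation, in-house vs. cloud.
# Utilization ranges (5-15% in-house, 30-40% cloud) are from point 2 above.
# PUE values are illustrative assumptions: ~2.0 for a typical in-house facility
# and ~1.15 for a leading cloud provider. Server power is treated as roughly
# independent of load, which is a simplification.

def energy_per_unit_work(utilization, pue, server_kw=0.3):
    """Facility kWh per unit of delivered computation (arbitrary units)."""
    useful_work = utilization            # work delivered scales with utilization
    facility_kw = server_kw * pue        # server power plus cooling/power overhead
    return facility_kw / useful_work

in_house = energy_per_unit_work(utilization=0.10, pue=2.0)
cloud = energy_per_unit_work(utilization=0.35, pue=1.15)
print(f"In-house energy per unit of work is ~{in_house / cloud:.1f}x the cloud's")
```

Under these assumptions the in-house facility uses roughly six times as much energy per unit of useful computation, which is why the economics below are so compelling.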
These big energy advantages will over time translate into more and more pressure for companies to adopt cloud services, because the economic advantages (driven by the energy advantages) are so large. And it’s not just energy costs, it’s the capital cost of all the supporting equipment, which in a standard in-house facility can be $25,000/kW and (together with the energy costs) add up to half or more of the total costs of the facility (for details see Koomey, Jonathan G., Christian Belady, Michael Patterson, Anthony Santos, and Klaus-Dieter Lange. 2009. Assessing trends over time in performance, costs, and energy use for servers. Oakland, CA: Analytics Press. August 17.)
Of course, there are still issues to work out. For example, people haven’t really ironed out the complexities about liability for cloud outages. And there will always be providers who will want to have their own in-house facilities for security reasons (like big financial institutions). But even in that case, the benefits of a virtualized cloud infrastructure can be brought to the in-house facilities. You won’t get the same diversity but the other benefits of cloud will still be powerful. I’ve also heard of companies creating “private clouds” for use by other companies that pay in to use them on a “members-only” basis, thus dealing with the diversity and security issues. So things are evolving rapidly, but the economic benefits are so large that we’ll see a whole lot more cloud computing in coming years.
Gary Gutting just posted a well-reasoned article about how to think about the expert consensus in the climate debate. He begins by noting that logical claims based on authority only have standing if there is a recognized means of determining who is an expert and who isn’t (see also Chapter 14 in Turning Numbers into Knowledge).
For climate, the experts can easily be identified:
“All creditable parties to this debate recognize a group of experts designated as ‘climate scientists,’ whom they cite in either support or opposition to their claims about global warming. In contrast to enterprises such as astrology or homeopathy, there is no serious objection to the very project of climate science. The only questions are about the conclusions this project supports about global warming.”
His concluding paragraph sums up the implications for his line of argument:
“…once we have accepted the authority of a particular scientific discipline, we cannot consistently reject its conclusions. To adapt Schopenhauer’s famous remark about causality, science is not a taxi-cab that we can get in and out of whenever we like. Once we board the train of climate science, there is no alternative to taking it wherever it may go.”
Another way to make this argument when confronted by a climate denier is to ask “do you feel qualified to dispute the latest developments in quantum physics or thermodynamics? If not, what makes you think you are qualified to debate the latest climate science?” For someone who actually is a qualified scientist in some field, you can ask how likely it is that a person from outside that field, even one qualified in a different scientific discipline, could accurately critique it. The answer in all but the rarest of cases is “not bloody likely”.
Another angle on this line of argument is to use the peer reviewed literature compiled on Skeptical Science. Virtually every argument that the deniers make is analyzed and debunked there, so when someone pesters me with their denialism I say “there’s an app for that!” and pull out Skeptical Science. When I can quickly pull up the critique of their claim and explain its implications it often has an impact.
The New York Times reports that the World Bank has released huge amounts of its previously proprietary data to the general public. The Bank talks about it in a news release here and on its data page here.
The key data are in the form of indicators of development and economic activity. For a list of the most popular indicators, go here. You’ll find data on GDP, finance, business activities, population, infrastructure, environmental insults, energy use, and much more.
For people I call “ecological entrepreneurs” (i.e. those who want to make the environment better while making a profit in the bargain) the World Bank database is truly a treasure trove. If you are considering a venture of this type (particularly one that involves commerce in countries outside the US), I heartily recommend digging into these data.
Of course, as with all data, you’ll need to check them for both internal consistency and accuracy, but having more data is always better, and having good data is better still.
The New York Times just had an article about how insiders in the natural gas industry are concerned that reserve and production estimates for recent shale gas discoveries have been significantly overstated. The article states:
“Money is pouring in” from investors even though shale gas is “inherently unprofitable,” an analyst from PNC Wealth Management, an investment company, wrote to a contractor in a February e-mail. “Reminds you of dot-coms.”
That development in itself is interesting, because the speculation in the industry over the past couple of years has been that shale gas made available through fracking would enable a new era of cheap and abundant natural gas for the US. Some of that speculation is turning out to be hype, although there’s likely some truth in it as well.
These developments point to a deeper issue, however: that of corporate governance and the government’s role in preserving the integrity of the market system. Two days ago, the New York Times reported that the SEC changed the reporting rules a few years ago so that natural gas companies drilling for shale gas could be more aggressive in how they booked reserves. As the article states:
“In 2008, the stocks of many natural gas companies were sinking because of the financial meltdown, recession fears and falling gas prices.
But they began to rebound after a sweeping rule change by the Securities and Exchange Commission, intended to modernize how energy companies report their gas reserves.
As part of that change, the commission acquiesced to industry pressure by giving these companies greater latitude in how they estimated reserves in areas that were not yet drilled. The new rules, which were several years in the making, were officially adopted only weeks before the S.E.C. chairman under President George W. Bush, Christopher Cox, stepped down.”
The importance of this rule change cannot be overstated. Estimating the size of projected reserves is hard even with well understood technology. Doing it with a new technology like fracking is much more difficult. Erring on the high side (as the SEC appears to have done in this case) can create bubbles and hurt investors. Erring on the low side would make it harder for the companies to get capital, hurting investors in those companies and perhaps restricting the rate at which promising reserves could be tapped.
I think there’s a good argument for being conservative in such situations (i.e., erring on the low side): eventually reality will intervene, and if reserves are artificially overstated, then the correction, when it comes, will be sharp and painful. On the other hand, if reserves are understated, eventually it will become clear that the proven reserves are higher than initially expected, and that adjustment can be made with fuller knowledge. The investors in those reserves would be duly rewarded for their foresight.
Finally, this example shows the importance of having a functioning government regulatory system. Markets are human constructs, and they can be made and operated well or poorly (for examples, see the terrific book: McMillan, John. 2003. Reinventing the Bazaar: A Natural History of Markets. New York, NY: W.W. Norton & Company.). If regulators create rules that favor certain industries but aren’t grounded in reality, disaster will eventually ensue, and the disaster doesn’t just hurt the company making the decisions; it also hurts investors who are misled by incorrect claims of high reserves. That’s why the public interest demands regulation of markets so that the investing public has full and accurate knowledge about the claims companies make (the same is true for consumer and food safety, as well as environmental issues).
The rules about such reporting play a critical role in ensuring accurate information, and they should be as reality based as we can make them. Too often that has not been the case. It’s true for the specific example of shale gas, but it’s also clearly true for financial markets, as Adam Smith pointed out more than 200 years ago. Having weak, ineffective, and feckless regulation hurts the market, and sensible folks ought to fight for reality-based regulation. Markets can’t survive long without it.
An article in today’s New York Times discusses the “safety myth” promoted by Japan’s nuclear industry and the effect of that myth on the country’s ability to respond to the Fukushima accidents when they occurred. It’s a fascinating cautionary tale of what can happen when fallible humans interact with unforgiving technology. And it shows how people and institutions create myths about technology that can lead to unintended consequences. Finally, it illustrates that technology and culture interact in complex and unpredictable ways (a lesson that is no surprise to those who study the history of technology, but one not as widely accepted in the hard-core technical community). All in all, a great read for students of these issues.
It’s often stated that the Three Mile Island (TMI) accident in 1979 was the main cause of the demise of nuclear power in the United States. This line of argument posits that the safety regulations promulgated after TMI (along with associated legal wrangling) increased reactor costs and construction times so much that the industry could no longer compete. This argument has arisen anew in the aftermath of the nuclear meltdowns at Fukushima, in which people have been quick to draw parallels to TMI. In particular, some argue that the industry would have expanded in the 1980s were it not for the regulatory response after TMI, and caution against such a response again.
For example, a 2006 New York Times Magazine article stated “The received wisdom about the United States nuclear industry is that it began a long and inexorable decline immediately after the near meltdown, in 1979, at Three Mile Island in central Pennsylvania, an accident that — in one of those rare alignments of Hollywood fantasy and real-world events — was preceded by the release of the film “The China Syndrome” two weeks earlier.” The article then goes on to explain that the real story is more complicated than that, but the “received wisdom” still seems to be widely accepted.
A more recent example is an op-ed by Mark Lynas, which argued:
“In the 1970s it looked as if nuclear power was going to play a much bigger role than eventually turned out to be the case. What happened was Three Mile Island, and the birth of an anti-nuclear movement that stopped dozens of half-built or proposed reactors…”
The op-ed contained some errors, laid out in a post by Joe Romm on Climate Progress (the errors about the area taken up by wind turbines and the area of Japan were fixed with a corrective note that is included in the current version of the op-ed). The important point for our purposes is that Lynas and others who make this argument cast TMI and the anti-nuclear movement (which, it should be noted, started well before TMI) as the main factors that stopped nuclear power plants either proposed or under construction. But the story is not nearly that simple.
What does this argument really mean?
Let’s begin by examining the argument itself so we are clear on the claim we’re evaluating. The complaint is that regulations after TMI unfairly burdened the industry, leading to increasing costs, longer construction times, and comparative economics that made it impossible for nuclear plants to compete with other sources of electricity. The emphasis on the unfairness of these regulations conveys the idea that somehow the regulators (prompted, it is implied, by the anti-nuclear movement) went too far after TMI, and that caused most of the industry’s problems.
In evaluating this claim it is worth keeping several factors in mind:
1) We will probably never know for sure exactly how much blame to assign TMI vs. other factors, because such historical events are usually so complex as to defy precise attribution after the fact.
2) It may be true that safety regulations following TMI did cause costs to increase and completion times to lengthen for plants under construction at the time. It may also be true that the regulators went overboard in some cases. What is certain, however, is that some increased regulatory stringency was justified after TMI, which was arguably the worst reactor accident the US had ever experienced. These regulations could therefore be seen as a justifiable expense, to the extent that the creation and enforcement of those regulations was not an overreaction to the accident.
3) TMI may also have made it impossible for the industry to ignore earlier regulations, or enforcement of existing regulations may have become more stringent after TMI (this hypothesis came to us from Mark Cooper at Vermont Law School). It’s hard to argue against improved enforcement of existing regulations, but if enforcement were overzealous or capricious, it could have had negative effects on the industry that were not justified by real safety risks. On the other hand, stricter enforcement may also fall into the category of real costs inherent to making the technology safe that had not been fully incorporated before TMI.
4) It is important to distinguish the effect of TMI (and the associated regulatory response) on plants that were already under construction in 1979 from its effect on potential future reactor orders. It is plausible that the regulations promulgated after TMI had some effect on reactors “in the pipeline”, but it is far less clear that, absent TMI, the industry could have overcome the other big challenges we discuss below and won new reactor orders after 1979.
5) Other countries (like France, Germany, and the UK) faced cost escalation and plant delays without having reactor accidents comparable to TMI, so clearly other things were going on with the economics of reactors than just the response to TMI. See reference 10, Arnulf Grubler’s Energy Policy article on France, and Joe Romm’s blog post on French reactor costs for more details.
What do the data show?
To shed some light on this question, we revisited our data (see references 4, 5, 6, and 8 below) and combined them with other data sources to see what they could tell us about the effect of TMI on the US nuclear industry. In particular, we examined the history of US reactors, focusing on the year of project initiation (measured here as the approval of a construction permit), the year of project completion or cancellation, and the year of reactor shutdown (where applicable). It is clear from our review that TMI caused changes in safety regulations that increased costs (to some degree) and delayed construction of some plants that were already under construction (to some degree); however, it’s wise not to read too much into the importance of TMI. Even before the accident, nuclear plant costs were rising rapidly and completions were slowing (see references 1, 7, and 8), and while TMI might have accelerated these trends somewhat, they were already well underway by 1979. New reactor orders had slowed to a tiny trickle before TMI, and the last US reactor order came in 1978 according to the Nuclear Energy Institute. All reactors ordered after 1973 were subsequently cancelled.
One historical example in support of these conclusions is reference 1 (Bupp and Derian 1981). This book, which is widely regarded as one of the most authoritative sources on the history of US nuclear power, was originally published in 1978 with the title: Light Water: How the Nuclear Dream Dissolved, indicating that many of the problems of the industry preceded TMI (we’re indebted to Peter Bradford for this insight).
The difference between the date of project initiation and project completion gives the construction duration for the reactor, which is one of the most important parameters affecting reactor costs. We used our database from reference 8 combined with two sources for cancellations, NRC and Clone Master. We cross checked the cancellation databases against each other to make sure they agreed, and when they disagreed we used the NRC data (the Clone Master data was in a more convenient form).
Figure 1 shows the annual history of reactor construction starts, completions, cancellations, and shut downs. No reactors were granted construction permits after 1979. It also shows that there were many cancellations of reactors before 1979. Finally, the bimodal nature of reactor completions suggests that construction slowed down around 1980, with completions picking up again in the mid 1980s.
Figure 1: US nuclear construction starts, completions, cancellations and shut downs by year (Copyright Jonathan Koomey 2011)
Figure 2 shows the annual data plotted cumulatively, which shows that more than half of all reactors ordered were subsequently cancelled. About 40% of all cancelled reactors were cancelled before 1979, and these can’t be attributed to TMI. That means that cost escalation and other forces afflicting the nuclear industry were well underway before TMI.
Figure 2: Cumulative US nuclear construction starts, completions, cancellations and shut downs (Copyright Jonathan Koomey 2011)
Now consider Figure 3, which shows construction duration as a function of construction start date (reactors in red were those early reactors for which we didn’t have cost data but for which we had construction duration data). The graph shows that the minimum construction duration for plants begun between 1966 and 1972 bounced around a bit but remained around 5 years, but it began to rise substantially for plants begun in 1973. It peaked in 1974, then declined again. These data are suggestive, but not conclusive. It is possible that plants begun in 1973 and 1974 were just far enough along in their construction process in 1979-80 for the changes in regulations from TMI (as well as the other factors discussed below) to have a significant effect on their construction duration. Plants started later were not as far along in construction in 1980 and were presumably easier to modify to reflect regulatory and other changes.
Figure 3: US reactor construction duration as a function of date of construction start (Copyright Jonathan Koomey 2011)
Chapter 6 of Charles Komanoff’s classic book Power Plant Cost Escalation (reference 7, which can be downloaded here) describes in great detail the character and scope of the changes in safety regulations that were likely to result from TMI. This chapter was written (according to Komanoff) in 1979-80, and his predictions were largely borne out. The changes were indeed substantial and there’s no doubt they affected the way reactors were constructed in the US. What is not clear is exactly how important these regulatory changes were compared to the other factors affecting the electric power industry at the same time. Komanoff gave his own list of 10 important factors here. Our own assessment of the most important factors combines some of Komanoff’s categories and includes three additional factors–declining electricity demand growth, structural problems in the industry, and the rise of non-utility generation:
1) Declining demand growth for electricity, culminating in overcapacity in the early 1980s.
2) High interest rates.
3) Structural problems in the industry.
4) Changing public perceptions of the credibility of the nuclear industry.
5) The rise of the independent power industry in the US combined with restructuring and integrated resource planning.
Let's examine each of these in turn.
1) Electricity demand growth: We plotted the annual rate of change in electricity consumption from 1949 to 2009 using EIA Annual Energy Review data (see Figure 4). We also plotted a lagging 5-year average starting in 1954 (for the few years before that we averaged the years we had: the 1st year alone, then years 1 and 2, then years 1, 2, and 3, then years 1, 2, 3, and 4). While there are more sophisticated ways to analyze trends, this simple approach is easy to explain and also illustrative of the kind of analysis typical utility analysts would have conducted in the 1970s and 80s.
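For concreteness, here is a minimal sketch of that averaging procedure; the growth-rate values in the example are placeholders, not the actual EIA series.

```python
# Lagging five-year average of annual growth rates, computed the way described
# above: for the first few years, average however many years are available.
# `growth_rates` below is a placeholder list, not the actual EIA series.

def lagging_average(series, window=5):
    """Average of the current value and up to window-1 preceding values."""
    return [sum(series[max(0, i - window + 1): i + 1]) /
            len(series[max(0, i - window + 1): i + 1])
            for i in range(len(series))]

growth_rates = [0.071, 0.068, 0.073, 0.065, 0.070, 0.030, 0.025]  # illustrative
print([round(x, 3) for x in lagging_average(growth_rates)])
```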
Figure 4: US electricity consumption growth over time (Copyright Jonathan Koomey 2011)
From around 1960 until the early 1970s electricity demand was growing steadily and rapidly at about 7% per year. The 1973 and 1979 oil shocks reduced demand growth to around 3% per year from the late 1970s until the early 1990s, and the volatility of demand from year to year increased a great deal. From the early 1990s through 2008 electricity demand growth declined from around 2.5%/year to around 1% per year, and then absolute demand dropped significantly in 2009 as a result of the global economic crisis.
An environment of declining electricity demand growth accompanied by increased volatility is a difficult one for long lead-time capital-intensive projects like nuclear plants, as NRC Commissioner Victor Gilinsky argued in January 1980 (reference 2). The construction of such plants is predicated on sufficient and reliable demand growth, and when growth slows, the economic justification for such plants deteriorates rapidly. We (and Commissioner Gilinsky) believe this factor was one of the most important ones that led to longer lead times and higher construction costs. Further analysis is needed to see just how important it was, and a state by state regression analysis that correlated construction lead times with state level electricity growth data would probably yield insights.
The end result of declining demand growth and the utility construction boom of the 1970s was overcapacity in the early 1980s (many utilities argued that high demand growth would return, so they kept building plants). The EEI Statistical Yearbook of the Electric Utility Industry gives data on capacity margins, defined as the difference between installed capacity and peak demand, divided by installed capacity. Reserve margin is the more common term and is defined similarly: the difference between installed capacity and peak demand, divided by peak demand.
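Because Figure 5 below reports reserve margins derived from the EEI capacity margin data, it’s worth writing out the conversion, which follows directly from the two definitions; the 21% input in the example is illustrative.

```python
# Convert EEI capacity margin to reserve margin using the definitions above:
#   capacity margin CM = (capacity - peak) / capacity
#   reserve margin  RM = (capacity - peak) / peak
# Algebraically, RM = CM / (1 - CM).

def reserve_margin(capacity_margin):
    return capacity_margin / (1 - capacity_margin)

# Example (illustrative input): a 21% capacity margin corresponds to roughly
# the 27% reserve margin described for 1974 below.
print(f"{reserve_margin(0.21):.0%}")
```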
Figure 5 shows reserve margins calculated from the EEI capacity margin data for 1963 through 1984 (we’re working on getting the EEI data for later years). Utilities typically target reserve margins of 15-20% to preserve reliability, and they hit that target from 1966 through 1973. Things changed in 1974, when the reserve margin jumped to 27%, then to 35% in 1975, and it stayed between 30% and 41% through 1984. These data indicate overcapacity on a large scale. Truly understanding the importance of this factor would require state-by-state or regional analysis, but the overcapacity in the early 1980s was large enough at the national level to suggest that it probably affected utility decisions about the pace of construction on nuclear plants already underway.
Figure 5: US electric utility reserve margins 1963-1984 (Copyright Jonathan Koomey 2011)
For a fascinating contemporaneous account of the important effects of declining demand growth on the decision making of utilities in Illinois circa 1980, go here. That account, by the journalist William Lambrecht, stated:
“The Three Mile Island nuclear accident near Harrisburg, Pa., shook public perception of nuclear power, while resulting in new safeguards required by the Nuclear Regulatory Commission. But it apparently contributed less to the industry slowdown than is commonly believed, and substantially less than the simple effects of supply and demand. In the late 1960’s and early 1970’s, when many utilities drew expansion plans of grandiose proportions, the average estimated load growth rate around the country was 7 percent annually.”
“However, the 1973 oil embargo by Arab nations set in motion a chain of events that would markedly alter patterns of energy consumption. In the words of Scott Peters of the Atomic Industrial Forum, ‘Utilities suddenly found that they were overcommitted, and their load growth projections had to be tossed in the wastebasket.’ ”
Soon after TMI it was clear to some observers (like Gilinsky and Lambrecht) that overcapacity caused in part by lower demand growth led some utilities to slow down construction of reactors in the pipeline. It also had a chilling effect on the market for new reactors.
2) Interest rates peaked at around 20% in the 1979 to 1981 period, according to the site TradingEconomics (see the site Economagic for a time series of the US prime rate, which also peaks at 20% in 1980-81). While it’s not clear exactly which interest rates TradingEconomics used as the proxy for US rates, the graph from that site (Figure 6) is striking, and the message unmistakable: at the same time as TMI occurred, interest rates were at unprecedented levels. That had to weigh heavily on the financing and cash flows of an industry constructing capital-intensive power plants, and it had absolutely nothing to do with TMI.
Figure 6: Nominal US interest rates over time
3) Structural problems in the industry: in the late 1960s and early 1970s, the industry rushed to build plants before sufficient experience had been gained with the earlier generation of reactors, with the assumption that economies of unit scale would continue to accrue without corresponding diseconomies (references 1 and 3 show that this assumption turned out not to be correct). At the same time, many reactors started construction with as little as 10% of their design completed, which made them even more subject to cost escalation than they would otherwise have been (see reference 8 for some discussion on this point).
4) Changing perceptions of the nuclear industry: TMI was a pivotal event that made the public think twice about supporting nuclear power, in large part because the nuclear industry had assured everyone that such an accident couldn’t happen here. Other events had a similar effect (see Komanoff's “10 blows” article for more details): a fire at Browns Ferry in 1975, the resignation of three GE nuclear engineers who joined antinuclear organizations, and the reversal of pipes at the Diablo Canyon reactors in 1981 that “virtually [disabled] their seismic protection systems”. These events, combined with TMI, undermined the credibility of the nuclear industry in a way that had lasting effects (and these effects were distinct from the effect of the new safety regulations imposed by the NRC after TMI).
5) The rise of the US independent power industry and the increasing prevalence of restructuring/deregulation: In one of the great success stories of new technology development in the past few decades, the independent power industry arose from humble origins to dominate the supply of new electric generation capacity in the US. As Peter Bradford argued in a Wall Street Journal Book Review, “Nuclear-plant construction in this country came to a halt because a law passed in 1978 [PURPA] created competitive markets for power. These markets required investors rather than utility customers to assume the risk of cost overruns, plant cancellations and poor operation. Today, private investors still shun the risks of building new reactors in all nations that employ power markets.”
This shift of expectations about who was responsible for funding the development of new supply resources coincided with the rise of what’s called “Integrated Resource Planning” (IRP) or “Least Cost Planning” (LCP), which encouraged utilities to do a least-cost integration of all resources, including energy efficiency. The conceptual foundations of this approach were fully fleshed out by the late 1980s (see reference 9). The net result of this new approach was to bring previously ignored utility resources (like independent power generation and energy efficiency) into the mainstream, and made it possible for utilities to avoid building new capacity using standard financing methods.
As one example of the effect of those changes, see Figure 7 below. This figure shows utility and non-utility generation over time from US EIA data. Non-utility generation dropped from 108 BkWh/year in 1970 to 71 BkWh/year in 1979. It began to rise after 1979 and by 1995 it had reached almost 400 BkWh/year, which is the equivalent of 133 typical 500 MW coal plants. This graph is a testament to the effects of the changes in utility resource acquisition practices that came into use after the passage of the Public Utility Regulatory Policies Act of 1978 (PURPA).
Figure 7: US utility and non-utility net generation 1970 to 1996 (billion kWh/year, Copyright Jonathan Koomey, 2011)
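As a quick check on the coal-plant equivalence above, here’s the arithmetic; the roughly 70% capacity factor is my assumption about typical baseload operation, not a figure from the EIA data.

```python
# Check the "400 BkWh/year is about 133 typical 500 MW coal plants" equivalence.
# The 70% capacity factor is an assumption about typical baseload operation.

CAPACITY_MW = 500
CAPACITY_FACTOR = 0.70          # assumed
HOURS_PER_YEAR = 8760

bkwh_per_plant = CAPACITY_MW * 1000 * CAPACITY_FACTOR * HOURS_PER_YEAR / 1e9
plants = 400 / bkwh_per_plant   # ~130, close to the ~133 cited above
print(f"One plant: {bkwh_per_plant:.2f} BkWh/year; 400 BkWh/year is about {plants:.0f} plants")
```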
The nuclear industry faced big challenges even before TMI: 40% of all cancelled reactors were abandoned before 1979. The interlinked issues of declining demand growth, high interest rates, structural problems in the nuclear industry, changing public perceptions, and the rise of alternative means of acquiring utility resources all had powerful effects on the viability of the nuclear enterprise. It is therefore not correct to conclude that the Three Mile Island accident was the sole or even the most important factor in the difficulties the US industry has faced. The accident clearly had some effect on reactors then under construction, but we are convinced that the other factors listed above were, in the aggregate, more important than TMI in their effect on the likelihood of new nuclear orders in the post-TMI period.
–Jonathan Koomey and Nate Hultman
Acknowledgements
This post benefited greatly from discussions with Peter Bradford, Ralph Cavanagh, Mark Cooper, Victor Gilinsky, Jim Harding, Charles Komanoff, Amory Lovins, and Joe Romm. We’d also like to thank Gregory Carlock, who helped us with data collection and analysis.
References
1. Bupp, Irvin C., and Jean-Claude Derian. 1981. The Failed Promise of Nuclear Power: The Story of Light Water. New York, NY: Basic Books, Inc.
5. Hultman, Nathan E., and Jonathan G. Koomey. 2007. “The risk of surprise in energy technology costs." Environmental Research Letters. vol. 2, no. 034002. July. <http://www.iop.org/EJ/abstract/1748-9326/2/3/034002/>
6. Hultman, Nathan E., Jonathan G. Koomey, and Daniel M. Kammen. 2007. "What history can teach us about the future costs of U.S. nuclear power." Environmental Science & Technology. vol. 41, no. 7. April 1. pp. 2088-2093.
8. Koomey, Jonathan G., and Nathan E. Hultman. 2007. "A reactor-level analysis of busbar costs for U.S. nuclear plants, 1970-2005." Energy Policy. vol. 35, no. 11. November. pp. 5630-5642. <http://dx.doi.org/10.1016/j.enpol.2007.06.005> (Subscription required).
9. Krause, Florentin, and Joseph Eto. 1988. Least-Cost Utility Planning: A Handbook for Public Utility Commissioners (v.2): The Demand Side: Conceptual and Methodological Issues. National Association of Regulatory Utility Commissioners, Washington, DC. December.
10. Krause, Florentin, Jonathan Koomey, David Olivier, Pierre Radanne, and Mycle Schneider. 1994. Nuclear Power: The Cost and Potential of Low-Carbon Resource Options in Western Europe. El Cerrito, CA: International Project for Sustainable Energy Paths. <http://www.mediafire.com/file/kjwo9gjwtj5p11t/nuclearpowerbook.pdf>
Gore’s article in Rolling Stone lays out what’s going on in the climate debate with exceptional clarity.
See also Joe Romm’s excellent commentary about how the media just ignore Gore’s criticisms of the media’s behavior, focusing on the Gore vs. Obama angle.
Gore’s article frames the current situation really well. It’s up to all of us to use this framing to fight back against the bad actors Naomi Oreskes has dubbed the “Merchants of Doubt”, who use the same dishonest arguments and techniques to fight climate action that they used in earlier battles over seat belts, air bags, DDT, cigarettes, efficiency standards for autos, lead in gasoline, asbestos–the list goes on and on. How many times are people going to keep falling for this act?
The New York Times today reports that airlines are exploring ways to reduce the amount of paper each plane carries as a way to cut weight and fuel costs. The article implies that each pound of additional weight costs $17,600 (= $440,000/25 lbs) in added fuel over the course of a year, which I’m convinced is an overestimate. The statistic I recall from my work on Winning the Oil Endgame is that each pound of additional weight costs 100 to 150 gallons of jet fuel per year for each aircraft, which is still enough to justify switching to electronic documents, but far less than what the article implies (even at $4/gallon of fuel!).
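Here’s the comparison spelled out; the $440,000 and 25-pound figures are the article’s, the 100-150 gallon range is the rule of thumb I recall, and the $4/gallon fuel price is the one mentioned above.

```python
# Compare the article's implied cost per pound of added weight with the
# 100-150 gallons/pound/year rule of thumb from Winning the Oil Endgame.

article_cost_per_lb = 440_000 / 25          # implied by the article: $/lb/year
gallons_per_lb = (100, 150)                 # rule of thumb, per aircraft per year
fuel_price = 4.0                            # $/gallon, as mentioned in the post

low, high = (g * fuel_price for g in gallons_per_lb)
print(f"Article implies: ${article_cost_per_lb:,.0f} per pound per year")
print(f"Rule of thumb:   ${low:,.0f} to ${high:,.0f} per pound per year")
```

Even at the high end of the rule of thumb, the article’s implied figure is roughly 30 times larger.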
This is another nice example of how information technology allows us to substitute bits for atoms, thus allowing for substantial environmental improvements while also reducing the costs of goods and services. For more discussion of this and related issues, go here.
Rayner uses our work as backdrop to tell the story of five key technologies that will have a huge impact on environmental issues in coming years. Here’s his list:
I had heard of motion sensors controlling display cases before, but the Target in Emeryville, CA (which we visited this morning) has a particularly nice implementation of that technology. The display cases are lit with LEDs (see below).
And here’s a closer shot of the motion detector:
There were motion detectors every 8-10 feet along each row, and if you started walking down the aisle the lights inside the cases came on one by one. I suspect that motion sensors could be used with incandescent lights, but the lifetimes of such bulbs are short enough to make an application like this less practical than with LEDs, which typically last thirty times longer than incandescents. LED lifetime is also not affected by turning them on and off (see this Wikipedia article for more details).
It took a little more than a minute for the lamps to switch off when no motion was detected, at least for the aisle I visited. I’m not sure if delay times vary by aisle or food type.
James Kanter has a very nice article on NYTimes.com today on data centers and the search for ever better ways to cool those facilities. Typical “in-house” data centers nowadays devote one kWh of electricity to cooling, pumps, fans, and power systems for every kWh of electricity used for servers (for details, see the EPA Report to Congress on Data Centers). That additional kWh is “overhead”, and to the extent that it can be reduced, data centers can save big on both energy costs and capital costs to construct these facilities. The service we really care about is computing, and so eliminating the waste associated with cooling the computers is all gravy.
Cloud computing providers have made big progress on reducing this waste, reaching overhead levels of 0.1 to 0.2 kWh per kWh of IT electricity use. For example, Google documents the PUE for its data centers here, reporting an overhead of 0.16 kWh per kWh of IT use averaged across nine facilities. Centralized computing providers can achieve such savings because they have at least four advantages over “in-house” facilities, as I’ve explained frequently (for one example, go here):
1) Diversity
2) Economies of scale
3) Flexibility
4) Ease of enabling structural change
These advantages mean that there will be ever more compelling economic reasons to use centralized computing facilities instead of in-house IT with each passing year. There are still good reasons for certain kinds of computing to be done in-house, but the powerful advantages of centralized computing will continue to create economic pressure to move many applications to the cloud.
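For readers who think in terms of PUE rather than overhead ratios, the two are simple transforms of each other; the overhead values below are the ones cited above (roughly 1 kWh per kWh of IT for typical in-house facilities, and Google’s reported 0.16).

```python
# PUE (Power Usage Effectiveness) is total facility energy divided by IT energy,
# so PUE = 1 + overhead ratio. Overhead values are the ones cited in the post.

def pue(overhead_kwh_per_it_kwh):
    return 1.0 + overhead_kwh_per_it_kwh

for label, overhead in [("Typical in-house", 1.0), ("Google average", 0.16)]:
    print(f"{label}: overhead {overhead:.2f} kWh/kWh -> PUE {pue(overhead):.2f}")
```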
“The Obama administration is leading a global effort to deploy “shadow” Internet and mobile phone systems that dissidents can use to undermine repressive governments that seek to silence them by censoring or shutting down telecommunications networks.
The effort includes secretive projects to create independent cellphone networks inside foreign countries, as well as one operation out of a spy novel in a fifth-floor shop on L Street in Washington, where a group of young entrepreneurs who look as if they could be in a garage band are fitting deceptively innocent-looking hardware into a prototype “Internet in a suitcase.””
This development is of course good news for people who believe in the free exchange of information, but it’s also a reminder of the power of the long-term trends in computing efficiency I’ve blogged on previously (here and here).
The research (which will be published this year in the IEEE Annals of the History of Computing) shows that computations per kWh (i.e., the energy efficiency of computation) has doubled roughly every year and a half for all types of computers since ENIAC in 1946.
The biggest implications of these trends are for mobile computing devices, whose existence was an inevitable outcome of them. Doubling of computing efficiency every year and a half means a factor of roughly 100 improvement every decade. At a fixed level of computation, that means the battery capacity needed falls by about a factor of 100 per decade. Alternatively, you could have 20 times more computation at 5 times the battery life. That progress implies ever greater use of mobile computing, sensors, and controls, which will allow us to better match services supplied with services demanded. It will also lead to an explosion of data the likes of which we haven’t yet experienced (we ain’t seen nothing yet!).
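The arithmetic behind the factor-of-100-per-decade claim, and the 20x-computation-at-5x-battery-life split, takes only a few lines:

```python
# Efficiency doubling every 1.5 years compounds to roughly a factor of 100
# per decade: 2 ** (10 / 1.5) is about 102. That factor can be split between
# doing more computation and extending battery life, e.g. 20x computation at
# 5x battery life (20 * 5 = 100).

doubling_period_years = 1.5
decade_factor = 2 ** (10 / doubling_period_years)
print(f"Efficiency gain per decade: ~{decade_factor:.0f}x")

more_computation, longer_battery = 20, 5
print(f"Example split: {more_computation}x computation at {longer_battery}x "
      f"battery life uses a combined factor of {more_computation * longer_battery}")
```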
And it’s not right to call it “Moore’s law” (as some still do) because these trends PRECEDE Moore’s law (they hold true for tube and discrete transistor computers, not just microprocessors). In fact, they are an inherent feature of electronic information technology, as our paper shows.
The industry hasn’t fully absorbed the implications of these trends, but I think they are critically important to understanding its future. Internet in a suitcase is a nice result, but there are many other such innovations that will no doubt surprise us all in coming years.
On June 3, 2011, Climate Progress reported on the development of a new type of corporation, the B-corp, “which modifies governance so that managers respond to long-term interests of investors, stakeholders, and the environment, rather than just focusing on short-term profits.” This is an important development because it changes the “rules of the game”, as B Lab co-founder Jay Coen Gilbert explained in a Harvard Business School case study of his company.
Many people forget that the economy can’t exist without a government to define property rights, enforce contracts, charter corporations, regulate socially destructive corporate behavior, and determine the rules of the game. When government doesn’t take these responsibilities seriously enough, we end up with lead in children’s toys, E. coli in food, global financial crises, and rivers that catch on fire, and no one should find those to be particularly good outcomes.
I’m excited about this idea because it does what some of us have advocated for a long time: change the charter of corporations to redefine property rights, and add some responsibilities while we’re at it. Corporations are socially constructed entities and we get to choose how their governance and underlying incentives should be structured. By changing the assumptions behind their design it’s possible to set a new course for the economy, one where private profit can be more closely aligned with public good.
Of course, changing the charter for existing corporations would be difficult, but there may be other ways to accomplish the same goal for those institutions. In any case, creating a new type of corporation that deals explicitly with issues of standards, transparency, legal status, and incentives is a good first step, and I applaud the B Lab folks for starting down this path.
And this development is also a reminder that the innovations we’ll need to truly face the climate challenge are not just technical, though we’ll need plenty of those to get to the greenhouse gas emissions reductions that will be required to stabilize the climate. Social and institutional innovations will play an important role as well, given the rate, scale, and scope of changes that are required.