Our summary of the driving forces behind the 2017 IEA “Beyond 2 degrees” (B2DS) scenario

There’s been discussion recently about the upcoming release of the 2019 International Energy Agency (IEA) scenarios related to the World Energy Outlook (WEO). Over the past few months we’ve worked to disentangle the drivers for a key 2017 scenario, what the IEA calls its “Beyond 2 degrees” (B2DS) scenario, and I wanted to post these results so we’ll have something to compare against when the 2019 scenarios are released. We express the results in what we call a “dashboard of key drivers” (see below).

The 2017 scenario is based on IEA’s Energy Technology Perspectives (ETP) model, which is different from the World Energy Outlook model. While the WEO comes out every year, the ETP analyses come out at irregular intervals. You can read about both sets of analyses here.

For more background, see the post on IEA historical drivers of energy sector carbon dioxide emissions here. The reference for our 2019 article [1], upon which the decomposition analysis is based, is at the end of this post. Email me for a PDF copy if you can’t get access otherwise. We also have an Excel workbook all set up to do these decompositions and graphs, so let me know if you’d like a copy.

Let’s first look at an equation known as the Kaya Identity, which describes fossil carbon emissions as the product of four terms: Population, GDP/person (wealth), Primary Energy/GDP, and Carbon dioxide emissions/primary energy.

CO2 emissions = Population × (GDP/Population) × (Primary Energy/GDP) × (CO2 emissions/Primary Energy)

Over time, analysts have realized that this four-factor identity collapses some important information. That’s why, in our 2019 article, we moved to the expanded Kaya identity, with several more terms:

C_FossilFuels = P × (GWP/P) × (FE/GWP) × (PE/FE) × (PEFF/PE) × (TFC/PEFF) × (NFC/TFC)

The components of this identity are as follows:

• C_FossilFuels represents carbon dioxide (CO2) emissions from fossil fuels combusted in the energy sector,

• P is population,

• GWP is gross world product (measured consistently using Purchasing Power Parity here),

• FE is final energy,

• PE is total primary energy, calculated using the direct equivalent (DEq) method (electricity from non-combustion resources is measured in primary energy terms as the heat value of the electricity, to a first approximation),

• PEFF is primary energy associated with fossil fuels,

• TFC is total fossil CO2 emitted by the primary energy resource mix, and

• NFC is net fossil CO2 emitted to the atmosphere after accounting for fossil sequestration.

For historical data, there is no sequestration of carbon dioxide emissions, so the last term is dropped in the previous blog post, but included for future scenarios.

Note that this identity applies only to carbon dioxide emissions from the energy sector. We use an additional additive dashboard for future scenarios to describe industrial process emissions, land use changes, and effects of other greenhouse gases, but that one isn’t quite ready for prime time, so I’m focusing just on the energy sector here.
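To make the identity concrete, here is a minimal sketch in Python of how the ratios are formed and how their product reproduces net fossil carbon. The numbers are made-up, round illustrative values, not figures from the IEA scenario or our workbook:

```python
# Minimal sketch of the expanded Kaya identity with illustrative, made-up numbers
# (NOT values from the IEA B2DS scenario). Units are nominal.

P = 8.0e9        # population (people)
GWP = 150e12     # gross world product (PPP dollars)
FE = 400e18      # final energy (joules)
PE = 550e18      # total primary energy, direct equivalent (joules)
PE_FF = 420e18   # primary energy from fossil fuels (joules)
TFC = 33e9       # total fossil CO2 emitted (tonnes)
NFC = 31e9       # net fossil CO2 after fossil sequestration (tonnes)

# Ratios (the terms of the expanded Kaya identity)
wealth           = GWP / P       # GWP per person
fe_intensity     = FE / GWP      # final energy per unit of GWP
loss_factor      = PE / FE       # energy supply loss factor
fossil_fraction  = PE_FF / PE    # fossil share of primary energy
carbon_intensity = TFC / PE_FF   # fossil CO2 per unit of fossil primary energy
seq_fraction     = NFC / TFC     # net-to-total fossil CO2 (sequestration term)

# The identity: the product of the factors reproduces net fossil carbon
nfc_from_identity = (P * wealth * fe_intensity * loss_factor *
                     fossil_fraction * carbon_intensity * seq_fraction)
assert abs(nfc_from_identity - NFC) < 1e-6 * NFC
print(f"Net fossil CO2 from identity: {nfc_from_identity:.3e} tonnes")
```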

Discussion of Figure 1 (Factors)

The first graph is what we call our graph of key factors, from the indented list above. In the first row we show each term in its raw form for both the reference case (in black) and the intervention case (in red). The second row shows indices with 2025 = 1.0. And the last row shows the annual rate of change in each term for the reference and intervention cases. In each case, we plot historical trends from IIASA’s PFU database for each factor from 1900 to 2014 (in green dashed lines) and 1995 to 2014 (in blue dashed lines).
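For readers who want to reproduce the second and third rows of the dashboard from any time series, here is a rough sketch of the indexing and annual-growth-rate calculations. The sample years and values are illustrative placeholders, not data from the scenario or from our workbook:

```python
# Sketch: convert a time series into an index (base year = 1.0) and
# annual rates of change, as in rows two and three of the dashboard.
# Sample data are illustrative, not taken from the IEA scenario.

years = [2025, 2030, 2035, 2040]
values = [100.0, 104.0, 107.0, 109.0]   # any factor, in its raw units

base = values[years.index(2025)]
index = [v / base for v in values]       # 2025 = 1.0

# Compound annual growth rate between successive points
growth = [
    (values[i] / values[i - 1]) ** (1.0 / (years[i] - years[i - 1])) - 1.0
    for i in range(1, len(values))
]

for y, idx in zip(years, index):
    print(f"{y}: index = {idx:.3f}")
for y, g in zip(years[1:], growth):
    print(f"{y}: annual growth = {g:.2%}")
```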

The total fossil carbon is the end result of the other factors, which drive emissions. In the reference case it grows only modestly from 2025 to 2060 (when the scenarios end). This was unexpected to me, and it suggests an area of fruitful inquiry (and comparison to other reference cases). I would have expected higher growth in emissions in a reference case.

Population doesn’t vary at all between the reference and intervention scenarios, which is commonplace for such projections. Population is not seen as a lever for climate policy except in rare cases, mainly for ethical reasons. There may be policies (like educating and empowering women) that we should pursue for other reasons, but these are almost never treated as climate policies (and that’s appropriate, in my view).

Another observation about population emerges from these data. Projected population growth to 2060 is much slower than historical trends. This result mainly reflects long-term changes that almost all demographers agree are underway, and this picture of slowing population growth is almost universal in long-run energy scenarios. Unlike the 1971 to 2016 period, when population was responsible for half of the growth in energy sector GHGs, this driver will be far less important to emissions in the future.

[Figure 1: Energy sector factors dashboard for the B2DS scenario]

Download higher resolution version of Energy Sector Factors for B2DS

Gross World Product (GWP) is another key driver, and that term is projected to increase by more than a factor of two by 2060 in both the reference and intervention cases.

Final energy (e.g., energy consumption measured at the building meter or the customer’s gas tank) is projected to grow modestly in the reference case and decline modestly in the intervention case. Same for primary energy. Both grow much more slowly than historical trends, which is another interesting area of investigation.

Fossil primary energy is roughly constant in the reference case and declines substantially in the intervention case. The same is true for total fossil carbon and net fossil carbon. Note the green line in the last column for the top two rows, where we plot the contribution of biomass CCS to net emissions reductions in the intervention case. Though these net emissions savings are often counted outside of the energy sector, they are linked to the energy sector and it’s useful to show their magnitude here for comparison.

The last row of the dashboard shows annual rates of change, which reveal some interesting trends and suggest further investigations. Population grows at a modest and mostly steady rate after 2030. GWP growth slows substantially for both the reference and intervention cases from 2025 onwards. Why should that be?

Final energy in the reference case shows modest but declining annual growth rates, while the intervention case averages about zero growth over the analysis period. It’s not clear why final energy use should grow in the later years of the projection.

Primary energy growth rates decline for the reference case and increase for the intervention case. This means that there are more conversion losses in the energy system over time in the intervention case (because final energy growth rates are mostly negative during the forecast period for the intervention case).

Fossil primary energy growth is modest over the reference case, but strongly negative (about -3%/year) in the intervention case for the first couple of decades of the analysis period. Then the negative growth moderates, for reasons that are unclear. Annual growth rates for total fossil carbon and net fossil carbon show the same “V” shape as fossil primary energy, and that’s a prime area of investigation. Why, in an aggressive mitigation case, would mitigation efforts let up at the end? It’s possible that the last bits of mitigation would get harder, but aggressive mitigation would drive costs down fast, and it’s an open question which of these factors would prevail in an aggressive mitigation case.

Discussion of Figure 2 (Ratios)

The 2nd graph below shows the expanded Kaya identity ratios. Population is the same, but all the other columns show ratios from the 2nd equation above. Population and wealth per person (the first two terms in the Kaya identity) are the biggest drivers of emissions in the reference case, while the energy intensity of economic activity declines to offset some of the growth in the first two terms.

[Figure 2: Energy sector ratios dashboard for the B2DS scenario]

Download higher resolution version of Energy Sector Ratios for B2DS

The ratio of final energy to GWP tracks trends since 1995 for the reference case, and declines more rapidly in the intervention case. Why the rate of decline should be so rapid in the early years and then slow to about -1% per year in later years is a question worth asking the modelers.

As expected from the discussion above, the energy supply loss factor suggests losses are roughly constant in the reference case and grow in the intervention case. The fossil fuel fraction declines substantially over the analysis period, as does the carbon intensity of fossil energy supply. There’s also a step change in the rates of change for the carbon intensity of fossil energy supply in the final years of the forecast, and it would be worth asking the modelers why this comes about.

The last column shows the extent of carbon sequestration as well as carbon sequestration from biomass. This column is measured as a fraction of total fossil carbon emitted, so some of the drop in this ratio is associated with declining absolute amounts of fossil carbon over time. Nevertheless, this graph indicates substantial use of carbon sequestration (both conventional and biomass related) in this scenario.

This example illustrates the use of our decomposition dashboards for the 2017 IEA Beyond 2 Degrees scenario (B2DS).  We will do a similar exercise for the 2019 WEO results that are soon to be released.

References

1. Koomey, Jonathan, Zachary Schmidt, Holmes Hummel, and John Weyant. 2019. “Inside the Black Box:  Understanding Key Drivers of Global Emission Scenarios.” Environmental Modeling and Software. vol. 111, no. 1. January. pp. 268-281. [https://www.sciencedirect.com/science/article/pii/S1364815218300793]

A look at historical global trends in energy and emissions

As part of our work decomposing growth in greenhouse gas emissions into its key factors, published in 2019 [1], we delved into historical data to create benchmarks against which trends in scenario projections could be compared. For our historical trends we relied on the long-term data in the Primary, Final, and Useful (PFU) energy database from IIASA. That data source goes back many decades, which makes it unique among such data sources.

Before our article was published we examined comparable historical energy balance data from the International Energy Agency to see what it could tell us. The IEA global energy balances don’t go back as far as the PFU data, but are more detailed in some ways. This post describes the high level results from that review.

First, let’s look at an equation known as the Kaya Identity, which describes fossil carbon emissions as the product of four terms: Population, GDP/person (wealth), Primary Energy/GDP, and Carbon dioxide emissions/primary energy.

CO2 emissions = Population × (GDP/Population) × (Primary Energy/GDP) × (CO2 emissions/Primary Energy)

Over time, analysts have realized that this four-factor identity collapses some important information. That’s why, in our 2019 article, we moved to the expanded Kaya identity, with several more terms:

C_FossilFuels = P × (GWP/P) × (FE/GWP) × (PE/FE) × (PEFF/PE) × (TFC/PEFF) × (NFC/TFC)

The components of this identity are as follows:

• C_FossilFuels represents carbon dioxide (CO2) emissions from fossil fuels combusted in the energy sector,

• P is population,

• GWP is gross world product (measured consistently using Purchasing Power Parity here),

• FE is final energy,

• PE is total primary energy, calculated using the direct equivalent (DEq) method (electricity from non-combustion resources is measured in primary energy terms as the heat value of the electricity, to a first approximation),

• PEFF is primary energy associated with fossil fuels,

• TFC is total fossil CO2 emitted by the primary energy resource mix, and

• NFC is net fossil CO2 emitted to the atmosphere after accounting for fossil sequestration.

For historical data, there is no sequestration of carbon dioxide emissions, so the last term is dropped in our graphs below.

Note that this identity applies only to carbon dioxide emissions from the energy sector. We use an additional additive dashboard for future scenarios to describe industrial process emissions, land use changes, and effects of other greenhouse gases, but we haven’t yet compiled those additional data for historical analysis and we only present the graphs for energy sector total fossil carbon dioxide emissions here.

The first graph is what we call our graph of key factors, from the indented list above. In the first row we show each term in its raw form. The second row shows indices with 1971 = 1.0. And the last row shows the annual rate of change in each term.

The total fossil carbon is the end result of the other factors, which drive emissions. It grows by about a factor of two from 1971 to 2016.

[Figure: Energy sector factors, 1971 to 2016]

Download higher resolution version of Energy Sector Factors

The 2nd graph below shows the expanded Kaya identity ratios. Population is the same, but all the other columns show ratios from the 2nd equation above. Population and wealth per person (the first two terms in the Kaya identity) are the biggest drivers of emissions, while the energy intensity of economic activity declines to offset some of the growth in the first two terms. The other terms don’t show much change over the past 45 years.

Quantitatively, population and GWP per person both roughly double, while energy intensity of economic activity drops by half, with other factors roughly constant. That is consistent with Total Fossil Carbon increasing by a factor of two over this period.
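A quick back-of-the-envelope check of that multiplicative consistency, using the rounded factors just described (illustration only):

```python
# Rough consistency check of the historical decomposition (rounded factors):
# population ~x2, GWP per person ~x2, energy intensity of GWP ~x0.5,
# remaining ratios roughly unchanged (~x1.0) over 1971-2016.
population_growth = 2.0
wealth_growth = 2.0
energy_intensity_change = 0.5
other_factors = 1.0

total_fossil_carbon_growth = (population_growth * wealth_growth *
                              energy_intensity_change * other_factors)
print(f"Implied growth in total fossil carbon: x{total_fossil_carbon_growth:.1f}")
# -> x2.0, consistent with the roughly twofold increase from 1971 to 2016
```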

[Figure: Energy sector ratios, 1971 to 2016]

Download higher resolution version of Energy Sector Ratios

These graphs are a handy summary of key historical data from IEA. If you want to see the longer-term trends from IIASA’s PFU data, please email me and I’ll send you a copy of our 2019 article, which has those graphs. I’m happy to share the spreadsheets and graphs with those interested.

References

1. Koomey, Jonathan, Zachary Schmidt, Holmes Hummel, and John Weyant. 2019. “Inside the Black Box:  Understanding Key Drivers of Global Emission Scenarios.” Environmental Modeling and Software. vol. 111, no. 1. January. pp. 268-281. [https://www.sciencedirect.com/science/article/pii/S1364815218300793]

A quick investigation of solar PV projections in the International Energy Agency’s World Energy Outlook

My colleague @AukeHoekstra (on Twitter) has for several years produced a graph of the International Energy Agency’s (IEA’s) projections of global photovoltaic (PV) installations. The graph has become iconic. Auke documents his methods here.

[Figure: IEA World Energy Outlook projections of annual PV installations compared with actual installations, by Auke Hoekstra]

The graph shows that IEA’s projections in their “New Policies Scenario” indicate that annual installations of PVs will stay constant at current historical levels. Every year, as actual shipments grow rapidly, IEA ramps up its starting point to reflect historical data, but never seems to adjust the projection’s general trend after the first year of the projection.

This is a puzzling graph to those of us who understand historical technology trends for mass produced products like PVs. In this post, I’ll explore briefly why I think so.

First, it’s important to understand IEA’s terminology for their annual World Energy Outlook (WEO). Their “New Policies Scenario” is one in which current policies continue and then are renewed when they would otherwise have expired. It’s a way to characterize a more or less constant policy environment. They also show a “Current Policies Scenario”, in which existing policies remain in force until their expiration date and then disappear, and a “Sustainable Development Scenario”, in which more aggressive policies are assumed to be implemented than in the New Policies Scenario.

I asked my colleague Zach to plot PV installations for these three scenarios for the WEO 2017 and 2018. The graph below shows the results.

The difference between the Current Policies Scenario and the New Policies Scenario is the expected effect of renewing current policies when they would have otherwise expired. In the 2017 WEO, there’s almost a doubling in annual installations by 2040 in going from Current to New Policies. It’s more like a 50% increase for the 2018 WEO. In both cases, there’s about a doubling in annual installations by 2040 to go from New Policies to the Sustainable Development scenario.

The lesson I take from this is that WEO assumes that policy is the main (and perhaps sole) driver of penetration of renewables. This is odd for those who understand learning rates for mass produced technologies. When the cumulative production of a mass produced device doubles, its cost per unit declines by a more or less predictable amount. For PVs, that cost decline is 20-25% per doubling of cumulative production historically.
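To illustrate the experience-curve arithmetic behind that statement, here is a small sketch. The 20% learning rate is taken from the range above; the starting cost and cumulative production figures are made-up assumptions, not IEA or market data:

```python
# Sketch of an experience (learning) curve: cost falls by a fixed
# fraction for every doubling of cumulative production.
# All numbers are illustrative assumptions, not actual PV market data.
import math

learning_rate = 0.20            # 20% cost decline per doubling (PVs: ~20-25% historically)
initial_cost = 1.00             # $/W at the initial cumulative production (assumed)
initial_cumulative = 100.0      # GW of cumulative production at that cost (assumed)

def cost_at(cumulative_gw):
    """Cost per watt once cumulative production reaches cumulative_gw."""
    doublings = math.log2(cumulative_gw / initial_cumulative)
    return initial_cost * (1.0 - learning_rate) ** doublings

# Even flat annual installations keep cumulative production growing:
# e.g., 100 GW/year for 20 years adds 2000 GW on top of the initial 100 GW.
for cumulative in (100, 200, 400, 800, 1600, 2100):
    print(f"{cumulative:5d} GW cumulative -> ${cost_at(cumulative):.2f}/W")
```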

Why wouldn’t annual installations of PVs accelerate if they stayed in the 100 GW range for two decades? Cumulative production would keep growing and cost per unit would come down significantly in such a scenario, so there’s a real inconsistency here that needs further investigation.

It’s also counterintuitive that exponential growth in annual sales of PVs would immediately be followed by zero growth in annual sales forever more. The idea, I think, is that the growth is policy driven, so that if you don’t accelerate the policies then there won’t be any more growth. But if historical growth is solely policy driven, why wouldn’t that growth continue as it has for the past decade if you maintain current policies? It’s another contradiction in the logic of the projections.

One possible explanation is that IEA’s model may have arbitrary internal constraints that prevent variable renewables from penetrating the market beyond a certain fraction, meant to reflect the complexities of integrating these technologies into current electricity grids. It is not clear why these constraints should kick in immediately once a scenario starts. They clearly haven’t been affecting growth much so far!

Unfortunately, those constraints are rarely reflective of actual constraints in the real world. Most models have such constraints, but they say more about the modelers’ limited understanding of power systems and renewables than they do about actual constraints.  We will increasingly need to examine and abandon those arbitrary modeling constraints as the penetration of variable renewables increases.

For those who want a detailed historical example of arbitrary constraints in a widely used energy model, see our Lawrence Berkeley National Laboratory (LBNL) report on wind energy in the AEO99 version of the Energy Information Administration’s National Energy Modeling System (NEMS) [1]. As I recall, we found three levels of arbitrary penetration constraints in NEMS that prevented wind adoption even when our scenarios included high carbon taxes.

Something is clearly amiss with IEA’s projections for PV adoption, and I would be very surprised if similar issues don’t exist for wind and electricity storage. In a time of rapid technological change, it isn’t wise to rely on things staying static. The only thing constant is change!

References

1. Osborn, Julie, Frances Wood, Cooper Richey, Sandy Sanders, Walter Short, and Jonathan G. Koomey. 2001. A Sensitivity Analysis of the Treatment of Wind Energy in the AEO99 Version of NEMS. Berkeley, CA: Ernest Orlando Lawrence Berkeley National Laboratory and the National Renewable Energy Laboratory. LBNL-44070. January. [https://emp.lbl.gov/publications/sensitivity-analysis-treatment-wind]

The importance of standardized utility rate data for energy innovation

In response to a recent Twitter thread, I dug up an idea I proposed more than a decade ago about requiring utilities to release their rate structures in a standardized, structured electronic format. I originally proposed this in my testimony to the Joint Economic Committee of the United States Congress on July 30, 2008 (archived webcast).

Here is the key paragraph (in the context of ways information technology (IT) can promote efficiency):

IT also helps users manage data more effectively, particularly when data are released in a standardized format. For example, electric utility rates, which are now almost exclusively printed on paper, are difficult to manage for large companies with facilities in many states. The rates are complicated and they vary state-by-state and over time in unpredictable ways. If the federal government were to promote the development of a standardized electronic format for utility rates it would allow greater efficiencies in the design and energy management of facilities owned by multi-state and multi-national companies. The Lawrence Berkeley National Laboratory tariff analysis project made a first pass at creating a database of such tariffs manually <https://energyanalysis.lbl.gov/publications/tariff-analysis-project-database-and>, but that’s a far cry from having such data released and updated automatically by each utility. A nice side effect of such standardization would be that web-based energy analysis tools could more easily evaluate utility bills for residential and smaller commercial customers as well.

And here’s the bullet point recommending action:

Second, the U.S. Department of Energy and the Federal Energy Regulatory Commission should be asked to assess the benefits and costs of promoting standardized electronic formats for utility rates.

Here’s how I described the benefits of this idea in a summary I wrote after the hearing:

The benefits of enabling such customer comparisons would be substantial.  Utility customers would save billions by choosing the tariffs most beneficial to them.  Utilities would face new pressure to rationalize their tariffs and align them with actual costs. The implications of discriminatory tariffs that inhibit adoption of new electricity technologies would become immediately apparent, and thus adoption of alternative generation and efficiency technologies would be accelerated. Finally, non-utility companies devoted to offering energy services at lowest total cost would gain a powerful new tool to help their customers.

Senator Jeff Bingaman approached me after the hearing about this idea and put me in touch with his staff person in charge of these issues (Alicia Jackson). I wrote up a few pages describing the idea and how to move forward. Ms. Jackson eventually drafted legislative language that made its way into one of the big energy policy bills then being considered.

Senator Bingaman is not in the Senate anymore and the big bill was never passed, but I remain convinced that this is an idea that could make a real difference. Key links are below.

My short writeup of the idea: “A proposal for standardized metadata formats for retail utility tariffs that will promote economic productivity, energy efficiency, and technological innovation”. PDF.  Microsoft Word 2016.

The 1 page draft legislative language in its most advanced form.  PDF.  Microsoft Word 2016.

My 2017 book, Turning Numbers into Knowledge: Mastering the Art of Problem Solving, which talks about the uses and limitations of structured data for analysis and sharing of data (Chapter 39).

The Open EI project has an updated database of utility rates, but like all efforts before it, this database was created by scraping rate data from utility PDFs.

Whenever sanity returns to DC someone should take up this idea and run with it. I’m happy to share my thoughts with anyone who wants to take on this challenge.

A vignette of technology change for lighting

In August 2011, I summarized our experience in retrofitting old downlights with new LEDs. We installed fifty of these beauties at $50/fixture (including free installation because they were so easy for the contractor to install). These avoided having to spend $20 to replace each dingy old fixture, so the economics were pretty good to start with.

One of the fixtures failed quickly, and it was swapped out for a new one. Since 2011 (i.e. 8 years) I’ve had to buy four more replacements (including the one I just purchased for my home office, which is used much more than other fixtures in the house).

The cool thing about LEDs is that they are on the electronics learning curve rather than the old industrial sector learning curve, so costs come down quickly. A better version of the device that cost $50 in 2011 now costs $22.36 on Amazon. The new version weighs 206 g instead of 484 grams for the original one (a 57% reduction), and it’s significantly smaller in volume. It looks like the body has been constructed of fewer pieces, which may explain part of the cost reduction.

Progress in electronics continues to amaze and astonish, but there’s a bigger lesson: energy efficiency is a renewable resource. It gets cheaper and better over time!

My annual appearance on the Energy Transition Show podcast was just posted


As I’ve done for the past couple of years, I appeared again on the anniversary episode of Chris Nelder’s Energy Transition Show podcast. Here’s how he summarized the episode:

In this anniversary episode, we welcome back Jonathan Koomey to talk about some of the interesting developments and raucous debates we have seen over the past year.

A favorite quotation of mine from the episode:

The rule is we need to reduce emissions as much as possible as fast as possible starting immediately. That’s it. You don’t need to see any more studies.

And another favorite quotation:

We can envision the world we want to create and by our choices, we can create that world.

One of the best things about the Energy Transition Show is that it brings in real experts to talk about complicated issues. It’s the closest thing you can get to a primary source on the key issues of the day in energy transitions, short of reading the actual articles. Subscribe if you can!

https://xenetwork.org/ets/episodes/episode-104-4-year-anniversary-show/

A wonderful summary of the climate problem

Writing in New York Magazine, @EricLevitz explains the real issues with the terrible article by Franzen in the New Yorker (”Jonathan Franzen’s Climate Pessimism Is Justified. His Fatalism Is Not”). As part of his article, he summarizes with accuracy and nuance the true challenge we face regarding climate in this long but wonderful paragraph:

“We have already burned an unsafe amount of carbon, and nothing we do now is likely to prevent the climate from growing evermore inhospitable for the rest of our lives. We cannot know with certainty quite how much ecological devastation we’ve already bought ourselves, or exactly how much carbon we can burn without triggering mass starvation, civilizational collapse, or human extinction. Those 1.5- and two-degree warming targets you’ve heard so much about are informed by science, but they’re still inescapably arbitrary. Keeping warming below 1.5 degrees won’t be sufficient to prevent wrenching ecological disruptions (some of which will be tantamount to “end of the world” for those most severely afflicted). And at the rate we’re going, we’re almost certainly not going to keep warming below even two degrees, anyway. A better climate (than our current one) is not possible; at least, not for us, or our children, or their children. But the faster we decarbonize the global economy, the better our chances of sparing the world’s most vulnerable communities from near-term destruction — and our civilization from medium-term collapse — will be.”

Read it carefully. It’s the real deal.

Addendum: We can still keep warming below 2C (or even 1.5C) but it will require us to reduce emissions as fast as possible and as much as possible, starting immediately. Whether we will or not is up to us.

Addendum 2: Dr. Kate Marvel concisely nails it here, writing in Scientific American (“Shut Up, Franzen”): “Climate change is real and things will get worse—but because we understand the driver of potential doom, it’s a choice, not a foregone conclusion”

My new report: Estimating Bitcoin Electricity Use: A Beginner’s Guide

My new report on Bitcoin electricity use is out this morning:

Koomey, Jonathan. 2019. Estimating Bitcoin Electricity Use: A Beginner’s Guide. Washington, DC: Coin Center. May 7. [https://coincenter.org/entry/bitcoin-electricity]

Here’s the abstract for the report:

Abstract

The rapid emergence of Bitcoin and other cryptocurrencies has taken many in the energy sector by surprise. This report summarizes complexities and pitfalls in analyzing the electricity demand of new information technology, focusing on Bitcoin, the most widely used cryptocurrency. It also gives best practices for analyses in this space, and reviews recent estimates in light of those best practices. Things change rapidly for cryptocurrency, so special care (such as including an exact date for each estimate) is needed in describing the results of such analyses.

The most reliable estimates of Bitcoin electricity use for June 30, 2018 total about 0.2% of global electricity consumption. Because of the collapse in Bitcoin prices in the latter half of 2018, some estimates indicate that this total has begun to decline, though nobody knows if that trend will continue.

Future studies of cryptocurrency electricity use can avoid the pitfalls identified in this report by following some simple rules, which the report describes in more detail:

•  Report estimates to the day

•  Provide complete and accurate documentation

•  Avoid guesses and rough estimates about underlying data

•  Collect bottom-up measured data in the field for both components and systems

•  Properly address locational variations in siting of mining facilities

•  Explicitly and completely assess uncertainties

•  Avoid extrapolating into the future

Studies that don’t follow these best practices should be viewed with skepticism.

The key graph (Figure 2) is here:

There are three current credible estimates for Bitcoin electricity use  (Vranken, Bevand, and Krause and Tolaymat). Their estimates are consistent with each other and are based on transparent and sensible data and assumptions. The other three estimates I reviewed (Digiconomist, Mora et al, and O’Dwyer and Malone) all have serious issues.

The full reference is:

Koomey, Jonathan. 2019. Estimating Bitcoin Electricity Use: A Beginner’s Guide. Washington, DC: Coin Center. May 7. [https://coincenter.org/entry/bitcoin-electricity]

Email me if you have questions.

An Update On Trends In US Primary Energy, Electricity, And Inflation-Adjusted GDP Through 2018

Back in 2015, Professor Richard Hirsh (Virginia Tech) and I published the following article in The Electricity Journal, documenting trends in US primary energy, electricity, and real (inflation-adjusted) Gross Domestic Product (GDP) through 2014:

Hirsh, Richard F., and Jonathan G. Koomey. 2015. “Electricity Consumption and Economic Growth: A New Relationship with Significant Consequences?” The Electricity Journal. vol. 28, no. 9. November. pp. 72-84. [http://www.sciencedirect.com/science/article/pii/S1040619015002067]

Every year since, my colleague Zach Schmidt and I have updated the trend numbers for the US using the latest energy and electricity data from the US Energy Information Administration (EIA). The GDP numbers are preliminary, from the US Bureau of Economic Analysis (released 3/28/2019). This short blog post gives the three key graphs from that study updated to 2018, and makes a few observations.

Figure 1 shows GDP, primary energy, and electricity consumption through 2018. From 2017 to 2018, GDP grew a little more slowly and primary energy and electricity grew a little more rapidly than in recent years, resulting in slight changes in the slope of those curves. The overall picture, though, hasn’t changed that much, and we’ll have to see what happens in subsequent years. Electricity consumption and primary energy consumption have been flat for about a decade and two decades (respectively).

Figure 2 shows the ratio of primary energy and electricity consumption to GDP. The trends there are pretty clear as well. Primary energy use per unit of GDP has been declining since the early 1970s, while the ratio of electricity use to GDP has been declining since the mid 1990s. Before the 1970s, electricity intensity of economic activity was increasing, and from the early 1970s to the mid 1990s, it was roughly constant.

Figure 3 (which was Figure 4 in the Hirsh and Koomey article) shows the annual change in electricity consumption going back to 1950. Growth in total US electricity consumption has just about stopped in the past decade, but there’s significant year-to-year variation. Flat consumption poses big challenges to utilities, whose business models depend on continued growth to increase profits (unless they are in states like California, where the regulators have decoupled electricity use from profits).

Email me at jon@koomey.com if you’d like a copy of the 2015 article or the latest spreadsheet with graphs. If you want to use these graphs, you are free to do so as long as you don’t change the data and you credit the work as follows:

This graph is an updated version of one that appeared in Hirsh and Koomey (2015), using data from the US Energy Information Administration and the US Bureau of Economic Analysis.

Hirsh, Richard F., and Jonathan G. Koomey. 2015. “Electricity Consumption and Economic Growth: A New Relationship with Significant Consequences?" The Electricity Journal. vol. 28, no. 9. November. pp. 72-84. [http://www.sciencedirect.com/science/article/pii/S1040619015002067]

Review of Turning Numbers into Knowledge from the Midwest Book Review

The Midwest Book Review just released a short review of Turning Numbers into Knowledge: Mastering the Art of Problem Solving, 3rd edition. Here’s the money quote:

Exceptionally well written, impressively organized, and accessibly presented, “Turning Numbers into Knowledge: Mastering the Art of Problem Solving” is an ideal textbook for college and university library Business & Economics collections and supplemental studies curriculums.

Read the full short review here.

For more details about the book, go to http://www.numbersintoknowledge.com.

Talking Sense About Bitcoin Electricity Use

There’s been a lot written over the past couple of years about the electricity used to mine Bitcoin, the most prominent of many cryptocurrencies. I can best summarize the credibility and tone of that coverage with the following headline from Newsweek:

Bitcoin Mining On Track To Consume All Of The World’s Energy By 2020

This kind of headline makes me cringe, because it’s

1) an invalid extrapolation of oversimplified calculations that are no better than guesses that may mislead investors into taking rash actions; and (less importantly)

2) based on a conflation of energy with electricity: Bitcoin mining uses electricity, not (usually) other forms of energy.

I’ve been studying the electricity used by computing equipment for more than two decades, so I have some things to say about this latest outbreak of people taking leave of their critical faculties.

I’m also the author of an award winning book on critical thinking skills (among many other books and articles), recently out in its 3rd edition (Turning Numbers into Knowledge: Mastering the Art of Problem Solving, October 1, 2017).

In this post, I lay out my thoughts about how sensible people can avoid being misled by the hype machine on Bitcoin electricity use. I created a blog page that lists the recent links I’ve been able to find on this topic, which you can find here.

I’m making no judgment here about whether Bitcoin is something that society should be doing, just focusing on the narrow question of what we know about the estimates of electricity used by Bitcoin currently making the rounds.

PRELIMINARIES

Bitcoin is what’s called a “cryptocurrency”, which is a digital medium of exchange and a store of value under decentralized control (unlike traditional currencies that are created and regulated by governments). The technology underlying Bitcoin and other cryptocurrencies is what’s called “block chain”, which is a decentralized way to establish and maintain trust among people interacting with each other.

Bitcoin is a public, permissionless block chain that uses what’s called “proof of work” to establish this trust. There are other types of block chains, and in some cases these can be designed to be less computationally and energy intensive than Bitcoin. I’ll explore the implications of this fact below.

There are at least a couple of credible academic studies of Bitcoin electricity use (O’Dwyer and Malone 2014 and Vranken 2017), but things change so fast in this field that estimates become outdated in weeks or months.

BRANDOLINI’S LAW GOVERNS

In 2014, an Italian software developer named Alberto Brandolini made the following trenchant observation, which has come to be known as Brandolini’s law, or the BS asymmetry principle (where BS stands for bull manure in polite company):

The amount of energy needed to refute BS is an order of magnitude bigger than to produce it.

This “law” encapsulates one important reality of living in the digital age: it’s easier to create ostensibly accurate but incorrect numbers than it is to demonstrate why such numbers are invalid. That is the nature of careful analysis, but people have come to expect instant answers on new topics, even when there isn’t any real data or research. That’s the case for Bitcoin electricity use, which has emerged as a topic of wide public interest only in the past couple of years.

My corollary to Brandolini’s law is that

In fast changing fields, like information technology, BS refutations lag BS production more strongly than in fields with less rapid change.

This corollary places special obligations on media producers and consumers (Koomey 2014).

KNOW YOUR HISTORY

Information technology changes at a much more rapid pace than many other technologies (Nordhaus 2007, Koomey et al. 2011, Koomey et al. 2013, Koomey and Naffziger 2016). Unfortunately, innumerate observers love to mindlessly extrapolate rapid exponential growth rates to create “clickbait” headlines like the one in Newsweek I cited above. There’s also a tendency to assume that because information technology is economically important that it also must use a lot of electricity, but that’s just not the case.

Here are a couple of historical examples.

In 2003, writing in IEEE Spectrum, I cited an anecdote from Andrew Odlyzko, formerly at AT&T, now at the University of Minnesota (Koomey 2003). Odlyzko documented the genesis of a familiar “factoid” from the mid-1990s (“the Internet doubles every 100 days”) that led to a substantial overinvestment in the fiber optic network. This misconception

was based on data reported by UUNet, the first commercial Internet service provider, in the mid-1990s. In those days, at least for a brief time, such growth rates actually prevailed. But for almost all the rest of the 1990s, data flows were doubling only every year or so, as documented by Kerry Coffman and Andrew Odlyzko of AT&T Corp. and reported by The Wall Street Journal in 2002.

The difference between the two growth rates is huge: because of the magic of compounding, a doubling of traffic every 100 days translates into growth of about 1000 percent a year, rather than the 100 percent-a-year growth represented by an annual doubling.

During the dot-com boom, industry leaders, investment analysts, and the popular press repeated that gross overestimate of the growth of Internet data flow. The bogus figures helped justify dubious investments in network infrastructure and contributed greatly to overcapacity in the telecommunications and networking industries. Only a tiny fraction of this network capacity is now [circa 2003] being used, and billions of dollars [were] squandered because of blind acceptance of flawed conventional wisdom.
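A quick check of the compounding arithmetic in the quoted passage, as a sketch (the only inputs are the two doubling periods mentioned above):

```python
# Compounding check: doubling every 100 days vs. doubling once a year.
doubling_every_100_days = 2 ** (365 / 100)   # ~12.6x in one year, i.e. roughly 1000%+ growth
annual_doubling = 2 ** 1                     # 2x in one year, i.e. 100% growth

print(f"Doubling every 100 days: {doubling_every_100_days:.1f}x per year "
      f"(~{(doubling_every_100_days - 1):.0%} annual growth)")
print(f"Annual doubling: {annual_doubling}x per year (100% annual growth)")
```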

Misleading factoids about information technology electricity use emerged from coal-industry funded studies around the year 2000, at the time of the first dot com bubble and the California electricity crisis. They popped up again, from the same authors and funders, in 2005 and 2013. The claims were reported in every major newspaper, cited by investment banking reports and politicians of both political parties, and avidly promoted by people and companies who should have known better.

The authors claimed that the Internet used 8% of US electricity in 2000, that all computers (including the Internet) used 13%, and that the total would grow to 50% of US electricity in 10 years. The authors also claimed that a wireless Palm VII used as much electricity as a refrigerator (later updated to an iPhone using as much electricity as TWO refrigerators) for the networking to support its data flows.

All of these claims turned out to be bunk, but it took years of creating peer reviewed research to prove it. We found that the Internet (defined as those authors defined it) used only 1% of US electricity in 2000, all computers used 3%, the total would never grow to half of all electricity use, and that the factoid about the wireless Palm VII overestimated networking electricity by a factor of 2000. In virtually every case, we found that those authors had overestimated electricity used by computing equipment. For a summary of these claims and a review of their effects on investors, see the Epilogue to the 3rd Edition of Turning Numbers into Knowledge.

These examples came to mind when I first heard about recent claims about Bitcoin electricity use, but the Bitcoin issue has some unique features.

BEWARE OF HAND WAVING ESTIMATES

The web site Digiconomist is the source for many media estimates of Bitcoin electricity use. The site is the creation of Alex de Vries, an economist in the Netherlands, and he summarizes his estimates in the form of his “Bitcoin Energy Consumption Index”. Figure 1 is a screen shot of that index for roughly the past 16 months, showing TWh (terawatt-hours, or billion kWh) on the Y-axis and time on the X-axis:

One important feature of cryptocurrency electricity use is that you need to specify the exact day to which your estimate applies. That’s because Bitcoin mining is increasing rapidly over time, so generating a monthly or annual average isn’t accurate enough. Such growth makes estimates from only a couple of years back less useful for estimating current day Bitcoin electricity use than you might expect.

De Vries recently published a commentary article in the journal Joule, in which he summarized the method for generating the index. De Vries is able to create a day-by-day time series because he generates the index using the following data:

1) the price of bitcoin over some time period (it was about $6300 as of today, October 29, 2018);

2) the number of Bitcoins mined over that same time period;

3) an assumption about the percentage of the bitcoin price that is electricity (60%), which would include both direct electricity use for bitcoin mining rigs and the supporting infrastructure electricity (cooling, fans, pumps, power conversion, etc.); and

4) an assumption about the price of electricity (5¢ US/kWh).

Figure 1: Digiconomist Bitcoin Energy Consumption Index


The price of bitcoin over some time period times the number of Bitcoins mined in that period (plus transaction fees) gives total revenues, which are multiplied by 60% to get electricity expenditures. Those expenditures are divided by the assumed electricity price to get electricity consumption of the network in TWh.
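Here is a minimal sketch of that top-down calculation. The electricity share and electricity price are the assumptions listed above, and the bitcoin price is the example value cited earlier; the daily mining volume is a made-up placeholder for illustration only:

```python
# Sketch of the Digiconomist-style top-down estimate described above.
# Only the 60% electricity share, the 5 cents/kWh price, and the example
# bitcoin price come from the post; the mining volume is illustrative.

bitcoin_price_usd = 6300.0          # $/BTC (example value cited in the post)
btc_mined_per_day = 1800.0          # illustrative assumption, not measured data
fees_usd_per_day = 0.0              # transaction fees ignored for simplicity

electricity_share = 0.60            # share of mining revenue spent on electricity
electricity_price = 0.05            # $/kWh

daily_revenue = bitcoin_price_usd * btc_mined_per_day + fees_usd_per_day
daily_electricity_cost = daily_revenue * electricity_share
daily_kwh = daily_electricity_cost / electricity_price

annualized_twh = daily_kwh * 365 / 1e9   # kWh -> TWh, scaled to a year
print(f"Implied electricity use: {daily_kwh:.3e} kWh/day "
      f"(~{annualized_twh:.1f} TWh/yr if sustained)")
```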

The calculation has the advantage of being tractable using available data, but it relies on assumptions about economic parameters and Bitcoin miner behavior (e.g., optimal allocation of mining resources) to estimate physical characteristics of a technological system.

Calculations like these set off alarm bells for me. Economic parameters (like Bitcoin prices) are volatile, and are at best imperfectly correlated with electricity use. In real economic systems, there may be a tendency towards optimality, but they often don’t get close because of transaction costs, cognitive failures, and other imperfections in people and institutions. In addition, the assumptions used by Digiconomist are so simplistic as to make any credible analyst wary.

DOING IT RIGHT

The more direct way to estimate electricity used by equipment is to understand the characteristics of the underlying computing, cooling, and power distribution equipment using field data (as we did for our most recent data center report). Unfortunately, aside from a few anecdotes in the trade and popular press, there’s little information on Bitcoin mining operations.

The servers used for Bitcoin mining are nowadays customized for just that application, and they bear little resemblance to standard server hardware installed in corporate or hyperscale data centers. Because these custom servers have not been tracked by the big data providers (like IDC and Gartner), we have only guesses/estimates as to how many are being built. We know almost nothing about where they’re being installed. And our knowledge of the power systems and cooling equipment in these facilities is virtually nil.

These data issues make it hard to create accurate estimates of electricity used by Bitcoin mining in the aggregate. This is probably why de Vries used his simplified economic approach to generate his index, but that doesn’t mean this method is a reliable indicator of Bitcoin electricity use.

Marc Bevand, an “analyst and crypto[currency] entrepreneur”, critiqued the Digiconomist estimates on February 1, 2017. Most importantly, he expressed concern over the arbitrary choice of 60% of revenues being electricity, and showed a range in his data (based on real equipment) of between 6.3% and 38.6%. Using a more refined approach and more detailed information about the characteristics of Bitcoin mining rigs over time, he estimated Bitcoin electricity use for February 26, 2017, July 28, 2017, and January 11, 2018. Figure 2 compares the Bevand and Digiconomist estimates for those three days.

Bevand did not generate day by day estimates for the intervening periods as Digiconomist does, but these three data points are enough to teach some important lessons. Both sources show growth of about fourfold between February 26, 2017 and January 11th, 2018. That’s rapid growth compared to conventional end-uses, so it’s not surprising that Bitcoin is getting more attention. The big difference is in the absolute electricity consumption. Bevand’s estimates are less than half of the Digiconomist estimates. They also are far more technically detailed and less dependent on high level arbitrary assumptions, which makes them more trustworthy, in my view.

Another important finding from this graph is the relative scale. The world used 21,200 TWh/year in 2016, the latest year for which I could find historical data, and this number increases by only a few percent a year, if history is any guide. This means that on January 11, 2018, Bitcoin accounted for about 0.1% of global electricity if Bevand’s numbers are correct.

Figure 2: Comparison of Digiconomist and Bevand Estimates of Bitcoin Electricity Use


IT’S NOT A CRISIS, BUT MORE RESEARCH IS NEEDED

According to the International Energy Agency, data centers and global networking equipment each accounted for about 1% of global electricity use in 2014, for a total of 2% (IEA 2017). In addition, growth in electricity use for data centers appears to have slowed substantially in recent years, as is also true for the US (Shehabi et al. 2016). Because of inefficiencies in current corporate data centers, little if any growth is expected for electricity use of data centers as a whole over the next couple of years.

Mining Bitcoin accounts for about 0.1% of global electricity use if Bevand’s estimates are correct, implying that total data center electricity use is about 5% bigger than what IEA estimates (Bitcoin isn’t included in those figures). Without better data, it’s hard to be any more precise than that.

Recent growth rates for Bitcoin mining are substantial, and if that growth continues at anywhere near current rates, Bitcoin electricity use will become more important soon. Continued growth in Bitcoin mining electricity use depends on the price of bitcoin, the difficulty of mining Bitcoin (which goes up as mining speed goes up), and the rate of efficiency improvements in Bitcoin mining rigs. It also depends on electricity prices, the cost of mining equipment, and the continued laissez-faire attitude many governments have about Bitcoin (which could change quickly, but no one can predict if or when that might happen).

One thing we should NOT do is recklessly extrapolate recent growth rates for Bitcoin into the future, as the Nature Climate Change article coming out today appears to do (Mora et al. 2018). I cannot emphasize enough how dangerous, irresponsible, and misleading such extrapolations can be, and no credible analyst should ever extrapolate in this manner, nor should readers of reports on this topic fall for this well-known mistake.

Another big uncertainty is that we know even less about other cryptocurrencies than about Bitcoin, so more research is clearly needed there. The electricity use for non-Bitcoin cryptocurrencies should be tallied, both because the total is important and because these other cryptocurrencies (which can be MUCH more energy efficient than Bitcoin) may represent competition to Bitcoin if the electricity demands of Bitcoin become unmanageable.

We urgently need measured data about Bitcoin mining facilities in the field, including characteristics of Bitcoin mining rigs and the power and cooling infrastructure needed to support them. The Digiconomist estimate uses very high level assumptions and data to estimate electricity use, while Bevand creates a more detailed technical estimate, but neither includes much if any field data on actual equipment in real facilities. If we’re to get a better handle on Bitcoin electricity use, such field studies and data are vital.

A colleague (Professor Eric Masanet) and I have secured a small amount of funding to investigate the implications of block chains for electricity use, and that project begins in November 2018. To do such analyses properly (as we did in 2007, 2008, 2011, 2013, and again in 2016 for US data center electricity use) takes teams of scientists and hundreds of thousands of dollars, so it’s no surprise this emergent issue hasn’t yet been carefully studied (EPA 2007, Koomey 2008, Masanet et al. 2011, Masanet et al. 2013, Shehabi et al. 2016). Policy makers naturally want answers quickly, but until more research is funded, there’s not a whole lot more we can say about this issue.

OTHER LESSONS

While I encourage everyone in the electricity sector to track Bitcoin as a potential source of new load growth, please use caution and avoid being misled by the hype. Breathless media coverage papers over the uncertainties in the underlying data, and makes it seem like Bitcoin is taking over the world, but in fact it’s likely only 0.1% of global electricity consumption, and it is unlikely to continue growing at recent historical rates.

Because the rate of change in Bitcoin electricity use can be rapid, it is critically important that no estimates of electricity use be cited or reported unless the day to which that estimate applies is also reported. It is not sufficient to report the year in which an estimate applies, because things change so fast.

If there are many mining facilities proposed in a utility service territory, I suggest that they be charged the full cost of transmission and distribution infrastructure investments up front. Bitcoin mining can disappear just as fast as it rose to prominence, and you don’t want ratepayers to be left holding the bag. I’m not predicting that outcome, just pointing it out as a possibility with real risks to utility ratepayers.

If Bitcoin electricity use becomes more important, we’re likely to face questions about whether to allocate limited zero emissions generation resources to power it. If we use it for Bitcoin, we can’t use it elsewhere, so we’ll have to choose. This concern should factor into the needed societal conversation about whether cryptocurrencies are something we should as a society encourage.

The method for ascertaining trust for Bitcoin is needlessly computationally intensive, and other cryptocurrencies have taken a different tack. Resource intensity can be reduced substantially (i.e., by orders of magnitude) with software and protocol redesign, so it’s not just a question of hardware efficiency. As use cases for cryptocurrencies emerge, we’ll need to confront the possibility that more efficient cryptocurrency designs may be a better fit than Bitcoin for a low emissions world.

CONCLUSIONS

It’s still early days for cryptocurrency and the data on its electricity use are still too poor to derive firm conclusions. The issue bears watching, but caution is indicated, particularly when large investments are involved (like for transmission lines and distribution transformers for new bitcoin mining facilities). Media observers should beware of mindless extrapolations of recent trends, as such simpleminded methods can lead to consequential mistakes. At this point (as of January 11, 2018), Bitcoin mining accounts for about 0.1% of global electricity use, but recent growth has been rapid. What happens next is up to us.

ACKNOWLEDGMENTS

Professor Harald Vranken of the Open University of the Netherlands and Jens Malmodin of Ericsson gave comments on an earlier version of this blog post. The author is responsible for the blog post itself and any errors contained herein.

REFERENCES

IEA. 2017. Digitalization and Energy. Paris, France: International Energy Agency.  [https://www.iea.org/digital/]

Koomey, Jonathan. 2003. “Sorry, Wrong Number:  Separating Fact from Fiction in the Information Age.” In IEEE Spectrum. June. pp. 11-12. [http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1203076&tag=1]

Koomey, Jonathan. 2008. “Worldwide electricity used in data centers.”  Environmental Research Letters.  vol. 3, no. 034008. September 23. [http://stacks.iop.org/1748-9326/3/034008]

Koomey, Jonathan G., Stephen Berard, Marla Sanchez, and Henry Wong. 2011. “Implications of Historical Trends in The Electrical Efficiency of Computing.”  IEEE Annals of the History of Computing. vol. 33, no. 3. July-September. pp. 46-54. [http://doi.ieeecomputersociety.org/10.1109/MAHC.2010.28]

Koomey, Jonathan G., H. Scott Matthews, and Eric Williams. 2013. “Smart Everything:  Will Intelligent Systems Reduce Resource Use?” The Annual Review of Environment and Resources.  vol. 38, October. pp. 311-343. [http://arjournals.annualreviews.org/eprint/wjniAGGzj2i9X7i3kqWx/full/10.1146/annurev-environ-021512-110549]

Koomey, Jonathan. 2014. “Separating Fact from Fiction: A Challenge For The Media.” In IEEE Consumer Electronics Magazine. January. pp. 9-11. [http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6685864]

Koomey, Jonathan, and Samuel Naffziger. 2016. “Energy efficiency of computing:  What’s next?” In Electronic Design. November 28. [http://electronicdesign.com/microprocessors/energy-efficiency-computing-what-s-next]

Masanet, Eric R., Richard E. Brown, Arman Shehabi, Jonathan G. Koomey, and Bruce Nordman. 2011. “Estimating the Energy Use and Efficiency Potential of U.S. Data Centers.”  Proceedings of the IEEE.  vol. 99, no. 8. August. [https://eta.lbl.gov/sites/all/files/publications/lbnl_version_procieee_embargoed-1-1.pdf]

Masanet, Eric, Arman Shehabi, and Jonathan Koomey. 2013. “Characteristics of Low-Carbon Data Centers.”  Nature Climate Change.  vol. 3, no. 7. July. pp. 627-630. [http://dx.doi.org/10.1038/nclimate1786]

Mora, Camilo, Randi Rollins, Katie Taladay, Michael B. Kantar, Mason K. Chock, Mio Shimada, and Erik C. Franklin. 2018. “Bitcoin emissions alone could push global warming above 2°C.” Nature Climate Change. October 29. [https://doi.org/10.1038/s41558-018-0321-8]

Nordhaus, William D. 2007. “Two Centuries of Productivity Growth in Computing.” The Journal of Economic History. vol. 67, no. 1. March. pp. 128-159. [http://www.econ.yale.edu/~nordhaus/homepage/homepage/nordhaus_computers_jeh_2007.pdf]

O’Dwyer, Karl J., and David Malone. 2014. Bitcoin Mining and its Energy Footprint. Proceedings of the ISSC 2014/CIICT 2014 Conference. Limerick, Ireland: June 26-27. [http://eprints.maynoothuniversity.ie/6009/1/DM-Bitcoin.pdf]

Shehabi, Arman, Sarah Josephine Smith, Dale A. Sartor, Richard E. Brown, Magnus Herrlin, Jonathan G. Koomey, Eric R. Masanet, Nathaniel Horner, Inês Lima Azevedo, and William Lintner. 2016. United States Data Center Energy Usage Report. Berkeley, CA: Lawrence Berkeley National Laboratory. LBNL-1005775.  June. [https://eta.lbl.gov/publications/united-states-data-center-energy]

US EPA. 2007. Report to Congress on Server and Data Center Energy Efficiency, Public Law 109-431. Prepared for the U.S. Environmental Protection Agency, ENERGY STAR Program, by Lawrence Berkeley National Laboratory. LBNL-363E.  August 2. [http://www.energystar.gov/datacenters]

Vranken, Harald. 2017. “Sustainability of bitcoin and blockchains.”  Current Opinion in Environmental Sustainability.  vol. 28, Supplement C. October. pp. 1-9. [http://www.sciencedirect.com/science/article/pii/S1877343517300015]

Our analysis of the electricity intensity of networks came out in printed form in August 2018, now with a full reference

In September 2017 I posted about our analysis of trends in the electricity intensity of network data flows, an article that first appeared online in August 2017. The journal finally placed it in a printed issue in August 2018, so I'm re-upping that post (with minor updates) to reflect the actual publication date and the complete reference (see below).

Our previous work on trends in the efficiency of computing showed that computations per kWh at peak output doubled every 1.6 years from the mid-1940s to around the year 2000, then slowed to a doubling time of 2.6 years after 2000 (Koomey et al. 2011, Koomey and Naffziger 2016). These analyses examined discrete computing devices and mainly captured the effect of progress in hardware.
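For readers who think in annual rates rather than doubling times, here's a minimal sketch (my own back-of-the-envelope conversion, not a calculation from the papers) of what those two doubling times imply:

```python
# Convert the doubling times reported in Koomey et al. (2011) and Koomey and
# Naffziger (2016) into equivalent annual improvement rates. The doubling
# times come from the papers; the conversion itself is just arithmetic.

def annual_rate(doubling_time_years):
    """Implied annual improvement rate for a given doubling time."""
    return 2 ** (1 / doubling_time_years) - 1

for era, years in [("mid-1940s to ~2000", 1.6), ("after 2000", 2.6)]:
    print(f"{era}: doubling every {years} years = {annual_rate(years):.0%} per year")
```

That works out to roughly 54% improvement per year before 2000 and about 31% per year after.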

The slowdown in peak-output efficiency growth after 2000 resulted from the end of the voltage reductions inherent in Dennard scaling (Bohr 2007, Dennard et al. 1974), which chip manufacturers had relied on to keep power use down as clock rates increased. When voltages couldn't be lowered any further, they turned to other tricks (like multiple cores), but the underlying physics kept them from continuing to improve performance and efficiency at the historical rate.
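To make the Dennard-scaling point concrete, here's a simplified sketch (my own illustration using the textbook dynamic-power relation P ∝ C·V²·f; the scale factor and numbers are illustrative, not taken from Dennard et al. 1974) of why power density stayed flat while voltages could still be reduced, and why it rises once they can't:

```python
# Relative power density after one process-scaling step of factor kappa.
# Per transistor: capacitance ~ 1/kappa, frequency ~ kappa, and voltage ~
# 1/kappa while Dennard scaling holds (or fixed once it doesn't).
# Transistor density grows by kappa**2.

def relative_power_density(kappa, voltage_scales):
    C, f = 1 / kappa, kappa
    V = 1 / kappa if voltage_scales else 1.0
    power_per_transistor = C * V ** 2 * f
    return power_per_transistor * kappa ** 2  # times transistors per unit area

kappa = 1.4  # roughly one process generation
print("Dennard era:     ", relative_power_density(kappa, voltage_scales=True))   # ~1.0, flat
print("Post-Dennard era:", relative_power_density(kappa, voltage_scales=False))  # ~kappa**2, rising
```

With voltage scaling, the per-transistor power drop exactly offsets the rise in transistor density; without it, power density climbs with each generation, which is why clock rates and peak-output efficiency could no longer improve at the historical pace.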

Unlike the literature on computing devices, the literature on the electricity intensity and efficiency of network data flows has been rife with inconsistent comparisons, unjustified assumptions, and a general lack of transparency. Our attempt to remedy these failings was published in the Journal of Industrial Ecology in August 2018 (Aslan et al. 2018). The focus is on the electricity intensity of data transfers over the core network and the access networks (like DSL and cable).

Here’s the summary of the article:

In order to understand the electricity use of Internet services, it is important to have accurate estimates for the average electricity intensity of transmitting data through the Internet (measured as kilowatt-hours per gigabyte [kWh/GB]). This study identifies representative estimates for the average electricity intensity of fixed-line Internet transmission networks over time and suggests criteria for making accurate estimates in the future. Differences in system boundary, assumptions used, and year to which the data apply significantly affect such estimates. Surprisingly, methodology used is not a major source of error, as has been suggested in the past. This article derives criteria to identify accurate estimates over time and provides a new estimate of 0.06 kWh/GB for 2015. By retroactively applying our criteria to existing studies, we were able to determine that the electricity intensity of data transmission (core and fixed-line access networks) has decreased by half approximately every 2 years since 2000 (for developed countries), a rate of change comparable to that found in the efficiency of computing more generally.

The rate of improvement for networks is actually a bit faster than for the peak-output efficiency of computing devices, but that shouldn't be surprising: aggregate improvements in data transfer speeds and total data transferred depend on progress in both hardware and software. Koomey and Naffziger (2016) and Koomey (2015) showed that other efficiency metrics can improve more rapidly than peak-output efficiency when the right tools are brought to bear on those problems.
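To put the headline numbers to work, here's a rough extrapolation (my own, using only the two figures in the abstract above: 0.06 kWh/GB in 2015 and a halving time of roughly 2 years for developed-country fixed-line networks; the paper itself only supports this trend back to 2000):

```python
# Estimated average electricity intensity of fixed-line data transmission
# (core plus access networks) implied by Aslan et al. (2018): 0.06 kWh/GB in
# 2015, halving about every 2 years since 2000. Values outside 2000-2015
# would be extrapolation, not results from the paper.

BASE_YEAR, BASE_KWH_PER_GB, HALVING_YEARS = 2015, 0.06, 2.0

def intensity_kwh_per_gb(year):
    return BASE_KWH_PER_GB * 0.5 ** ((year - BASE_YEAR) / HALVING_YEARS)

for year in (2000, 2005, 2010, 2015):
    print(year, round(intensity_kwh_per_gb(year), 2), "kWh/GB")
```

The point is less the specific values than the trajectory: each five-year step cuts the intensity by a factor of five or six.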

Email me if you’d like a copy of the article, or any of the others listed below.

References

Aslan, Joshua, Kieren Mayers, Jonathan G Koomey, and Chris France. 2018. “Electricity Intensity of Internet Data Transmission: Untangling the Estimates.” The Journal of Industrial Ecology. vol. 22, no. 4. August. pp. 785-798. [https://doi.org/10.1111/jiec.12630]

Bohr, Mark. 2007. “A 30 Year Retrospective on Dennard’s MOSFET Scaling Paper.”  IEEE SSCS Newsletter.  vol. 12, no. 1. Winter. pp. 11-13.

Dennard, Robert H., Fritz H. Gaensslen, Hwa-Nien Yu, V. Leo Rideout, Ernest Bassous, and Andre R. Leblanc. 1974. “Design of Ion-Implanted MOSFET’s with Very Small Physical Dimensions.”  IEEE Journal of Solid State Circuits.  vol. SC-9, no. 5. October. pp. 256-268.

Koomey, Jonathan G., Stephen Berard, Marla Sanchez, and Henry Wong. 2011. “Implications of Historical Trends in The Electrical Efficiency of Computing”.  IEEE Annals of the History of Computing.  vol. 33, no. 3. July-September. pp. 46-54. [http://doi.ieeecomputersociety.org/10.1109/MAHC.2010.28]

Koomey, Jonathan. 2015. “A primer on the energy efficiency of computing.”  In Physics of Sustainable Energy III:  Using Energy Efficiently and Producing it Renewably (Proceedings from a Conference Held March 8-9, 2014 in Berkeley, CA). Edited by R. H. Knapp Jr., B. G. Levi and D. M. Kammen. Melville, NY: American Institute of Physics (AIP Proceedings). pp. 82-89.

Koomey, Jonathan, and Samuel Naffziger. 2016. “Energy efficiency of computing:  What’s next?” In Electronic Design. November 28. [http://electronicdesign.com/microprocessors/energy-efficiency-computing-what-s-next]

Google’s new white paper on clean energy for their data centers

image

Google just released this white paper, which represents the next logical evolution in clean energy for data centers (also see the related blog post). When companies claim their data centers run on 100% clean electricity, they do so through an annual balancing act, an approach that often isn't clear to people unfamiliar with how it works.

Data center operators source clean electricity, like wind or solar, by signing long-term contracts with project developers to build power plants that otherwise wouldn't be built. Those plants are sized to generate at least as much electricity as the data center uses over the course of a year, and in that annual sense the operators are powering their data centers with clean electricity.

The reality, though, is that in any particular hour the data center may be drawing more or less power than the wind or solar facility is generating, and accounting for that real-time variation is the next obvious step for clean electricity in data centers. Most grids also have other sources of clean electricity, and those need to be accounted for as well.
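To see why annual and hourly accounting can tell very different stories, here's a toy example with made-up numbers (nothing here comes from Google's white paper or data): a data center with flat load, matched on an annual basis by a solar contract, can still be far from matched hour by hour.

```python
# Hypothetical single day, in MWh per hour: a flat data center load and an
# oversized solar profile. The numbers are invented for illustration only.
load = [10] * 24
solar = [0] * 6 + [6, 14, 22, 28, 32, 34, 34, 32, 28, 22, 14, 6] + [0] * 6

# Annual-style (volumetric) matching: did generation cover consumption in total?
annual_match = min(sum(solar), sum(load)) / sum(load)

# Hourly matching: in each hour, how much of the load was actually covered?
hourly_match = sum(min(g, l) for g, l in zip(solar, load)) / sum(load)

print(f"Volumetric match: {annual_match:.0%}")   # 100% -- the usual "100% clean" claim
print(f"Hourly match:     {hourly_match:.0%}")   # well under half in this example
```

In this sketch the facility is "100% renewable" on a volumetric basis but covers less than half of its consumption hour by hour, which is exactly the gap that hourly accounting (and, eventually, storage and a more diverse clean supply) is meant to close.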

Google’s white paper uses graphs like the one below to visualize how well hourly clean energy generation matches data center electricity use. This one is for their North Carolina facility:

image

This graph shows the real-time challenge facing any data center facility, and I hope (and expect) that other big data center operators will move to the kind of accounting suggested by Google in their new white paper. As storage becomes more common and clean generation more widespread, this challenge will get easier, but right now we need to get the accounting right in preparation for those developments.

Disclosure: Google asked me to be a technical reviewer for this white paper before its release, which I did without remuneration (because it was worth my time to learn about what they have been up to).

Background

IEA. 2017. Digitalization and Energy. Paris, France: International Energy Agency. November 5. [https://www.iea.org/digital/]

Shehabi, Arman, Sarah Josephine Smith, Dale A. Sartor, Richard E. Brown, Magnus Herrlin, Jonathan G. Koomey, Eric R. Masanet, Nathaniel Horner, Inês Lima Azevedo, and William Lintner. 2016. United States Data Center Energy Usage Report. Berkeley, CA: Lawrence Berkeley National Laboratory. LBNL-1005775. June. [https://eta.lbl.gov/publications/united-states-data-center-energy]

Masanet, Eric, Arman Shehabi, and Jonathan Koomey. 2013. “Characteristics of Low-Carbon Data Centers.” Nature Climate Change. vol. 3, no. 7. July. pp. 627-630. [http://dx.doi.org/10.1038/nclimate1786]

Our article in Science on carbon intensity of oil production, out today!

All oil is not created equal when it comes to carbon intensity of production. That’s the message from our latest article, just released in Science today.

This work is part of a broader project to assess the full life-cycle emissions impacts of oil production, refining, distribution, and consumption, summarized in:

Koomey, Jonathan, Deborah Gordon, Adam Brandt, and Joule Bergerson. 2016. Getting smart about oil in a warming world. Washington, DC: Carnegie Endowment for International Peace. October 5. [http://carnegieendowment.org/2016/10/04/getting-smart-about-oil-in-warming-world-pub-64784]

Policy makers have traditionally treated oil as a uniform commodity, but our work shows variations in total life-cycle emissions that are big enough to matter. The second version of the Oil-Climate Index (OCI) showed a spread of roughly 80-90% between the lowest-emitting and highest-emitting oils.

We’re in the process of adding more oil fields, updating the methodology, and incorporating life-cycle emissions for about half of the natural gas fields in the world. That update work is due out soon.

Here’s the full reference for the new Science article:

Masnadi, Mohammad S., Hassan M. El-Houjeiri, Dominik Schunack, Yunpo Li, Jacob G. Englander, Alhassan Badahdah, Jean-Christophe Monfort, James E. Anderson, Timothy J. Wallington, Joule A. Bergerson, Deborah Gordon, Jonathan Koomey, Steven Przesmitzki, Inês L. Azevedo, Xiaotao T. Bi, James E. Duffy, Garvin A. Heath, Gregory A. Keoleian, Christophe McGlade, D. Nathan Meehan, Sonia Yeh, Fengqi You, Michael Wang, and Adam R. Brandt. 2018. “Global carbon intensity of crude oil production.” Science. vol. 361, no. 6405. pp. 851. [http://science.sciencemag.org/content/361/6405/851.abstract]

Awards, Awards, Awards!

image

It’s been a great award season for Turning Numbers into Knowledge:  Mastering the Art of Problem Solving (3rd Edition). I’m particularly proud of this book because it encapsulates what I’ve learned about being an effective researcher and communicator over the past few decades, and it’s had enough success to warrant creating the 3rd edition, which came out late last year.

I use the book to train employees and students, and it works. I give them the book and say “this book summarizes my expectations for the accuracy of your analysis, the clarity of your presentations, and the thoroughness of your documentation”.

Here’s the complete list of awards for 2018:

Turning Numbers into Knowledge won Next Generation Indie Book Awards in the Business and E-Book Non Fiction categories for 2018. To see the list of Indie Book Award winners, go here.

Turning Numbers into Knowledge won Silver awards in the Business Non-Fiction and Professional/Technical Non-Fiction categories in the Global eBook Awards for 2018. To see the list of Global eBook Award winners, go here.

Turning Numbers into Knowledge also won a Silver Medal in the 2018 eLit eBook awards, in the Business/Career/Sales category. See the full list of award winners here.

Turning Numbers into Knowledge was a finalist in the International Book Awards in the Business:General category. To see the full list of International Book Award winners go here.

Turning Numbers into Knowledge also got honorable mentions in the Hoffer Book Awards in the Business and Reference categories. To see the list of Hoffer Award winners go here.

Turning Numbers into Knowledge, 3rd ed: http://amzn.to/2xZl6W0

Downloadable chapters and supplemental files:  http://www.numbersintoknowledge.com

Pub Link on Apple iBooks:  https://geo.itunes.apple.com/us/book/turning-numbers-into-knowledge/id1295269201?mt=11

Paperback ISBN: 9781938377068

PDF ISBN: 9781938377099

EPUB ISBN: 9781938377082

KINDLE ISBN: 9781938377075


Koomey researches, writes, and lectures about climate solutions, critical thinking skills, and the environmental effects of information technology.

Partial Client List

  • AMD
  • Dupont
  • eBay
  • Global Business Network
  • Hewlett Packard
  • IBM
  • Intel
  • Microsoft
  • Procter & Gamble
  • Rocky Mountain Institute
  • Samsung
  • Sony
  • Sun Microsystems
  • The Uptime Institute