At hearings yesterday about Planned Parenthood, Rep. Jason Chaffetz (R-UT) put up the chart above. This tweet identifies the apparent source of the graph: an anti-abortion organization. Abortion is of course a highly charged issue, and feelings run high, but there is no excuse for making a chart that misleads so blatantly. Whoever made the graph simply superimposed two different graphs with different Y axes, rendering it quantitatively meaningless and highly misleading.
The second graph tells a very different (and accurate) story. Shame on whoever made the first graph, and shame on Representative Chaffetz for using it. That’s lying with graphs in a truly blatant manner.
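For readers who want to see how the dual-axis trick works in general, here is a minimal sketch in Python (with made-up numbers, not the actual figures from the hearing) showing how plotting two series on independent Y axes can manufacture a dramatic crossover that vanishes when both series share a single scale:

```python
# A sketch of the dual-axis trick, using made-up numbers (not the actual data
# behind the chart discussed above). The left panel gives each series its own
# Y axis, scaled so the lines appear to cross; the right panel plots both on
# one shared axis starting at zero, and the "crossover" disappears.
import matplotlib.pyplot as plt

years = [2006, 2009, 2012, 2013]
series_a = [2_000_000, 1_800_000, 1_200_000, 940_000]  # hypothetical large, declining series
series_b = [290_000, 330_000, 333_000, 327_000]        # hypothetical small, roughly flat series

fig, (misleading, honest) = plt.subplots(1, 2, figsize=(10, 4))

# Misleading version: independent Y axes with arbitrary scales.
twin = misleading.twinx()
misleading.plot(years, series_a, color="tab:pink")
twin.plot(years, series_b, color="tab:red")
misleading.set_title("Two Y axes: lines appear to cross")

# Honest version: one shared Y axis starting at zero shows the real magnitudes.
honest.plot(years, series_a, color="tab:pink", label="Series A")
honest.plot(years, series_b, color="tab:red", label="Series B")
honest.set_ylim(bottom=0)
honest.set_title("One Y axis: no crossover")
honest.legend()

plt.tight_layout()
plt.show()
```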
Update: Blogger Brainwrap at Daily Kos posted a different version of the graph that gives additional context, adding up all the various procedures performed by Planned Parenthood.
CERN data center. Photo credit: By Hugovanmeijeren (Own work) [GFDL or CC-BY-SA-3.0-2.5-2.0-1.0], via Wikimedia Commons
I’ve been struggling for years to convince executives in large enterprises to fix the incentive, reporting, and other structural problems in data centers. The folks in the data center know that there are issues (like having separate budgets for IT and facilities) but fixing those problems is “above their pay grade”. That’s why we’ve been studying the clever things eBay has done to change their organization to take maximal advantage of IT, as summarized in this case study from 2013:
Schuetz, Nicole, Anna Kovaleva, and Jonathan Koomey. 2013. eBay: A Case Study of Organizational Change Underlying Technical Infrastructure Optimization. Stanford, CA: Steyer-Taylor Center for Energy Policy and Finance, Stanford University. September 26.
That’s also why I’ve worked with Heatspring and Data Center Dynamics to develop the following online course, which starts October 5th and goes until November 13th, 2015:
This is a unique opportunity to spend seven weeks learning from Jonathan Koomey, a Research Fellow at the Steyer-Taylor Center for Energy Policy and Finance at Stanford University, and one of the foremost international experts on data center energy use, efficiency, organization, and management.
This course provides a road map for managers, directors, and senior directors in Technology Business Management (TBM), drawing upon real-world experiences from industry-leading companies like eBay and Google. The course is designed to help transform enterprise IT into a cost-reducing profit center by mapping the costs and performance of IT in terms of business KPIs.
Executives in this course will gain access to templates and best practices used by data center industry leaders. You’ll use these templates to complete a Capstone Project in which you will propose management changes for your organization to increase business agility, reduce costs, and move your internal IT organization from being a cost center to a profit center.
I’m excited about this class, but we need more signups by early October. Please spread the word by sending this blog post to upper level management in the company where you work.
The Wall Street Journal today has an article by Bob McMillan highlighting my work with Anthesis and TSO Logic on zombie servers (those that are using electricity but delivering no useful computing services). To download our most recent report on the topic, go here.
Here are the first few paragraphs:
There are zombies lurking in data centers around the world.
They’re servers—millions of them, by one estimate—sucking up lots of power while doing nothing. It is a lurking environmental problem that doesn’t get much discussion outside of the close-knit community of data-center operators and server-room geeks.
The problem is openly acknowledged by many who have spent time in a data center: Most companies are far better at getting servers up and running than they are at figuring out when to pull the plug, says Paul Nally, principal of his own consulting company, Bruscar Technologies LLC, and a data-center operations executive with experience in the financial-services industry. “Things that should be turned off over time are not,” he says. “And unfortunately the longer they linger there, the worse the problem becomes.”
Mr. Nally once audited a data center that had more than 1,000 servers that were powered on but not identifiable on the network. They hadn’t even been configured with domain-name-system software—the Internet’s equivalent of a telephone number. “They would have never been found by any other methodology other than walking around with a clipboard,” Mr. Nally says.
I’m hopeful that increased attention to this issue will result in more management focus and better application of computing resources to solve business problems. That’s one reason why I’m teaching my upcoming online class (October 5 to November 13, 2015) titled Modernizing enterprise data centers for fun and profit. Also see my recent article in DCD Focus with the same title.
Twenty-first-century data centers are the crown jewels of global business. No modern company can run without them, and they deliver business value vastly exceeding their costs. The big hyperscale computing companies (like Google, Microsoft, Amazon, and Facebook) are the best in the industry at extracting that business value, but for many enterprises whose primary business is not computing, the story is more complicated.
If you work in such a company, you know that data centers are often strikingly inefficient. While they may still be profitable, their performance falls far short of what is possible. And by “far short” I don’t mean by 10 or 20 percent, I mean by a factor of ten or more.
The course will teach people how to bring their data centers into the twenty-first century, turning them from cost centers into cost-reducing profit centers.
This morning we did an AMA (Ask Me Anything) on Reddit, focusing on our Oil-Climate Index (OCI) and the related interactive web tool. I hadn’t done one of these before, and I was pleasantly surprised at how well it turned out (go here to take a look). Anyone could ask questions, and we got to answer them; the questions and answers remain up for others to examine now that the AMA is done. Some questions were a bit far afield, but there were also many excellent ones. For those interested in the OCI, it’s a good place to learn more.
The team at Carnegie just created a useful infographic for our Oil-Climate Index and the accompanying OCI web tool. I’m often skeptical of infographics, because they can be oversimplified, but this one seems to capture the essence of our work without doing violence to accuracy. Please let me know if you agree!
The Uptime Institute and McKinsey and Company had earlier estimated that up to 30% of servers in many data centers were comatose, and new data from TSO Logic confirms these estimates. Our initial sample size is small (4,000 servers), but the data show that 30% of the servers in this sample hadn’t been used in more than six months.
If this finding holds up for larger sample sizes (and we expect it will), then about 10 million servers in the world are comatose, stranding tens of billions of dollars of data center capital and wasting billions of dollars every year in operating and software licensing costs.
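For those who want to see the arithmetic, here is a rough sketch of the calculation. The 30% comatose share comes from the sample above, but the worldwide installed base and per-server cost figures below are illustrative assumptions, not numbers from our report:

```python
# Back-of-the-envelope arithmetic behind the "about 10 million comatose servers"
# estimate. The comatose share comes from the TSO Logic sample discussed above;
# the installed base and per-server cost figures are illustrative assumptions.
comatose_share = 0.30            # share of servers unused for 6+ months (from the sample)
installed_base = 35e6            # assumed worldwide installed server base (illustrative)
capital_per_server = 3000.0      # assumed installed capital cost per server, USD (illustrative)
annual_opex_per_server = 700.0   # assumed yearly power, space, and license cost, USD (illustrative)

comatose_servers = comatose_share * installed_base
stranded_capital = comatose_servers * capital_per_server
wasted_opex = comatose_servers * annual_opex_per_server

print(f"Comatose servers:   {comatose_servers / 1e6:.1f} million")
print(f"Stranded capital:   ${stranded_capital / 1e9:.0f} billion")
print(f"Wasted cost per yr: ${wasted_opex / 1e9:.1f} billion")
```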
In the twenty-first century, every company is an IT company, but too many enterprises settle for vast inefficiencies in their IT infrastructure. The existence of so many comatose servers is a clear indication that the ways IT resources in enterprises are designed, built, provisioned, and operated need to change. The needed changes are not primarily technical, but revolve instead around management practices, information flows, and incentives. To learn how to implement such changes, see my Fall 2015 online class titled Management Essentials for Transforming Enterprise IT.
We will update the analysis as the data set grows, with the next update due in Fall 2015.
I’ve been working for the past couple of years with Deborah Gordon of Carnegie, Adam Brandt of Stanford, and Joule Bergerson of the University of Calgary on open-source data and tools to assess the life-cycle greenhouse gas (GHG) emissions of different oils, summarized in our Oil-Climate Index (OCI). Total GHG emissions vary by a surprising amount when you correctly analyze how oil is extracted, processed, transported, and used. The highest-emissions oil in our initial sample of thirty global oils has 80% higher emissions than the lowest-emissions oil, and that variation is big enough to matter.
Carnegie has just released the online web tool for the OCI, so you can explore the data. It’s beautifully designed, and the web developers did a terrific job. There are also mobile versions. We’ll keep the tool updated as we expand the OCI to more oils. We expect to have 20 more oils analyzed by the end of this summer.
Dave Roberts of Vox recently posted an article about the climate problem titled “The awful truth about climate change no one wants to admit”, the point of which he summarizes as “barring miracles, humanity is in for some awful shit.” This conclusion is true even if we manage to keep warming to 2 C or below, but even more true if we don’t.
In support of this point of view, Roberts cites analysis showing the increasing difficulty of meeting the 2 C warming limit with every year of delay. This phenomenon is well known to people who understand the warming limit framing, but it can’t be repeated enough. Delay makes solving the problem more costly and difficult, a fact that is summarized in the graph below.
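To make the delay point concrete, here is a minimal sketch of the carbon-budget arithmetic behind graphs like this one, assuming an idealized path in which emissions stay flat until mitigation begins and then fall linearly to zero. The budget and emissions numbers are round, illustrative assumptions, not the inputs behind the actual graph:

```python
# A sketch of why delay matters under a fixed carbon budget. Assumes an
# idealized path: emissions stay flat at today's level until mitigation starts,
# then fall linearly to zero. The budget and emissions values are round,
# illustrative assumptions.
BUDGET_GTCO2 = 1000.0    # assumed remaining budget consistent with a 2 C limit, GtCO2 (illustrative)
EMISSIONS_GTCO2 = 40.0   # assumed current annual CO2 emissions, GtCO2/yr (illustrative)
BASE_YEAR = 2015

for start_year in (2015, 2020, 2025, 2030):
    spent_waiting = EMISSIONS_GTCO2 * (start_year - BASE_YEAR)
    remaining = BUDGET_GTCO2 - spent_waiting
    if remaining <= 0:
        print(f"Start in {start_year}: budget exhausted before mitigation even begins")
        continue
    # For a linear decline to zero, cumulative emissions = EMISSIONS * years_to_zero / 2
    years_to_zero = 2.0 * remaining / EMISSIONS_GTCO2
    print(f"Start in {start_year}: emissions must reach zero in {years_to_zero:.0f} years "
          f"(by about {start_year + years_to_zero:.0f})")
```

Every five years of delay in this toy calculation shortens the allowed phase-out period by a decade, which is the basic reason the curves in such graphs get so much steeper with each passing year.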
Unfortunately, this article falls prey to a particularly common pitfall, that of assuming that we can accurately assess feasibility decades hence. This mistake is particularly problematic for assessments of political feasibility, because political reality can be remade literally overnight by pivotal events.
Analysts always impose their ideas of what is possible on which policies and technologies are analyzed, but as I’ll argue in the next chapter, with few exceptions it is very difficult to predict years in advance what is feasible and what isn’t. People also usually underestimate the rate and scope of change that can occur with determined effort, and this bias is reinforced by the use of models that ignore important effects (like increasing returns to scale and other sources of path dependence), and include rigidities that don’t exist in the real economy (like assuming that individual and institutional decision making will be just like that in the past, even in a future that is drastically different).[1] For all these reasons, it is a mistake to rely too heavily on models of economic systems to constrain our thinking about the future.
And it is not just the creators of complex economic models who fall prey to this pitfall. Rob Socolow, one of the pioneers of the wedges method, was quoted in an article looking back on the contribution of his efforts as saying “I said hundreds of times the world should be very pleased with itself if the amount of emissions was the same in 50 years as it is today.”[2] Now, I’m a big fan of Rob; we’ve been colleagues for years, and I have great admiration for what the wedges papers contributed to advancing the climate debate. But this statement has always rubbed me the wrong way, and I finally figured out why: it imposes his informal judgment about what is feasible on the analysis of the problem, and as I discuss in the next chapter, feasibility is almost impossible to determine in advance.
Feasibility depends on context, and on what we are willing to pay to minimize risks. What if there’s a big climate-related disaster and we finally decide that it’s a real emergency (like World War II)? In that case we’d make every effort to fix the problem, and what would be possible then is far beyond what we could imagine today. It is therefore a mistake for analysts to impose an informal feasibility judgment when considering a problem like this one. Instead, we should aim for what we think is the best outcome from a risk-minimization perspective; if we don’t quite get there, we’ll have to deal with the consequences. But if we aim too low, we might miss possibilities that we’d otherwise be able to capture.
Judging feasibility without careful analysis really is a distraction: people obsessed with what is possible politically or practically kill innovative thinking because they miss the many degrees of freedom we have to shape the future. They take the system as it is for granted, and we just can’t do that anymore.
An archetypal example is the discussion about integrating intermittent renewable power generation (like wind generation or solar photovoltaics) into the grid. In the old days the grizzled utility guys would say things like “maybe you can have a few percent of those resources on the grid, but above that you’ll destabilize the system”. Now we know that’s nonsense, and the “conventional wisdom percentage” of what’s allowable has crept up over the years, but it always reflected a static (and incomplete) view of what the system could handle. Over time, we can even change the system to use smaller gas-fired power plants that respond more rapidly to changes in loads, install better grid controls, institute variable pricing using smart meters, use weather forecasting, and create better software for anticipating grid problems. All of those things together should allow us to handle much more intermittency than what a conventional utility operator might think is feasible. And as we become smarter about energy storage, things will get easier still.[3]
The same lesson applies to any attempts to envision a vastly different energy system than the one we have today. We need to take off our feasibility blinders and shoot for the lowest emissions systems we can create. That doesn’t mean we can ignore real constraints, but we do need to throw off the illusory ones that are an artifact of our limited foresight. And if we don’t quite make it, that’s life, but at least it won’t be for lack of trying.
Context matters, and what seems infeasible today based on judgments about political will can become feasible tomorrow. Who would have thought, for example, that Chinese coal use could drop 7.4% in a year? Happily, that’s just what happened in April 2015. Who would have thought that the US auto industry could retool from making millions of cars per year to building war machines in six months? Yet that’s what happened soon after the US entered World War II. In both cases, what seemed impossible looking forward became possible when people put their minds to it (and policy makers pushed for big changes).
I would rephrase Roberts’ summary to say “we can avoid some awful shit if we just get our act together, and the only thing standing in the way is our willingness to face the reality of the climate problem.” Whether we can meet the 2 C warming limit cannot be accurately predicted in advance; it can only be determined by making the attempt. Modeling exercises can be useful, but it is only by trying to reduce emissions that we can determine what is possible.
Our choices today affect our options tomorrow. If we choose wisely, we can still avoid the worst consequences of climate change, but we must choose. We are out of time, and the time for choice is now.
References
[1] Koomey, Jonathan. 2002. “From My Perspective: Avoiding ‘The Big Mistake’ in Forecasting Technology Adoption.” Technological Forecasting and Social Change. vol. 69, no. 5. June. pp. 511-518.
[3] See the recently commissioned Gemasolar plant in Spain for one way to address the storage issue, using molten salt heat storage [http://www.nrel.gov/csp/solarpaces/project_detail.cfm/projectID=40]. There are many other ways, some based on long-proven technologies (like pumped storage, flywheels, compressed air, or batteries) and others that are more exotic.
Steve Lohr of the NY Times wrote a nice article about smart homes that will appear in tomorrow’s business section (April 23, 2015). The issue for residential efficiency efforts like this is that the savings are often small in absolute terms, and transaction costs can be high. In the aggregate, however, savings from millions of homes can add up fast.
My own view is that the biggest benefits from such technologies will be in making the grid more flexible and resilient, rather than yielding major energy savings for consumers. Here’s my quote in the article, surrounded by two other paragraphs, for context:
But the larger benefit of the new home technology may be beyond the home, as it contributes to the ecosystem of energy efficiency. Add up many household energy-saving steps at the right time, and peak loads for utilities are reduced, requiring less power generation. The cleanest, cheapest imaginable power plant is the one that is never built.
“If you can shift the load for a few hours on a summer day, that is a big deal to the utility company,” said Jonathan Koomey, a research fellow at the Steyer-Taylor Center for Energy Policy and Finance at Stanford University. “That’s where the big saving (sic) is going to be.”
Utilities across the country recognize the potential. Many are beginning to offer reward programs for households using their smart thermostats to curb energy use during peak hours and sometimes rebates for the purchase of Internet-connected thermostats from Nest, Honeywell, Ecobee and others.
Sheikh Yamani once famously said, “The stone age didn’t end because we ran out of stones, and the oil age won’t end because we run out of oil.”
Reed Landberg, writing for Bloomberg, describes the key trends that will lead to the end of the oil age, sooner than we thought even a few years ago. Demand growth in the US has disappeared, efficiency of vehicles is up, and new technologies (particularly electric vehicles and fuel cell vehicles) are building momentum quickly. The most important graph from this article is the one I reproduced above, which shows cost reductions in solar photovoltaic modules and lithium ion battery packs. Costs for both technologies are coming down at a furious pace (down more than 20% for each doubling of cumulative production). If the growth in demand for these technologies proceeds at anywhere near the recent historical pace, those learning rates should allow for significant further cost reductions, a virtuous cycle in which everyone except for the oil companies wins.
One of the biggest issues affecting the viability of electric vehicles is the cost of battery storage. A new article in Nature Climate Change explores the recent data on the cost of batteries, finding an average rate of decline of about 8% per year since 2006 and a learning rate (the percentage decline in per unit cost per doubling of cumulative production capacity) of between 6 and 9%. With production of such batteries growing rapidly, the decline in costs per unit in the next few years should be substantial.
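For readers who want to play with the numbers, here is a small sketch of the standard experience-curve relationship, in which cost falls by the learning rate with each doubling of cumulative production. The learning rates are the ones quoted above; the starting pack cost is an illustrative assumption:

```python
# A sketch of the experience-curve (Wright's law) relationship: cost falls by
# the learning rate with each doubling of cumulative production. Learning rates
# are those quoted above (6-9% for battery packs, ~20% for PV modules); the
# starting cost is an illustrative assumption, not a figure from the article.
def learned_cost(start_cost: float, doublings: float, learning_rate: float) -> float:
    """Cost after a given number of doublings of cumulative production."""
    return start_cost * (1.0 - learning_rate) ** doublings

START_COST_PER_KWH = 400.0  # assumed battery pack cost today, USD/kWh (illustrative)

for learning_rate in (0.06, 0.09, 0.20):
    for doublings in (1, 2, 4):
        cost = learned_cost(START_COST_PER_KWH, doublings, learning_rate)
        print(f"learning rate {learning_rate:.0%}, {doublings} doubling(s): ${cost:.0f}/kWh")
```

At the 8% average annual decline reported in the article, costs fall by about half in roughly eight to nine years; the learning-rate framing simply ties that decline to production volume rather than to the calendar.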
I’ve reproduced Figure 1 from the article above. There is always a lot of uncertainty with such data, because so much of it is proprietary, but this article does a good job of trying to make those data comparable.
The graph posted above summarizes the main story: peak output efficiency, which doubled every 2.7 years after 2000, is expected to continue at that pace through 2020, based on the AMD processor roadmap. The metric we call “typical-use efficiency”, on the other hand, which takes account of ever lower standby power modes, should see a faster rate of improvement. From 2008 to 2020, the rate of change in typical-use efficiency (according to AMD data) will result in about a doubling of efficiency every 1.5 years, which is similar to the rate of improvement in peak output efficiency in the half century preceding the year 2000.
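For those who like the numbers spelled out, here is a small sketch converting those doubling times into implied annual improvement rates and cumulative gains. The 2.7-year and 1.5-year doubling times are the figures quoted above; the conversion itself is just standard compound growth:

```python
# Convert the doubling times quoted above into implied annual improvement rates
# and cumulative gains over 2008-2020. The doubling times come from the
# AMD-based analysis discussed in the post; the arithmetic is standard
# compound growth.
def annual_rate(doubling_time_years: float) -> float:
    """Annual improvement rate implied by a given doubling time."""
    return 2.0 ** (1.0 / doubling_time_years) - 1.0

for label, doubling_time in (("peak output efficiency", 2.7),
                             ("typical-use efficiency", 1.5)):
    rate = annual_rate(doubling_time)
    gain_2008_2020 = 2.0 ** (12.0 / doubling_time)  # cumulative improvement factor over 12 years
    print(f"{label}: doubles every {doubling_time} years, "
          f"~{rate:.0%} per year, about {gain_2008_2020:.0f}x from 2008 to 2020")
```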
We’ll have a longer version with more graphs and explanation posted soon.
A colleague of mine at MIT, Noelle Eckley Selin, along with some of her colleagues, will be offering a four-day short course, Sustainability: Principles and Practice, on July 27-31, 2015.
Here’s the course overview:
This course will introduce participants to the goals, principles, and practical applications of sustainability from science/engineering, policy, and business perspectives. Many organizations, companies, and institutions are increasingly interested in conducting their activities while becoming more sensitive to environmental, social, and other concerns over a longer-term future. Sustainability has many definitions and includes environmental, social, and economic dimensions. In this course, we will examine the major environmental issues and trends happening in modern society from a scientific and practical perspective, including energy and resource use, pollution, climate change, water, and population. Conceptual definitions of sustainability will be introduced and discussed and sustainability plans from organizations and institutions will be examined and critiqued. The course presents practical skills for participants in the area of integrating sustainability into business practices, operations, policies, and research and development through a day of dedicated case studies. The course emphasizes sustainability in all its dimensions, including all “three E’s” of environment, economics, and equity. New research will be presented by faculty working in the area of sustainability science and engineering at MIT.
The class looks like a great intensive introduction to the application of sustainability in business. Check it out!