Target in Emeryville, CA has LED lighting with motion controls in food display cases

I had heard of motion sensors controlling display cases before, but the Target in Emeryville, CA (which we visited this morning) has a particularly nice implementation of that technology.  The display cases are lit with LEDs (see below).

Emeryville, CA Target display case

And here’s a closer shot of the motion detector:

Closeup of Target motion detector on display case

There were motion detectors every 8-10 feet along each row, and if you started walking down the aisle, the lights inside the cases came on one by one.  I suspect that motion sensors could be used with incandescent lights, but the lifetimes of such bulbs are short enough to make an application like this less feasible than with LEDs, which typically last thirty times longer than incandescents.  LED lifetime is also not affected by turning them on and off (see this Wikipedia article for more details).

It took a little more than a minute for the lamps to switch off when no motion was detected, at least for the aisle I visited.  I’m not sure if delay times vary by aisle or food type.
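The control logic behind lights like these is simple enough to sketch. Here's a minimal illustration in Python; the 70-second timeout and the class name are my assumptions based on the roughly one-minute delay I observed, not anything Target has published.

```python
class CaseLightController:
    """Sketch of an occupancy-based display-case light controller.

    The timeout is an assumed value based on the observed shutoff delay
    of a bit more than a minute; actual hardware may differ.
    """

    def __init__(self, timeout_s=70):
        self.timeout_s = timeout_s
        self.last_motion = None
        self.lights_on = False

    def motion_detected(self, now_s):
        # Any motion event turns the lights on and resets the timer.
        self.last_motion = now_s
        self.lights_on = True

    def tick(self, now_s):
        # Called periodically; switches the lights off after the timeout.
        if self.lights_on and now_s - self.last_motion > self.timeout_s:
            self.lights_on = False
        return self.lights_on
```

One controller per sensor, each covering its own 8-10 foot zone, would produce the one-by-one lighting effect as a shopper walks down the aisle.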

NYTimes.com today on data center cooling

James Kanter has a very nice article on NYTimes.com today on data centers and the search for ever better ways to cool those facilities.  Typical “in-house” data centers nowadays devote one kWh of electricity to cooling, pumps, fans, and power systems for every kWh of electricity used for servers (for details, see the EPA Report to Congress on Data Centers).  That additional kWh is “overhead”, and to the extent that it can be reduced, data centers can save big on both energy costs and capital costs to construct these facilities.  The service we really care about is computing, and so eliminating the waste associated with cooling the computers is all gravy.
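This overhead arithmetic maps directly onto the industry's Power Usage Effectiveness (PUE) metric, which is total facility energy divided by IT energy. A quick sketch of the relationship:

```python
def pue(it_kwh, overhead_kwh):
    # PUE = total facility energy / IT energy; overhead is everything
    # that isn't the servers (cooling, pumps, fans, power systems).
    return (it_kwh + overhead_kwh) / it_kwh

# Typical in-house facility: one kWh of overhead per kWh of IT load.
assert pue(1.0, 1.0) == 2.0
```

Every kWh of overhead eliminated drops straight out of the facility's energy bill without touching the computing service delivered.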

Cloud computing providers have made big progress on reducing this waste, reaching overhead levels of 0.1 to 0.2 kWh per kWh of IT electricity use.  For example, Google documents the PUE for their data centers here, reporting an average overhead of 0.16 kWh per kWh of IT use across nine facilities.  Centralized computing providers can achieve such savings because they have at least four advantages over the “in-house” facilities, as I’ve explained frequently (for one example, go here):

1) Diversity

2) Economies of scale

3) Flexibility

4) Ease of enabling structural change

These advantages mean that there will be ever more compelling economic reasons to use centralized computing facilities over in-house IT with each passing year.  There are still good reasons for certain kinds of computing to be done in-house, but the powerful advantages of centralized computing will continue to create economic pressure to move many applications to the cloud.

An interesting new technology trend: Internet in a suitcase!

On June 12, 2011, The New York Times reported on something novel in an article titled “US Underwrites Internet Detour Around Censors”.  The first two paragraphs read:

“The Obama administration is leading a global effort to deploy “shadow” Internet and mobile phone systems that dissidents can use to undermine repressive governments that seek to silence them by censoring or shutting down telecommunications networks.

The effort includes secretive projects to create independent cellphone networks inside foreign countries, as well as one operation out of a spy novel in a fifth-floor shop on L Street in Washington, where a group of young entrepreneurs who look as if they could be in a garage band are fitting deceptively innocent-looking hardware into a prototype “Internet in a suitcase.””

This development is of course good news for people who believe in the free exchange of information, but it’s also a reminder of the power of the long-term trends in computing efficiency I’ve blogged on previously (here and here).

The research (which will be published this year in the IEEE Annals of the History of Computing) shows that computations per kWh (i.e., the energy efficiency of computation) have doubled roughly every year and a half for all types of computers since ENIAC in 1946.

The biggest implications of these trends are for mobile computing devices, whose existence was an inevitable outcome of them.  Doubling of computing efficiency every year and a half means a factor of 100 improvement every decade.  At a fixed level of computation, that means the battery capacity needed will fall by that same factor each decade.  Alternatively, you could have 20 times more computation at 5 times the battery life.  And that progress implies ever greater use of mobile computing, sensors, and controls, which will allow us to better match services supplied with services demanded.  It will also lead to an explosion of data the likes of which we haven’t yet experienced (we ain’t seen nothing yet!).
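The arithmetic behind that factor of 100 is worth making explicit. Here's a quick check of the compounding (the particular split between more computation and longer battery life is illustrative, as in the text):

```python
# Doubling every 1.5 years means 10 / 1.5 ≈ 6.67 doublings per decade.
doublings_per_decade = 10 / 1.5
growth_per_decade = 2 ** doublings_per_decade
assert round(growth_per_decade) == 102  # i.e., roughly a factor of 100

# The gain can be split: e.g., 20x more computation at 5x the battery life.
assert 20 * 5 == 100
```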

And it’s not right to call it “Moore’s law” (as some still do) because these trends PRECEDE Moore’s law (they hold true for tube and discrete transistor computers, not just microprocessors).   In fact, they are an inherent feature of electronic information technology, as our paper shows.

The industry hasn’t fully absorbed the implications of these trends, but I think they are critically important to understanding its future.   Internet in a suitcase is a nice result, but there are many other such innovations that will no doubt surprise us all in coming years.

An important recent development in institutional design and corporate governance

On June 3, 2011, Climate Progress reported on the development of a new type of corporation, the B-corp, “which modifies governance so that managers respond to long-term interests of investors, stakeholders, and the environment, rather than just focusing on short-term profits.”  This is an important development because it changes the “rules of the game”, as B Lab co-founder Coen Gilbert explained in a Harvard Business School case study of his company.

Many people forget that the economy can’t exist without a government to define property rights, enforce contracts, charter corporations, regulate socially destructive corporate behavior, and determine the rules of the game.  When government doesn’t take these responsibilities seriously enough, we end up with lead in children’s toys, E. coli in food, global financial crises, and rivers that catch on fire, and no one should find that to be a particularly good outcome.

I’m excited about this idea because it does what some of us have advocated for a long time:  change the charter of corporations to redefine property rights, and add some responsibilities while we’re at it.  Corporations are socially constructed entities and we get to choose how their governance and underlying incentives should be structured.  By changing the assumptions behind their design it’s possible to set a new course for the economy, one where private profit can be more closely aligned with public good.

Of course, changing the charter for existing corporations would be difficult, but there may be other ways to accomplish the same goal for those institutions.   In any case, creating a new type of corporation that deals explicitly with issues of standards, transparency, legal status, and incentives is a good first step, and I applaud the B Lab folks for starting down this path.

And this development is also a reminder that the innovations we’ll need to truly face the climate challenge are not just technical, though we’ll need plenty of those to get to the greenhouse gas emissions reductions that will be required to stabilize the climate.  Social and institutional innovations will play an important role as well, given the rate, scale, and scope of changes that are required.

More thoughts on window-integrated PVs

In an earlier post I speculated about window-integrated PV, and have now completed some further investigations.

I emailed with Richard Lunt, the lead researcher on the window integrated PV paper, as well as with my friend Steve Selkowitz, head of LBNL’s Windows and Daylighting Group, about this new innovation, and they agreed it would be OK for me to share some of their thoughts here.

Richard seems to be thinking along the same lines as I am (direct use of the electricity at the window, as opposed to distributing it throughout the building).  That means you get to cut out some of the balance-of-system costs for the PVs, which typically make up half the installed cost of a PV system, as well as to avoid the costs of the glass and other structural elements that would otherwise contribute to the cost of the PV system.

He added a few more wrinkles:

1) the maximum theoretical efficiency for such cells is 22%, once you eliminate visible light from the solar spectrum.  The new cells have efficiencies of just under 2% in their experiments, with the goal of getting to 10%.

2) large buildings in urban environments have very large vertical areas that will intercept significant amounts of solar radiation in the mornings and afternoons, and the IR component is more significant at these times because the scattering of light is more prevalent for the visible spectrum than for IR/UV.

3) a compensating factor is that “these organic solar cells have much better efficiencies at low illumination intensity than traditional inorganic semiconductors”.

It’s clear from his comments that he (wisely) isn’t expecting this to be a silver bullet, but one more solution that could be useful in particular circumstances.

And that brings me to the question of performance vs cost, which Steve Selkowitz brought up first thing.  All of my speculations presume that the technology gets to a point where it is cost effective to apply it in the kinds of applications I propose.  If that doesn’t happen, then the speculations are moot.  But in this case, there are several other paths up the mountain, so the ideas I mention could make sense anyhow.

I visited Steve in his office on Friday, May 6, 2011 and I got an earful about the opportunities (large) and the complexities (significant).  The key tradeoffs for electrochromics and window integrated PVs relate to the comparative value of lighting, cooling and power.  An electrochromic window in conjunction with dimmable lighting can be a big lighting energy saver (the largest commercial building end use) but its value will be reduced if a significant fraction of the visible sunlight is converted to electric power at the window.  Selkowitz points out that daylighting a room with an electrochromic window or skylight uses solar energy at 3 – 10 times the efficiency of the sunlight-to-PV-electricity-to-electric lighting pathway.

There are a few other issues the manufacturers are working out as well.  For example, the time it takes for some electrochromic technologies to switch between transparent and opaque is an issue that I hadn’t considered (some switch very rapidly, others take seconds to minutes).  And some electrochromics may require application of a very small constant voltage to maintain their optical state, while others are more like a flip-flop (the electronic kind, not the beachwear kind) in that once they reach a state they stay there without additional effort or electricity.  Finally, there are new thermochromic glazings that respond to temperature instead of voltage, which is a whole different kettle of fish.

He also pointed out  a few other issues with my earlier post:

1) Self powered skylight shades already exist.  Velux has a skylight with a motorized shade that uses ordinary PVs on the frame of the shade to power itself (I haven’t found this online yet but will post a link here when I do).  In any case, such installations need batteries in case you want the skylight to operate at night, but this cost can be offset by not having to run wires to the skylight.

2) For cars, ventilation is a good thing, but it’s a complex design space because you’ll also want aggressive use of UV/IR blocking coatings that would need to be applied in addition to the PV coating, and the details of how to do that cost-effectively in practice haven’t been fully worked out yet.

3) The expected lifetime of these coatings as well as efficiency degradation over time are both real issues that the technology folks are addressing.

4) Finally, electrochromic glazings now retail for roughly an additional $50/sf, and adding another expensive layer to such windows is going to raise costs further and make rapid penetration that much more difficult.

Selkowitz’s team at LBNL has been developing and promoting electrochromic windows for over 20 years and he thinks he can now see the daylight at the end of the electrochromic tunnel.  But he thinks the early markets will do better focusing on the electrochromic elements alone, adding PV later once both technologies are better established.

Bottom line, if we can make window integrated PVs with low cost, high reliability, and long lifetime, there will be niches where they will be important.  I’ll post more as I learn more.

Bill Nye's Climate Lab at the Chabot Space and Science Center

We visited Bill Nye’s terrific Climate Lab at the Chabot Space and Science Center in Oakland, CA on Saturday May 7, 2011.  I didn’t get to play around with the exhibits as much as I’d like (that’s what happens when you have two-year-old twins) but I was very impressed with the smart use of new media and the ways the exhibit got kids to participate.  The focus was mostly on solutions to the climate problem, which was a refreshing change, and the technical information it provided was almost all on the mark.  I heartily recommend this exhibit for those in the Bay area who are interested.  It’s a very nice way to educate kids about climate.

I’ve provided some photos below.   The light level in the exhibit is much brighter than the photos indicate.

Andrew Fanara, Larry Vertal, and me at the Uptime Symposium yesterday

Andrew Fanara, Larry Vertal, and me at the Uptime Institute Symposium in Santa Clara, CA, May 11, 2011.  The three of us were at the center of efforts to improve energy efficiency in servers and data centers starting in early 2006.  Larry was at AMD when that company started to use efficiency as a selling point for their processors in servers (he also funded my papers on electricity used by servers that came out in 2007 and that grew into the analysis for the EPA Report to Congress and this 2008 peer reviewed journal article). Andrew was at EPA’s Energy Star Program (he initiated and sponsored the two meetings on data center efficiency I chaired in 2006—for more details go here).

Exciting news from Intel today on power efficient 3D transistors

Today Intel announced the commercialization of 3D transistors using low voltages and a 22 nm fab process, a development that will help the technology industry keep pace with the long term trends in power efficiency identified in our forthcoming IEEE article (Koomey, Jonathan G., Stephen Berard, Marla Sanchez, and Henry Wong. 2010. “Implications of Historical Trends in The Electrical Efficiency of Computing.”  In Press at the IEEE Annals of the History of Computing.  March.).  These new chips should deliver the same performance as 2D chips using half the power, and that’s good news for the continued development and proliferation of wireless sensors, controls, and mobile data analysis devices.

People have been predicting the end of Moore’s law for a long time. Over the years, my friends at Intel would only guarantee that they could keep up with those trends for the next five or ten years, but then they weren’t sure.  The important thing is that they keep pulling rabbits out of the hat so that Moore’s law continues, and they seem to have just done it again.  And since the improvements in computing efficiency go hand-in-hand with Moore’s law, continuing rapid developments in battery-powered mobile computing devices are also “in the bag”.

One interesting thing is that the trends in computing efficiency actually predate Moore’s law, as they also applied to computers made using vacuum tubes and discrete transistors (as our article demonstrates empirically).  So the efficiency trends are an inherent characteristic of electronic information technology, not just of computers powered by microprocessors.

For video of my talk at Microsoft last December on trends in computing efficiency, go here.

For a nice article in CNET about 3D transistors, go here.

Some thoughts on window-integrated photovoltaics

Yesterday I tweeted about an innovation from MIT that converts infrared (IR) radiation to electricity, allowing for electricity generation from windows that appear transparent to the naked eye.  The paper documenting those findings is here.

My good friend Kurt Brown, with whom I collaborated to design the recommendation engine for Wattbot, emailed me to tell me why he thought this was a crazy idea.  First, he said vertical windows typically get only half of the insolation of a rooftop PV array, so the total available energy for conversion to electricity is much less than for rooftop panels.  Second, he noted that PV panels account for only 50% of the cost of PV installations nowadays, and while this innovation would substantially reduce panel costs, the costs for the wiring and voltage controller needed to turn the electricity collected into useful power would be significant.  He didn’t mention that the energy in IR radiation is less than half of that contained in total solar insolation, which reduces the potential of this innovation to generate power still further.

I always listen to Kurt, particularly when he tells me I’m crazy, because he’s often right.  However, I had some good reasons for highlighting the technology, so I encouraged Kurt to “think outside the box”.  In this case, the box is the assumption that electricity generated by transparent PVs in windows needs to be fed into the electric power system of the building.  I think that’s how most folks read the MIT summary of this study, but it’s the wrong framing.  The amount of power generated by this innovation will always be small, and Kurt is absolutely right that the costs of interconnecting the windows to feed their tidbit of electricity into the power system of a building would probably be prohibitive.
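Kurt's two discounts compound, which is worth seeing as arithmetic. Treating rooftop insolation as 1 and using the rough fractions from the discussion above (these are back-of-the-envelope bounds, not measured values):

```python
rooftop = 1.0           # normalize rooftop insolation to 1
vertical_factor = 0.5   # vertical windows get roughly half the insolation
ir_fraction = 0.5       # IR carries less than half of total solar energy (upper bound)

available = rooftop * vertical_factor * ir_fraction
# At most about a quarter of the rooftop resource per unit area,
# before any conversion losses -- hence the power will always be small.
assert available <= 0.25
```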

But what if the electricity from the windows could be used right then and there to do something useful?  I thought of three examples, maybe you can come up with more:

1) A self powered skylight:  If you put a small battery inside a skylight and use the IR PVs to charge that battery you wouldn’t need to wire the skylight to the conventional power system (that’s a big cost savings right there).  It would be a self contained unit that opened and closed under its own power, and I’m sure it would sell like hotcakes.

2) Ventilation in automobiles:  One of the big problems in autos is keeping them cool, and while there’s a lot you can do with window coatings, active cooling is almost always necessary in sunny climates.  The electricity from window or sunroof based PVs could be used to run a small ventilation fan to cool vehicles in the sun, without draining the battery.  This would be a boon for electric vehicles, for which battery storage is at a premium.  There are products like this out there now, but none using this particular PV technology.

3) My favorite–Self powered electrochromic glazings:  One of the biggest uncontrolled cooling loads for buildings is sunlight, and while coatings can help with this, having a system that can actively control shading would help buildings become much more flexible in their use of natural energy flows. Electrochromic materials can change their transparency depending on what voltage is applied to them.  The IR PVs could generate enough power to activate the electrochromic materials and to run a wireless network through which all the windows in a building could be controlled (similar to the wireless sensor networks that are becoming more common nowadays).  Electrochromics are now quite expensive but they are used in some commercial products (like sun roofs for high-end vehicles)–they would have to come down in price before this idea could come to fruition.  This application would also require a battery, but would avoid the need for wiring, thus reducing installation costs.

After I told Kurt these ideas, he reconsidered his assessment.  What do you think?

The importance of Facebook releasing technical details on their data center designs

Yesterday (April 7th, 2011) Facebook announced its Open Compute Project and shared with the world technical details about its efficient data center in Prineville, OR.  GigaOm did a nice summary of the technical details about that data center (as have others, here and here) but I wanted to talk about the bigger picture importance of that announcement.  No doubt Facebook has done some innovative things from which other large Internet software companies will learn, but the biggest difference in efficiency in the data center world is between the “in-house” business data centers (inside of big companies whose core business is not computing) and the facilities owned and operated by Facebook, Google, Yahoo, Microsoft, Salesforce, and other companies whose main focus is to deliver computing services over the internet.

I expect that it’s the “in-house” data center facilities that will be most affected by the release of this technical info.  No one has ever publicly released this much technical detail about data center operations before, and with one stroke Facebook has created a standard for best practice to which the in-house facilities can aspire.  This means they can specify high-efficiency servers that are already being built (they don’t need to design the servers themselves).  They can also pressure their suppliers to deliver such servers in competition with others, to drive the costs of efficiency down.  And they can change their infrastructure operations so that they move from a Power Usage Effectiveness (PUE) of 2.0, which is typical for such facilities, much closer to Facebook’s PUE of 1.07.  (As background, Google and Yahoo have also released information about their PUEs for specific facilities, and those are much better than in-house facilities as well.)

Back in 2007 I worked on the EPA Report to Congress, which identified a PUE of 1.4 as “state of the art” for enterprise-class data centers by 2011.  Cloud computing providers have blown right past that estimate and have built more than a few facilities with PUEs in the 1.1 to 1.2 range in the past couple of years.  That shows what happens when companies look at the whole system and focus on minimizing the total cost per computation, not just on optimizing components of that system.
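To see what that PUE gap means in energy terms, here's a sketch with a hypothetical fixed IT load (the load number is illustrative; the PUE values are the ones discussed above):

```python
def total_facility_energy(it_load, pue):
    # For a fixed IT load, total facility energy scales linearly with PUE.
    return it_load * pue

it_load = 1000.0  # hypothetical annual IT load (MWh), for illustration
before = total_facility_energy(it_load, 2.0)   # typical in-house PUE
after = total_facility_energy(it_load, 1.07)   # Facebook's reported Prineville PUE
savings_fraction = 1 - after / before
assert abs(savings_fraction - 0.465) < 1e-6    # roughly 46% less total energy
```

The computing service delivered is identical in both cases; the difference is pure overhead.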

Last coal plant in the US Pacific Northwest to be shut by 2025

Getting serious about climate solutions means shutting down coal plants, starting with the old, dirty ones.  Washington State just announced an agreement to shut down the last coal plant in the Pacific Northwest, the first boiler by 2020 and the second by 2025 (another facility is already scheduled to be closed by 2020).  The real action, of course, should be in the Midwest and South, where most US coal plants are located, but we need to celebrate the small victories along the way.

For more details on the characteristics of US coal plants, see my article “Defining a Standard Metric for Electricity Savings,” which is freely downloadable.

Separating facts from values

I wrote about the importance of distinguishing facts from values in Chapter 19 of Turning Numbers into Knowledge, and I wanted to summarize that point here, because it is so often forgotten in discussions of technical issues.

Every human choice embodies certain values. When technical people advocate a technology choice (e.g., whether or not to build more nuclear power plants) they often portray their advice as totally rational, completely objective, and value free. This portrayal cannot be correct—if an analyst makes a choice, he has also made a value judgment.

One purpose of analysis is to support public or private choices; in this context, it cannot be value-free. It can, however, illuminate the consequences of choices so that the people and institutions making them can evaluate the alternative outcomes, using both their values and the analysts’ best estimates of the consequences for each choice.

Some progress on the rebound effect dialogue

The folks at the Breakthrough Technology Institute (BTI) have kindly agreed to work through the concerns I raised in my memo on the state of the rebound dialogue in a collaborative way. We’re beginning that process by focusing on what’s called the elasticity of substitution in firms, which is a key parameter affecting the size of rebound within these firms.  I’m hopeful that by focusing on this specific issue we can achieve a more complete understanding of the various claims being made in this debate.  It’s clear that terminology and analytical conventions differ significantly between the various participants, but focusing on one issue should help us overcome those problems.

While I’m optimistic we’ll make progress, I continue to be concerned about some of the conclusions the recent BTI report reaches.  It will take time to review the technical questions in the detail this issue deserves, so I’ll hold off on stating any conclusions until that work is done.  The debate among experts who have reviewed this report continues to be heated, but I hope we’ll be able to resolve some of the issues that remain in dispute through the civil and collaborative technical discussion that we’ve just begun.

Update on the email dialogue about the rebound effect

The various participants are continuing the email dialogue about the rebound effect, and the parties have agreed on a strategy for making progress on the key issues.  We’re examining the specific example of rebound for an industrial facility, analyzing the comments of various parties on that example, and compiling those comments in systematic form.  This way we can keep track of all the various threads (as you can imagine, it gets complicated with an issue like this).

More as things develop.

I was just chosen as a Google Science Communications Fellow

Two days ago I received word that I’d be one of the first class of Google Science Communications Fellows.  The program is focusing first on climate change, and the 21 scientists selected are experts in various aspects of climate science and solutions.  A post on the Google blog gives more details.  The incoming class includes my friends Susanne Moser and Becky Shaw, as well as many other distinguished thinkers and doers in the climate world.

The blog post gives more details about how the group was selected:

“These fellows were elected from a pool of applicants of early to mid-career Ph.D. scientists nominated by leaders in climate change research and science-based institutions across the U.S. It was hard to choose just 21 fellows from such an impressive pool of scientists; ultimately, we chose scientists who had the strongest potential to become excellent communicators. That meant previous training in science communication; research in topics related to understanding or managing climate change; and experience experimenting with innovative approaches or technology tools for science communication.”

This will be a great opportunity to work with some terrific researchers.  I’ll post more as I learn more.


Koomey researches, writes, and lectures about climate solutions, critical thinking skills, and the environmental effects of information technology.
