More thoughts on window-integrated PVs

In an earlier post I speculated about window-integrated PV, and have now completed some further investigations.

I emailed with Richard Lunt, the lead researcher on the window integrated PV paper, as well as with my friend Steve Selkowitz, head of LBNL’s Windows and Daylighting Group, about this new innovation, and they agreed it would be OK for me to share some of their thoughts here.

Richard seems to be thinking along the same lines as I am (direct use of the electricity at the window as opposed to distributing it throughout the building). That means you can cut out some of the balance-of-system costs, which typically account for half the installed cost of a PV system, as well as avoid the costs of the glass and other structural elements that would otherwise contribute to the cost of the PV system.

He added a few more wrinkles:

1) The maximum theoretical efficiency for such cells is 22% once you eliminate visible light from the solar spectrum. The new cells have efficiencies of just under 2% in their experiments, with the goal of getting to 10%.

2) Large buildings in urban environments have very large vertical areas that intercept significant amounts of solar radiation in the mornings and afternoons, and the IR component is more significant at those times because atmospheric scattering affects the visible spectrum more than the IR/UV.

3) A compensating factor is that “these organic solar cells have much better efficiencies at low illumination intensity than traditional inorganic semiconductors”.

It’s clear from his comments that he (wisely) isn’t expecting this to be a silver bullet, but one more solution that could be useful in particular circumstances.

And that brings me to the question of performance vs cost, which Steve Selkowitz brought up first thing.  All of my speculations presume that the technology gets to a point where it is cost effective to apply it in the kinds of applications I propose.  If that doesn’t happen, then the speculations are moot.  But in this case, there are several other paths up the mountain, so the ideas I mention could make sense anyhow.

I visited Steve in his office on Friday, May 6, 2011 and I got an earful about the opportunities (large) and the complexities (significant).  The key tradeoffs for electrochromics and window integrated PVs relate to the comparative value of lighting, cooling and power.  An electrochromic window in conjunction with dimmable lighting can be a big lighting energy saver (the largest commercial building end use) but its value will be reduced if a significant fraction of the visible sunlight is converted to electric power at the window.  Selkowitz points out that daylighting a room with an electrochromic window or skylight uses solar energy at 3 – 10 times the efficiency of the sunlight-to-PV-electricity-to-electric lighting pathway.
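To see roughly where a 3–10x advantage for daylighting could come from, here’s a back-of-the-envelope sketch. All the specific numbers (luminous efficacy of daylight, PV efficiency, lamp efficacy) are my own illustrative assumptions, not Selkowitz’s figures:

```python
# Rough sketch of the daylighting-vs-PV-to-electric-lighting comparison.
# All numbers below are illustrative assumptions, not measured values.

daylight_efficacy = 100.0   # lumens delivered per watt of admitted sunlight (assumed)
pv_efficiency = 0.10        # sunlight-to-electricity conversion (assumed 10% goal)
lamp_efficacy = 90.0        # lumens per electric watt for efficient lighting (assumed)

# Lumens delivered per watt of sunlight via the PV-to-electric-light pathway:
pv_path_efficacy = pv_efficiency * lamp_efficacy  # 9 lm per solar watt

advantage = daylight_efficacy / pv_path_efficacy
print(f"Daylighting delivers ~{advantage:.0f}x more light per unit of sunlight")
```

With these assumed numbers the direct-daylight path comes out roughly an order of magnitude ahead, which is consistent with the upper end of the 3–10x range he cites.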

There are a few other issues the manufacturers are working out as well. For example, the time it takes for some electrochromic technologies to switch between transparent and opaque is an issue I hadn’t considered (some switch very rapidly, others take seconds to minutes). And some electrochromics may require a very small constant voltage to maintain their optical state, while others are more like a flip-flop (the electronic kind, not the beachwear kind) in that once they reach a state they stay there without additional effort or electricity. Finally, there are new thermochromic glazings that respond to temperature instead of voltage, which is a whole different kettle of fish.

He also pointed out a few other issues with my earlier post:

1) Self powered skylight shades already exist.  Velux has a skylight with a motorized shade that uses ordinary PVs on the frame of the shade to power itself (I haven’t found this online yet but will post a link here when I do).  In any case, such installations need batteries in case you want the skylight to operate at night, but this cost can be offset by not having to run wires to the skylight.

2) For cars, ventilation is a good thing, but it’s a complex design space: you’ll also want aggressive use of UV/IR-blocking coatings in addition to the PV coating, and cost-effective ways of combining the two in practice haven’t yet been fully worked out.

3) The expected lifetime of these coatings as well as efficiency degradation over time are both real issues that the technology folks are addressing.

4) Finally, electrochromic glazings now retail for roughly an additional $50/sf, and adding another expensive layer to such windows is going to raise costs further and make rapid penetration that much more difficult.

Selkowitz’s team at LBNL has been developing and promoting electrochromic windows for over 20 years and he thinks he can now see the daylight at the end of the electrochromic tunnel.  But he thinks the early markets will do better focusing on the electrochromic elements alone, adding PV later once both technologies are better established.

Bottom line, if we can make window integrated PVs with low cost, high reliability, and long lifetime, there will be niches where they will be important.  I’ll post more as I learn more.

Bill Nye's Climate Lab at the Chabot Space and Science Center

We visited Bill Nye’s terrific Climate Lab at the Chabot Space and Science Center in Oakland, CA on Saturday May 7, 2011.  I didn’t get to play around with the exhibits as much as I’d like (that’s what happens when you have two year old twins) but I was very impressed with the smart use of new media and the ways the exhibit got kids to participate.  The focus was mostly on solutions to the climate problem, which was a refreshing change, and the technical information it provided was almost all on the mark.  I heartily recommend this exhibit for those in the Bay area who are interested.  It’s a very nice way to educate kids about climate.

I’ve provided some photos below.   The light level in the exhibit is much brighter than the photos indicate.

Andrew Fanara, Larry Vertal, and me at the Uptime Symposium yesterday

Andrew Fanara, Larry Vertal, and me at the Uptime Institute Symposium in Santa Clara, CA, May 11, 2011.  The three of us were at the center of efforts to improve energy efficiency in servers and data centers starting in early 2006.  Larry was at AMD when that company started to use efficiency as a selling point for their processors in servers (he also funded my papers on electricity used by servers that came out in 2007 and that grew into the analysis for the EPA Report to Congress and this 2008 peer reviewed journal article). Andrew was at EPA’s Energy Star Program (he initiated and sponsored the two meetings on data center efficiency I chaired in 2006—for more details go here).

Exciting news from Intel today on power efficient 3D transistors

Today Intel announced the commercialization of 3D transistors using low voltages and a 22 nm fab process, a development that will help the technology industry keep pace with the long-term trends in power efficiency identified in our forthcoming IEEE article (Koomey, Jonathan G., Stephen Berard, Marla Sanchez, and Henry Wong. 2010. “Implications of Historical Trends in the Electrical Efficiency of Computing.” In press at the IEEE Annals of the History of Computing. March.). These new chips should deliver the same performance as 2D chips using half the power, and that’s good news for the continued development and proliferation of wireless sensors, controls, and mobile data analysis devices.

People have been predicting the end of Moore’s law for a long time. Over the years, my friends at Intel would only guarantee that they could keep up with those trends for the next five or ten years, but then they weren’t sure. The important thing is that they keep pulling rabbits out of the hat so that Moore’s law continues, and they seem to have just done it again. And since improvements in computing efficiency go hand in hand with Moore’s law, continued rapid development of battery-powered mobile computing devices is also “in the bag”.

One interesting thing is that the trends in computing efficiency actually predate Moore’s law, as they also applied to computers made using vacuum tubes and discrete transistors (as our article demonstrates empirically).  So the efficiency trends are an inherent characteristic of electronic information technology, not just those powered by microprocessors.

For video of my talk at Microsoft last December on trends in computing efficiency, go here.

For a nice article in CNET about 3D transistors, go here.

Some thoughts on window-integrated photovoltaics

Yesterday I tweeted about an innovation from MIT that converts infrared (IR) radiation to electricity, allowing for electricity generation from windows that appear transparent to the naked eye.  The paper documenting those findings is here.

My good friend Kurt Brown, with whom I collaborated to design the recommendation engine for Wattbot, emailed me to tell me why he thought this was a crazy idea. First, he said vertical windows typically get only half the insolation of a rooftop PV array, so the total energy available for conversion to electricity is much less than for rooftop panels. Second, he noted that PV panels now account for only 50% of the cost of PV installations, and while this innovation would substantially reduce panel costs, the costs of the wiring and voltage controller needed to turn the collected electricity into useful power would be significant. He didn’t even mention a third problem: the energy in IR radiation is less than half of that contained in total solar insolation, which reduces the power-generating potential of this innovation still further.
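Multiplying these factors together shows just how small the resource is. Here’s a quick back-of-the-envelope estimate; the IR fraction and insolation figures are my own illustrative assumptions:

```python
# Back-of-envelope estimate of power from a window-integrated IR PV coating.
# The specific numbers are illustrative assumptions, not measured values.

peak_insolation = 1000.0  # W/m^2 on a sun-facing surface (standard test value)
vertical_factor = 0.5     # vertical glazing gets ~half of rooftop insolation
ir_fraction = 0.45        # IR carries less than half of total solar energy (assumed)
cell_efficiency = 0.02    # current experimental efficiency (~2%)

power_per_m2 = peak_insolation * vertical_factor * ir_fraction * cell_efficiency
print(f"~{power_per_m2:.1f} W per m^2 of window at 2% efficiency")
print(f"~{power_per_m2 * 5:.1f} W per m^2 at the 10% efficiency goal")
```

A few watts per square meter is indeed a trickle by whole-building standards, but as I argue below, it’s plenty for small loads located right at the window.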

I always listen to Kurt, particularly when he tells me I’m crazy, because he’s often right. But I had good reasons for highlighting the technology, so I encouraged Kurt to “think outside the box”. In this case, the box is the assumption that electricity generated by transparent PVs in windows needs to be fed into the building’s electric power system. I think that’s how most folks read the MIT summary of this study, but it’s the wrong framing. The amount of power generated by this innovation will always be small, and Kurt is absolutely right that the cost of interconnecting the windows to feed their trickle of electricity into a building’s power system would probably be prohibitive.

But what if the electricity from the windows could be used right then and there to do something useful?  I thought of three examples, maybe you can come up with more:

1) A self powered skylight:  If you put a small battery inside a skylight and use the IR PVs to charge that battery you wouldn’t need to wire the skylight to the conventional power system (that’s a big cost savings right there).  It would be a self contained unit that opened and closed under its own power, and I’m sure it would sell like hotcakes.

2) Ventilation in automobiles:  One of the big problems in autos is keeping them cool, and while there’s a lot you can do with window coatings, active cooling is almost always necessary in sunny climates.  The electricity from window or sunroof based PVs could be used to run a small ventilation fan to cool vehicles in the sun, without draining the battery.  This would be a boon for electric vehicles, for which battery storage is at a premium.  There are products like this out there now, but none using this particular PV technology.

3) My favorite–Self powered electrochromic glazings:  One of the biggest uncontrolled cooling loads for buildings is sunlight, and while coatings can help with this, having a system that can actively control shading would help buildings become much more flexible in their use of natural energy flows. Electrochromic materials can change their transparency depending on what voltage is applied to them.  The IR PVs could generate enough power to activate the electrochromic materials and to run a wireless network through which all the windows in a building could be controlled (similar to the wireless sensor networks that are becoming more common nowadays).  Electrochromics are now quite expensive but they are used in some commercial products (like sun roofs for high-end vehicles)–they would have to come down in price before this idea could come to fruition.  This application would also require a battery, but would avoid the need for wiring, thus reducing installation costs.

After I told Kurt these ideas, he reconsidered his assessment.  What do you think?

The importance of Facebook releasing technical details on their data center designs

Yesterday (April 7th, 2011) Facebook announced its Open Compute Project and shared with the world technical details about its efficient data center in Prineville, OR.  GigaOm did a nice summary of the technical details about that data center (as have others, here and here) but I wanted to talk about the bigger picture importance of that announcement.  No doubt Facebook has done some innovative things from which other large Internet software companies will learn, but the biggest difference in efficiency in the data center world is between the “in-house” business data centers (inside of big companies whose core business is not computing) and the facilities owned and operated by Facebook, Google, Yahoo, Microsoft, Salesforce, and other companies whose main focus is to deliver computing services over the internet.

I expect that it’s the “in-house” data center facilities that will be most affected by the release of this technical information. No one has ever publicly released this much technical detail about data center operations before, and with one stroke Facebook has created a best-practice standard to which in-house facilities can aspire. This means they can specify high-efficiency servers that are already being built (they don’t need to design the servers themselves). They can also pressure their suppliers to deliver such servers in competition with others, driving down the cost of efficiency. And they can change their infrastructure operations to move from a Power Usage Effectiveness (PUE) of 2.0, which is typical for such facilities, much closer to Facebook’s PUE of 1.07. (As background, Google and Yahoo have also released PUE figures for specific facilities, and those are much better than in-house facilities as well.)

Back in 2007 I worked on the EPA report to Congress, which identified a PUE of 1.4 as “state of the art” for enterprise-class data centers by 2011. Cloud computing providers have blown right past that estimate, building more than a few facilities with PUEs in the 1.1 to 1.2 range in the past couple of years. That shows what happens when companies look at the whole system and focus on minimizing the total cost per computation, not just on optimizing components of that system.
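To make the PUE numbers concrete, here’s a minimal sketch of what the gap between 2.0 and 1.07 means in energy terms (the 1,000 kW IT load is an illustrative assumption):

```python
# What PUE implies for total facility power.
# PUE = total facility power / IT equipment power.

def facility_kw(it_load_kw, pue):
    """Total facility power implied by a given IT load and PUE."""
    return it_load_kw * pue

it_load = 1000.0  # kW of IT equipment (illustrative assumption)

typical = facility_kw(it_load, 2.0)    # typical in-house facility
facebook = facility_kw(it_load, 1.07)  # Facebook's Prineville figure

overhead_saved = typical - facebook
print(f"Overhead avoided: {overhead_saved:.0f} kW per {it_load:.0f} kW of IT load")
```

In other words, at a PUE of 2.0 a facility burns a full watt of overhead (cooling, power conversion, lighting) for every watt of computing, while at 1.07 that overhead nearly disappears.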

Last coal plant in the US Pacific Northwest to be shut by 2025

Getting serious about climate solutions means shutting down coal plants, starting with the old dirty ones.  Washington State just announced an agreement to shut down the last coal plant in the Pacific Northwest, the first boiler by 2020 and the 2nd by 2025 (another facility is already scheduled to be closed by 2020).  The real action, of course, should be in the Midwest and South, where most US coal plants are located, but we need to celebrate the small victories along the way.

For more details on the characteristics of US coal plants, see my article “Defining a Standard Metric for Electricity Savings,” which is freely downloadable.

Separating facts from values

I wrote about the importance of distinguishing facts from values in Chapter 19 of Turning Numbers into Knowledge, and I wanted to summarize that point here, because it is so often forgotten in discussions of technical issues.

Every human choice embodies certain values. When technical people advocate a technology choice (e.g., whether or not to build more nuclear power plants) they often portray their advice as totally rational, completely objective, and value free. This portrayal cannot be correct—if an analyst makes a choice, he has also made a value judgment.

One purpose of analysis is to support public or private choices; in this context, it cannot be value-free. It can, however, illuminate the consequences of choices so that the people and institutions making them can evaluate the alternative outcomes, using both their values and the analysts’ best estimates of the consequences for each choice.

Some progress on the rebound effect dialogue

The folks at the Breakthrough Technology Institute (BTI) have kindly agreed to work through the concerns I raised in my memo on the state of the rebound dialogue in a collaborative way. We’re beginning that process by focusing on what’s called the elasticity of substitution in firms, which is a key parameter affecting the size of rebound within these firms.  I’m hopeful that by focusing on this specific issue we can achieve a more complete understanding of the various claims being made in this debate.  It’s clear that terminology and analytical conventions differ significantly between the various participants, but focusing on one issue should help us overcome those problems.

While I’m optimistic we’ll make progress, I continue to be concerned about some of the conclusions the recent BTI report reaches.  It will take time to review the technical questions in the detail this issue deserves, so I’ll hold off on stating any conclusions until that work is done.  The debate among experts who have reviewed this report continues to be heated, but I hope we’ll be able to resolve some of the issues that remain in dispute through the civil and collaborative technical discussion that we’ve just begun.

Update on the email dialogue about the rebound effect

The various participants are continuing the email dialogue about the rebound effect, and the parties have agreed on a strategy for making progress on the key issues. We’re examining a specific example of rebound for an industrial facility, analyzing the comments of the various parties on that example, and compiling those comments in systematic form. This way we can keep track of all the various threads (as you can imagine, it gets complicated with an issue like this).

More as things develop.

I was just chosen as a Google Science Communications Fellow

Two days ago I received word that I’d be one of the first class of Google Science Communications Fellows.  The program is focusing first on climate change, and the 21 scientists selected are experts in various aspects of climate science and solutions.  A post on the Google blog gives more details.  The incoming class includes my friends Susanne Moser and Becky Shaw, as well as many other distinguished thinkers and doers in the climate world.

The blog post gives more details about how the group was selected:

“These fellows were elected from a pool of applicants of early to mid-career Ph.D. scientists nominated by leaders in climate change research and science-based institutions across the U.S. It was hard to choose just 21 fellows from such an impressive pool of scientists; ultimately, we chose scientists who had the strongest potential to become excellent communicators. That meant previous training in science communication; research in topics related to understanding or managing climate change; and experience experimenting with innovative approaches or technology tools for science communication.”

This will be a great opportunity to work with some terrific researchers. I’ll post more as I learn more.

Two excellent posts on the rebound effect by James Barrett on realclimateeconomics.org

James Barrett has recently published two very well reasoned posts on the rebound effect that walk through his thinking, and they are worth a read.  The first one is introductory (written in response to the recent New Yorker article), and the second one more technical.  Both will help people who take an intelligent interest in this topic think more clearly about the rebound effect and its implications.

Intro post:  http://realclimateeconomics.org/wp/archives/647

More detailed post:  http://realclimateeconomics.org/wp/archives/654

A fascinating encounter with advocates of large rebound effects

Over the past few weeks I’ve been engaged in an email conversation with about 30 energy analysts and environmental reporters about the rebound effect. That conversation has had many threads, but one of particular interest is a specific example I asked the rebound advocates to create. After some resistance to the idea, someone from the Breakthrough Institute took up the challenge, but has thus far failed to respond to technical critiques of his example that reduce the projected rebound effects by an order of magnitude or more.

I summarized where we stand in a memo that I sent to the group today, which is downloadable here.  The key points from my memo are:

1) In an effort to avoid misunderstandings about the complex phenomenon of rebound, I proposed to an email list of about 30 analysts and energy/environmental reporters that those supporting large rebound effects produce a simplified example, so we could dig into the buried assumptions that always afflict such analyses.

2) Jesse Jenkins of the Breakthrough Institute offered such an example, but failed to respond to substantive critiques of the assumptions behind that model that reduced the calculated rebound effect by factors of roughly 10 to 20.

3) Nevertheless, there was general agreement that the relevant research question should be “under what conditions is rebound a problem, and in those cases, how big is it?”  Once this question is accepted, however, making blanket categorical statements like “energy efficiency never saves energy” (as blog posts on the Breakthrough Institute’s web site routinely do) is no longer appropriate.

4) The normal burden of proof is on those advocating the existence of some unexpected and novel effect to show the underlying causal mechanisms that lead to that result, so the assumptions can be peer reviewed.  I can’t prove that large rebounds don’t exist, just like I can’t prove that black swans don’t exist in the absence of a perfectly accurate universal census of swan colors, but if someone brings me a black swan, the problem is solved.  And that’s what those of us skeptical about large rebound effects continue to request:  bring us a black swan!

I’m still waiting for a substantive technical response from the advocates for large rebounds, and will post more when I hear back.

Video of my talk at Microsoft, Dec 2, 2010: "Why we can expect ever more amazing mobile computing devices in the years ahead"

I was invited by Christian Belady at Microsoft to give a talk about my computing trends work, and the video of that Dec. 2, 2010 talk can be found here. The talk turned out pretty well. The implications of this work relate to the capabilities and prevalence of mobile computing devices: the electrical efficiency of electronic computing has doubled every 1.6 years since the mid-1940s. That means that for a fixed amount of computational power, the need for battery capacity will fall by half every 1.6 years, and that trend bodes well for the continued explosive growth of mobile computing, sensors, and controls.
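The battery implication of the doubling trend can be written down in one line. Here’s a small sketch (the function name is mine, and the calculation simply extrapolates the 1.6-year doubling time, ignoring any physical limits):

```python
# The 1.6-year efficiency doubling expressed as a battery requirement:
# for a fixed computational workload, the battery capacity needed
# falls by half every 1.6 years.

def relative_battery_need(years, doubling_time=1.6):
    """Fraction of today's battery capacity needed after `years`
    for the same fixed computational workload."""
    return 0.5 ** (years / doubling_time)

for years in (1.6, 5.0, 10.0):
    print(f"After {years:4} years: {relative_battery_need(years):.1%} of today's capacity")
```

After a decade of this trend, a fixed workload needs only a percent or so of today’s battery capacity, which is exactly why ever-smaller sensors and mobile devices keep becoming feasible.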

Alexis Madrigal wrote a nice summary of these trends for the Atlantic, readable here. He dubbed this “Koomey’s corollary to Moore’s law”, which makes me blush, but he’s a brilliant writer who excels in summarizing complex issues for a general audience, so his summary is worth a read.

Reference: Koomey, Jonathan G., Stephen Berard, Marla Sanchez, and Henry Wong. 2010. “Implications of Historical Trends in the Electrical Efficiency of Computing.” In press at the IEEE Annals of the History of Computing. March. Email me if you want a copy.


Figure 1: Computations per kWh over time (trends since 1946)

Graph of computations/kWh from 1946 to 2009 by Jonathan Koomey is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. Permissions beyond the scope of this license may be available at http://www.koomey.com.

More on the rebound effect--almost made it into the New Yorker!

Well, my letter almost made it into the New Yorker, but I just learned today that they had cut it for space reasons.  They did a nice job shortening and editing it, though, and I was pleasantly surprised.  Edited version follows:

“Owen’s article conflates two different effects: the rebound effect (people who buy a more efficient device use it more and so ‘take back’ some of the energy savings) and the wealth effect (society gets richer so consumers buy more goods and services). Measurements of the true rebound effect almost universally show that the effect is either zero (for devices like cable boxes, where user behavior doesn’t affect energy consumption much) or small (like for automobiles). Proponents of the effect have argued that energy efficiency alone has caused people to buy bigger homes, more appliances, and more household goods, but there isn’t convincing evidence for this hypothesis. Instead, it’s technological improvements, productivity gains, and general wealth increases that are the major contributors to the overall rise in consumption.”

And this, I think, is the crux of the matter. The people who argue that rebound effects overwhelm efficiency don’t understand that rebound is only the “takeback” that can be attributed to efficiency alone, everything else being held constant.

And then there are the people who are willfully misrepresenting the research, like those at the Breakthrough Institute. A press release of theirs was quoted in a recent ClimateWire article as saying “Greater energy efficiency helps us to become wealthier, which increases our overall demand for energy.” This statement reflects what I would call the “macro” rebound effect. Energy efficiency saves money, which is then respent on other goods and services. The problem with this line of argument is that energy is less than 10% of the economy, so less than 10 cents of each dollar saved and respent (on average) can be directly attributed to increases in energy use. There’s no way this macro rebound can wipe out all of the energy savings unless people respend their efficiency savings on energy alone, and that’s just not a plausible scenario.
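The macro-rebound arithmetic is simple enough to write out explicitly. This sketch uses illustrative numbers consistent with the point above (a $100 saving and a 10% energy share of the economy):

```python
# The "macro" rebound arithmetic with illustrative numbers.
# If efficiency savings are respent across the whole economy, only the
# energy share of that respending shows up as new energy demand.

energy_share = 0.10    # energy's share of economic activity (<10%, per the post)
saved_dollars = 100.0  # money saved by an efficiency measure (illustrative)

respent_on_energy = saved_dollars * energy_share
print(f"Of ${saved_dollars:.0f} saved and respent, at most ~${respent_on_energy:.0f} "
      f"flows back into energy purchases")
```

So even if every dollar saved were respent, the induced energy spending would be on the order of a tenth of the original savings, nowhere near enough to wipe them out.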

The folks who argue for what I would call “micro” rebound have also failed to show that this effect is big enough to wipe out the energy savings, because there are no peer-reviewed measurements that demonstrate this result. If anyone can point me to such results in the peer-reviewed literature I’ll be happy to look at them, but odds are they just don’t exist.

So the people arguing for big rebounds don’t have much of a leg to stand on.  I just wish they’d get another hobby so the rest of us with serious work to do can get on with it…


Koomey researches, writes, and lectures about climate solutions, critical thinking skills, and the environmental effects of information technology.

Partial Client List

  • AMD
  • Dupont
  • eBay
  • Global Business Network
  • Hewlett Packard
  • IBM
  • Intel
  • Microsoft
  • Procter & Gamble
  • Rocky Mountain Institute
  • Samsung
  • Sony
  • Sun Microsystems
  • The Uptime Institute