Europeans fail to correct the error of too many emissions allowances

The Associated Press reported that the European Union failed today to fix its emissions trading system.  The price of emissions allowances has fallen to about 5 euros (about $6.50 US) per ton of carbon dioxide, roughly one quarter of the value that has prevailed over most of the system’s history.  The price dropped because the politicians, in their wisdom, allocated too many allowances in an effort to buy off the various status quo interests affected by the system.

Such shenanigans are another argument for a simple carbon tax, collected at the top end of the market for fossil fuels, i.e. on the producers of fossil fuels.  A carbon tax is much simpler and cheaper to administer and enforce. If you’re going to use emissions trading, at least understand that you need to  crank down on the allowable emissions over time, to track the emissions reductions that will be needed to preserve a livable climate.

Streamlining permitting for new photovoltaic installations can have big effects on costs

LBNL just released a report titled “The Impact of City-level Permitting Processes on Residential Photovoltaic Installation Prices and Development Times: An Empirical Analysis of Solar Systems in California Cities.”    Here’s summary text from an email I just received from one of the study’s coauthors (Ryan Wiser):

Business process or “soft” costs account for well over 50% of the installed price of residential photovoltaic (PV) systems in the United States, so understanding these costs is crucial for identifying PV cost-reduction opportunities. Among these costs are those imposed by city-level permitting processes, which may add both expense and time to the residential PV development process.

Building on previous research, this study evaluates the effect of city-level permitting processes on the installed price of residential PV systems and on the time required to develop and install those systems. The study uses a unique data-set from the U.S. Department of Energy’s Rooftop Solar Challenge Program, which includes city-level permitting process “scores,” plus data from the California Solar Initiative and the U.S. Census. Using multivariate statistical analysis, the study quantifies the price and development-time effects of city-level permitting processes on more than 3,000 PV installations across 44 California cities in 2011.

Results indicate that city-level permitting processes have a substantial effect on average PV installation prices and project development times.
  • PV Installation Prices: The results suggest that cities within our sample with the most favorable permitting practices have average residential PV prices that are $0.27–$0.77/W lower (4%–12% of median PV prices in California) than cities with the most onerous permitting practices.
  • PV Development Times: Though the findings for development times are less robust, results suggest that cities within our sample with the most streamlined permitting practices have average PV development times that are around 24 days shorter (25% of the median development time) than cities with the most onerous permitting practices.
Overall, these findings illustrate the potential price and development-time benefits of streamlining local permitting procedures for PV systems.

The study’s final report can be found at:
http://emp.lbl.gov/sites/all/files/lbnl-6140e.pdf

A PowerPoint briefing that summarizes the report can be found at:
http://emp.lbl.gov/sites/all/files/lbnl-6140e-ppt_0.pdf

The bottom line is that permitting matters, and that cities that pay attention to fixing these processes can have a large impact on the costs their residents pay for installing solar photovoltaics.  Cities that want to go “whole hog” can copy the efforts of Lancaster, CA, which are about as aggressive as any I’ve seen.
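To make the per-system stakes concrete, here’s a minimal sketch of the implied savings; the 5 kW residential system size is my assumption for illustration, not a number from the report.

```python
# Rough per-system savings implied by the LBNL price results, assuming a
# typical residential system size of 5 kW (assumed; not from the report).
system_size_w = 5_000                 # watts
savings_per_watt = (0.27, 0.77)       # $/W price difference reported by LBNL

low, high = (s * system_size_w for s in savings_per_watt)
print(f"Implied savings per system: ${low:,.0f} to ${high:,.0f}")
# -> Implied savings per system: $1,350 to $3,850
```

Even at the low end, that’s real money for a homeowner deciding whether to install a system.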

If we don't change our direction, we'll end up where we're headed

Joe Romm of Climate Progress did a great service for climate communications on March 8th, 2013 by publishing this graph of historical and projected global temperatures:


Figure 1:  Historical and projected global average surface temperatures on our current trajectory for fossil fuel emissions

The historical data in the graph came from a recently published article in Science, and the projected data came from the “no-policy” case developed by the folks at MIT back in 2009.  The MIT case showed about a 5 Celsius degree increase in global average surface temperatures by 2100, equivalent to about a 9 Fahrenheit degree increase.

I like this graph because it combines what we know about historical temperatures with our most likely future–one where we continue to consume fossil fuels at increasing rates.  I realized after seeing Joe’s graph that I could easily add context to it, because I have both historical data on carbon dioxide concentrations in the atmosphere and the detailed projections from the MIT researchers (which I obtained from them while working on my most recent book, Cold Cash, Cool Climate:  Science-based Advice for Ecological Entrepreneurs).

Here’s Figure 2-3 from Cold Cash, Cool Climate, updated to include CO2 concentrations through 2012.  It shows historical carbon dioxide concentrations for the past 450,000 years, including the strikingly rapid increase since the 1800s.  The early historical data come from the Vostok and Law Dome ice cores, while the more recent data (post-1959) come from direct measurements.  We’ve pushed carbon dioxide concentrations well outside the range that has prevailed over the past 450 millennia.


Figure 2:  Carbon dioxide concentrations for the past 450,000 years

The data for the past 12,000 years, the period over which human civilization developed, show a picture similar to Romm’s graph of temperatures.  Carbon dioxide concentrations were relatively stable for the entirety of this period, increasing only slightly over time, with the most rapid increase coming only as the industrial revolution accelerated in the 1800s.


Figure 3:  Carbon dioxide concentrations for the past 12,000 years

Of course, carbon dioxide concentrations are not the only determinant of global surface temperatures, so the concentrations graph won’t exactly match Romm’s temperature graph, but the fact that concentrations didn’t change much over that period is consistent with that graph.

The issue of most concern to people thinking sensibly about climate is not the historical change in carbon dioxide concentrations, but the likely trajectory of those concentrations if we continue on the path we’re on now.  I’ve modified Figure 3 to include the MIT projections to 2100 to show just how big the change in carbon dioxide concentrations is likely to be (note that the y-axis in Figure 4 starts at 100 ppm, not 0 ppm).  We’re on track for a threefold increase in the concentration of carbon dioxide by 2100 if our emissions proceed along the path expected by MIT’s no-policy case.


Figure 4:  Carbon dioxide concentrations for the past 12,000 years and projected to 2100 assuming no change in policies

The picture is even more striking when compared to the past 450,000 years (Figure 5), showing that we’re moving the earth well out of the comfortable range in which humanity evolved and civilization developed.


Figure 5:  Carbon dioxide concentrations for the past 450,000 years and projected to 2100 assuming no change in policies

Of course, it’s not just carbon dioxide that matters.  If you include the other important warming agents (like methane, nitrous oxide, CFCs, and others), the MIT no-policy case shows even bigger changes.  Figure 6 modifies Figure 4 to include these other agents in the projection, expressed as carbon dioxide equivalent concentrations.  Such conversions are complex and imperfect, but they’re good enough to get an order-of-magnitude estimate of the total potential impact of the path we’re now on.


Figure 6:  Carbon dioxide concentrations for the past 12,000 years and projected to 2100 assuming no change in policies, including other warming gases

Here’s the same graph going back 450,000 years (Figure 7).


Figure 7:  Carbon dioxide concentrations for the past 450,000 years and projected to 2100 assuming no change in policies, including other warming gases

The critical takeaway from Figures 6 and 7 is that we’re on track for more than two doublings of greenhouse gas concentrations by 2100 if we continue on our current path (greenhouse gas equivalent concentrations rise by a factor of 4.8 by 2100).  Many in the media and elsewhere mistakenly focus only on the climate sensitivity, which is the expected increase in global average surface temperatures for a doubling of greenhouse gas equivalent concentrations (the best estimate is now about 3 Celsius degrees, or 5.4 Fahrenheit degrees, per doubling).  But it’s not just the temperature increase from a doubling of concentrations that matters; you also need to know how many doublings we’re in for!
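Since the key numbers all appear in this post, the doubling arithmetic is easy to check directly.  Here’s a minimal sketch in Python; the roughly 280 ppm pre-industrial CO2 concentration is a standard reference value that I’m supplying, and the 2100 concentrations are the approximate MIT no-policy values cited here.

```python
import math

# Rough check of the "how many doublings" arithmetic described above.
# ~280 ppm is the standard pre-industrial CO2 reference value (my addition);
# the 2100 concentrations are the approximate MIT no-policy values in this post.
preindustrial_ppm = 280.0
co2_only_2100_ppm = 900.0         # CO2 alone by 2100 (approximate)
co2e_2100_ppm = 1350.0            # all warming agents, CO2-equivalent (approximate)
sensitivity_c_per_doubling = 3.0  # best-estimate climate sensitivity

co2_only_ratio = co2_only_2100_ppm / preindustrial_ppm   # ~3.2, the "threefold increase"
co2e_ratio = co2e_2100_ppm / preindustrial_ppm           # ~4.8, matching the text
doublings = math.log2(co2e_ratio)                        # ~2.3 doublings
equilibrium_warming_c = doublings * sensitivity_c_per_doubling

print(f"CO2-only increase: {co2_only_ratio:.1f}x")
print(f"CO2-equivalent increase: {co2e_ratio:.1f}x ({doublings:.2f} doublings)")
print(f"Implied equilibrium warming: ~{equilibrium_warming_c:.1f} C")
```

That works out to a bit more than two doublings and roughly 6 to 7 Celsius degrees of eventual warming, consistent with the warming commitment discussed in the conclusions below (the warming realized by 2100 is lower because the climate takes centuries to equilibrate).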

I’ve been frustrated for many years by the way numbers about projected greenhouse gas concentrations have been presented, even by some folks who ought to know better.  The most common approach has been to focus just on carbon dioxide and make some hand-waving statements about the effects of the other warming agents, but that never satisfied me.  As a comparison of Figure 4 and Figure 6 shows, the other warming agents are significant contributors to warming, increasing the effective greenhouse gas concentration from about 900 ppm (for carbon dioxide alone) to about 1350 ppm when all warming agents are included.

The MIT researchers deserve great credit for their work.  They appropriately defined a “no-policy” case to clearly show the effect of the current path we’re on (avoiding the confusion among policy makers engendered by the “multiple baselines” approach embodied in the IPCC  Fourth Assessment report).  They also conducted a comprehensive analysis of all warming agents, and made their data available to other researchers who could summarize the results in effective ways.  It was quite a relief to discover their work, and it made writing the first few chapters of Cold Cash, Cool Climate a lot easier.

The last part of the puzzle is to understand whether the MIT no-policy case is a plausible representation of a world in which we initiate no constraints on greenhouse gas emissions.  One way to do that is to compare the history for various drivers of emissions (like population, energy efficiency, and economic growth) to the projections, a task that I undertook in Chapter 2 of Cold Cash, Cool Climate.  In virtually every case, the projected trends looked a lot like the previous 50 years, and in some cases, the projections showed more modest growth than one might expect from recent history.

Another way to assess the projection is to examine just how large the remaining fossil fuel resources are, to see if it’s plausible that the world could burn the amount of fossil fuels embodied in the MIT no-policy case.  I conducted this exercise in Chapter 2 and Appendix A of Cold Cash, Cool Climate.  I also summarized the results in my blog post titled “Why fossil fuel abundance is an illusion” (see Figure 8).


Figure 8:  Lower-bound estimates of fossil fuel reserves compared to fossil carbon emissions in MIT’s no-policy case

The key conclusion from this analysis (which is based on lower-bound resource estimates taken from the most recent Global Energy Assessment) is that resource limits are unlikely to constrain carbon emissions in the twenty-first century.  If we burn the conventional oil and gas resource base plus proven coal reserves, we’d need to burn only about 10% of the remaining coal in the “resource base” to hit the MIT no-policy case emissions by 2100.  In a world where the true cost of fossil fuels is masked by subsidies and unpriced pollution costs, it is clear to me that we’d easily burn enough fossil fuels to match the no-policy case totals.

Conclusions

The case for concern about rising greenhouse gas (GHG) concentrations is ironclad, and the graphics above show one compelling way to describe that case.  We’re on track for more than two doublings of greenhouse gas concentrations by 2100 when all warming agents are included.  Combined with an expected warming of about 3 Celsius degrees per doubling of GHG concentrations (the climate sensitivity), that implies a warming commitment of about 6 Celsius degrees on our current path (the 5 Celsius degree warming calculated by MIT for 2100 is lower because the climate takes many centuries to fully equilibrate to changes in concentrations).

The graphs above show a dramatic shift in the climate system caused by human activity, one that has no precedent in human history. We need to leave more than three-quarters of proven fossil fuel reserves in the ground if we’re to stabilize the climate (for more technical backup on this point, see this classic paper by Meinshausen et al. and the technical details provided in Cold Cash, Cool Climate). It’s hard to imagine a starker challenge for humanity, but it’s one that we must confront if we’re to leave a livable world for our descendants.

Disappointing article in Science about "greening" the Internet

In an article in a recent issue of Science, Diego Reforgiato Recupero discussed some aspects of reducing the environmental impact of the Internet, but muddled some key concepts, ignored others, and generally made a hash of things.  It’s disappointing because those of us who have worked on this issue for decades have tried at various points to publish much better articles on this topic in Science, to no avail.  The journal also refused to publish a corrective letter that I and other colleagues at LBNL, Northwestern, and the University of South Florida wrote, so we posted it here (it may appear in shortened form as an online comment to the article, but the link gives the full letter with all citations).

Stay tuned for our article forthcoming in Nature Climate Change, which deals with the issues around reducing greenhouse gas emissions from data centers (one important part of the Internet) in a much more sensible way.

Recupero, Diego Reforgiato. 2013. “Toward a Green Internet."  Science.  vol. 339, no. 6127. March 29, 2013. pp. 1533-1534. [http://www.sciencemag.org/content/339/6127/1533.short]

Book review of Cold Cash, Cool Climate on Skeptical Science

image

The web site Skeptical Science posted a review of my book Cold Cash, Cool Climate:  Science-based Advice for Ecological Entrepreneurs yesterday.  Here are the first few paragraphs:

Jonathan Koomey’s new book Cold Cash, Cool Climate: Science-Based Advice for Ecological Entrepreneurs puts forth an intriguing idea – entrepreneurs are one of the keys to meaningful, timely climate action. Society needs to make drastic changes to avoid dangerous global warming. However, institutions such as the government and big business only change slowly and incrementally, except under exceptional circumstances.
Koomey argues forcefully that it’s the very nature of entrepreneurs that makes them an important part of the solution.  While institutions often fear and resist change, entrepreneurs embrace it. The changes required are so large, no part of the economy will be untouched. Most people look at the enormity of this issue and despair. But entrepreneurs are famously scornful of the phrase “it can’t be done” and see opportunity.
This is not to say entrepreneurs are the magic bullet. One of the key points that Koomey makes is that we need to be addressing climate change on many fronts. The key to speeding up the change is to make the systems that are causing the climate problem obsolete more quickly. What entrepreneurs do is develop replacements that are so much better than existing ways of doing things that people are willing to “upgrade” to gain the advantages of the new technology.

To read more, go here.

Just because we've always done it that way doesn't mean we still should...

At the end of November 2012 I met with Professor Rong Zeng of Tsinghua University, and he told me of his long-term vision of an electric power system that uses only Direct Current (DC).  There is much to recommend this vision if one were starting a power system from scratch, but the US (and thus the world) settled on AC power about a century ago, after the fierce “war of the currents” in which Edison advocated DC power while Westinghouse, building on Tesla’s patents, championed AC power; eventually the AC proponents triumphed. This historical conflict is one of several terrific examples of the importance of path dependence in technological and economic systems.

I was reminded of Professor Zeng’s vision when I read an article in EE Times about the power systems needed for Light Emitting Diode (LED) fixtures.  The article describes the circuitry needed to minimize electromagnetic interference from the switching power supplies in LED fixtures, circuitry that adds cost and complexity to end-user devices.  If houses were wired for DC, this complex circuitry simply wouldn’t be needed.

Of course, there might be other disadvantages to widespread use of DC power for power systems, but it’s not at all a given that the current state of a technological system is how we’d design it if we were creating it from scratch.  And in fact, the point of whole systems design is to capture the benefits of designing from scratch, precisely because most technological systems are characterized by path dependence.  So it’s important for those approaching problems for the first time not to assume that the way things are is the way things have to be.  The future is ours to choose, and technological developments often push us into new design spaces that simply weren’t reachable before.  Your goal is to find those spaces and use them to your advantage.

Black carbon: a short-lived warming agent with big effects

Climate Progress reported yesterday on a new study about the warming effects of black carbon, published in the peer-reviewed Journal of Geophysical Research: Atmospheres.  The study reveals that black carbon (dark particles of soot emitted by various kinds of combustion) is roughly twice as powerful a warming agent as previously thought, and that it is the second most important climate forcing agent after carbon dioxide in terms of warming experienced thus far (methane is a close third).

The figure below, taken from the article, summarizes the complexities surrounding black carbon’s emissions and effects.

image

This research is important because:

1) It is yet another example (like the melting rate of Greenland’s glaciers) where the climate models have in the past significantly underestimated the warming effects of human activities.  As we learn more, the climate problem continues to grow more worrisome.

2) Unlike carbon dioxide, nitrous oxide, and methane, black carbon (BC) has a very short lifetime in the atmosphere (on the order of a week).  Reductions of BC emissions will lead to rapid reductions in the warming effect, so we can have a big impact on warming relatively quickly, if we focus on the sources of those emissions that exert the highest warming effect.

3) BC emissions (in the form of small particulate matter) are tied to significant human health effects, and the benefits of reducing these emissions are large enough to justify reducing them, irrespective of climate benefits.  The World Health Organization estimates that about 2 million people die each year from pollution from indoor combustion of solid fuels, the same kinds of combustion tied to BC emissions.

4) The biggest sources of BC are burning forests and savannas, coal and biomass in residential applications, and diesel fuel.  As the world warms, it’s likely that black carbon from burning forests will increase, which is a positive feedback associated with climate change.

5) Even if we are successful in reducing BC emissions, we’ll still need to reduce emissions of carbon dioxide, methane, nitrous oxide, and other long-lived forcing agents, because their long-term effects remain a real concern.

Related posts

Cold Cash, Cool Climate:  A summary of the argument

Why climate change causes BIG increases in severe weather

An effective graph summarizing the climate conundrum

Why fossil fuel abundance is an illusion

The danger of disembodied "facts"

My friend Alex Zwissler, who heads up the Chabot Space and Science Center, published an interesting blog post today giving guidance about how to separate fact from fiction on scientific topics, with the promise of more to come.  He focused today on determining the source of any claim, assessing whether the source is truly an authority, and considering what its underlying motivations might be.  Here are the first two paragraphs:

In one of my recent posts I had some fun with the topic “…the six things I want our kids to know about science” … of course there are more, but it was a good start.  Among the comments I received on the post, one from a friend posed a troubling question, which could be restated as, “OK wise guy, how DO we help our kids figure out what to believe with all this seemingly conflicting and confusing sciency stuff?”  In re-reading my rants on this subject I realize that while I have done a passable job of laying out the challenge of figuring out how to decide what to believe, I’ve done a crap job in providing any answers to the question how.  This led to some rapid self-reflection, asking myself the question, “OK wise guy, how do YOU figure out what to believe with all this seemingly conflicting and confusing sciency stuff?”  …fie on self-reflection. But the effort did allow me to see that I do have a bit of method to my madness, outlined here.

My first step in assessing the validity of a claim is pretty much always the same… I take a really, really close look at the source. I feel this is the best place to start, and while taking a bit of time and effort it can often yield immediate results. If the source does not pass the smell test, then move on.  This exercise breaks down into two broad categories, qualification and motivation.

I’ve written about some aspects of this topic in several places in Turning Numbers into Knowledge, including Chapter 11, which focuses on applying critical thinking to assessing arguments.  Alex’s post got me thinking about the practical complexities that often arise, even for researchers in a specific field, and I saw a good example of such complexities in a news report in Science Daily that was just posted today.

This report summarized a peer reviewed article that appeared in a well regarded journal (Environmental Science and Technology).  Even better, the report gave the actual citation with a link to the article’s DOI (digital object identifier, which is a record locator for scholarly papers).  It also states at the end that it is a summary of materials supplied by the American Chemical Society, so it’s not original reporting by Science Daily. So far, so good.

Now it gets interesting–here’s the first paragraph of the article:

Researchers from the Centre for Energy-Efficient Telecommunications (CEET) and Bell Labs explain that the information communications and technology (ICT) industry, which delivers Internet, video, voice and other cloud services, produces more than 830 million tons of carbon dioxide (CO2), the main greenhouse gas, annually. That’s about 2 percent of global CO2 emissions – the same proportion as the aviation industry produces. Projections suggest that ICT sector’s share is expected to double by 2020. The team notes that controlling those emissions requires more accurate but still feasible models, which take into account the data traffic, energy use and CO2 production in networks and other elements of the ICT industry. Existing assessment models are inaccurate, so they set out to develop new approaches that better account for variations in equipment and other factors in the ICT industry.

A reader might reasonably conclude that the research article added up carbon dioxide emissions and showed that the ICT industry emits about the same amount of greenhouse gases as global aviation, roughly 2 percent of global emissions.   When you read the article itself, however, you realize that the authors were simply summarizing the results of six other studies, three peer reviewed, three not, with the two key sources dating to 2007 and 2008, respectively.

To really understand whether this claim is true, you’d need to go back to those sources and read them all.  If you did that, you’d realize that the 2 percent estimate comes from non-peer-reviewed reports published 4 to 5 years ago, and that the cited research article was simply reproducing those figures as context for presenting its conclusions.  In essence, the “factoid” that 2% of the world’s carbon dioxide emissions come from ICT has become disembodied from the original source, making it difficult and time-consuming for people unfamiliar with this literature to determine whether it’s true.

None of this should discourage the lay reader from following Alex’s advice and assessing the credibility and motives of any information source, but it also highlights the importance of actually reading the original published source for any particular claim.  Summaries of other people’s results almost invariably create disembodied statistics and other confusions, so it’s incumbent on anyone who wants to use information for an important decision to go back to the original source.  That’s the only way to make sure you’ve really gotten it right.

In the Epilogue to the second edition of Turning Numbers into Knowledge I summarize a related example, in which some rather wild claims about Internet electricity use required detailed debunking.  Email me if you’d like a copy of the Epilogue and a few related articles–it’s a terrific illustration of disembodied statistics run amok.

Finally, I highly recommend William Hughes’ book titled Critical Thinking:  An Introduction to the Basic Skills.  The book is a marvelous introduction to critical thinking, and it discusses in some detail how to evaluate whether an authority is credible.  I have the 1997 edition, which was written by Hughes alone, but there appear to be two later editions coauthored by Jonathan Lavery and William Hughes, and used copies seem to be reasonably priced.  Here’s the link to the 2008 edition on Amazon US.  Amazon Canada has the 2008 edition new for about $48 Canadian.

The National Center for Science Education highlights the book Cold Cash, Cool Climate

The National Center for Science Education (NCSE) states as its mission “defending the teaching of evolution and climate science.”  NCSE highlighted Cold Cash, Cool Climate:  Science-based Advice for Ecological Entrepreneurs in a post on December 21, 2012, and it remains on their top page as of tonight.  At their request, I made “Chapter 5:  The Scope of the Problem” available as a free download through their site.  Please spread the word!

image

An effective graph summarizing the climate conundrum

Bill McKibben of 350.org created a graph summarizing the picture for carbon in a particularly effective way.  It’s part of a Washington Post compilation of the most important graphs of 2012.

image

This graph summarizes the warming limit approach to the climate problem nicely (for more details, see my post about “Why fossil fuel abundance is an illusion”, Chapters 1 to 3 and Appendix A in Cold Cash, Cool Climate, and Bill McKibben’s Rolling Stone article titled “Global Warming’s Terrifying New Math”).  When you compare the amount of carbon we can emit while staying within the 2 Celsius degree warming limit to proved reserves of fossil fuels, it’s clear that we just can’t burn it all.  This reality hasn’t yet sunk in, because the valuation of fossil fuel companies still implies that we can.
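As a rough check on that claim, here’s a minimal sketch of the arithmetic, using the round numbers popularized in McKibben’s Rolling Stone article (roughly 565 gigatons of CO2 of remaining budget for a reasonable chance of staying under 2 Celsius degrees, versus roughly 2,795 gigatons embodied in declared reserves); treat these as approximate, widely cited figures rather than values read directly off the graph.

```python
# Rough unburnable-carbon arithmetic using the round numbers popularized in
# McKibben's Rolling Stone article (approximate, widely cited figures).
budget_gtco2 = 565.0      # emissions budget for a reasonable chance of <2 C
reserves_gtco2 = 2795.0   # CO2 embodied in declared fossil fuel reserves

burnable_share = budget_gtco2 / reserves_gtco2
print(f"Burnable share of reserves: {burnable_share:.0%}")             # ~20%
print(f"Share that must stay in the ground: {1 - burnable_share:.0%}") # ~80%
```

That roughly 80% share that must stay in the ground is the same point made in the conclusions of the earlier post: we need to leave more than three-quarters of proven reserves unburned.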

As an aside, readers who know my book Turning Numbers into Knowledge are aware that I’m not a fan of 3D bar charts, and the point could have been better made with a 2D bar overlaid on top of Google Earth.  The message of the graph is the most important thing, however, and the image is striking, so I’ll give Bill a pass on this issue.

Addendum (December 28, 2012): The graph uses a term called “declared reserves”, which isn’t one that is used much in the literature, but it’s likely that it represents proved reserves plus some part of what geologists call “resources”.  The latter category represents fossil fuel deposits that we expect to be there based on our current knowledge, but we don’t know their quantities as accurately as the proved reserves (which we are pretty sure we can extract at current prices using current technologies).

More on causality and the national debt

As I pointed out in this post and this one, growth in the national debt can only be fairly evaluated by assigning causality to the different contributors to that debt.  Simple-minded comparisons of the debt when a president took office with the debt today will be misleading if previous occupants of the office implemented policies that continue into the next administration.

I recently discovered that the New York Times and the Washington Post have conducted analyses of the contributors to the debt based on the policies of the Bush and Obama administrations, and the bottom line is about the same in each case.  President Bush’s policies increased the debt by about $5 trillion (T) from 2001 to 2009, while President Obama’s policies increased the debt by $1T to $1.4T.  By attributing debt to specific policies, these two analyses exclude debt attributable to previous administrations, so the comparison is a consistent one.

These analyses don’t seem to include explicit treatment of the effect of the Great Recession on costs and revenues, which is something worth exploring (for one such analysis, see this graph via Paul Krugman).  They also aren’t explicit about how they treat inflation and the time value of money, both of which make money spent in earlier years more valuable than money spent in later years.  Someone evaluating these numbers would need to understand how those two effects were treated to use the data in other comparisons.  In any case, correcting for those two effects would tend to make President Bush’s relative contribution to debt even larger if they are not currently included in these comparisons.
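To see how those two effects work, here’s an illustrative sketch that converts debt added in different years into common base-year terms; the dollar amounts, the 2% inflation rate, and the 3% real discount rate are my illustrative assumptions, not numbers from the Times or Post analyses.

```python
# Illustrative only: express debt added in different years in common base-year
# terms. The amounts and rates are assumptions for illustration, not figures
# from the New York Times or Washington Post analyses.
INFLATION_RATE = 0.02   # assumed average annual inflation
DISCOUNT_RATE = 0.03    # assumed real discount rate
BASE_YEAR = 2012

def base_year_value(nominal_billions, year_incurred):
    years = BASE_YEAR - year_incurred
    constant_dollars = nominal_billions * (1 + INFLATION_RATE) ** years
    return constant_dollars * (1 + DISCOUNT_RATE) ** years  # add time value

# A dollar of debt added in 2002 counts for more than one added in 2010:
print(round(base_year_value(100, 2002)))   # ~164 (billion, in base-year terms)
print(round(base_year_value(100, 2010)))   # ~110
```

Because the Bush-era debt was incurred earlier, any such adjustment scales it up by more, which is why the relative gap would widen.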

Energy harvesting in the news

The world is starting to pay greater attention to energy harvesting, through which ultra-low power sensors and controls can be powered by ambient energy flows (like light, heat, motion, or stray radio and TV signals).  This week, Electronics Weekly reported on a study that estimated the current market for energy harvesting at $19M/year, and projected that it will grow roughly tenfold by 2017.  Putting aside the difficulty of projecting the future for economic and social systems, it’s clear that people are waking up to the potential for energy harvesting. Thus far it’s mostly been a niche application, the most widely used example of which is tire pressure sensors in cars (they use the motion of the wheel to power themselves).

There has also been more interest recently in biomedical applications.  Proteus Digital Health has an ingestible sensor that has no battery.  Instead, it has a cathode and anode, and uses your stomach juices as the electrolyte.  It goes inside a pill, and when the pill dissolves in your stomach it sends a tiny signal to a patch on your skin, which relays the signal to your cell phone or other mobile device, recording accurately when you took your medicine.  This is what Proteus calls “partial energy harvesting”, since the energy extracted is really embedded in the anode/cathode pair, and the electrolyte simply enables us to tap that energy for as long as the electrodes last.

Nature Biotechnology published an article recently on a device that can extract power from a biologic battery found in the inner ears of certain animals, including humans.

Mercier, Patrick P., Andrew C. Lysaght, Saurav Bandyopadhyay, Anantha P. Chandrakasan, and Konstantina M. Stankovic. 2012. “Energy extraction from the biologic battery in the inner ear."  Nat Biotech. Advance online publication, 11/08/online. [http://dx.doi.org/10.1038/nbt.2394]

The Wall Street Journal reported on the Nature Biotechnology paper as well as some other examples of energy harvesting in medicine, where the concept seems to be taking off.

It’s important to remember that energy harvesting is still in its infancy, and that it’s competing against single-use batteries that are also improving over time.  Once an electronic device has achieved very low power (averaging microwatts or nanowatts), it’s relatively easy to attach it to a single-use battery and achieve battery lifetimes in years (or even a decade or two).  For many applications, that’s more than sufficient, so the cost of energy harvesting needs to be compared to that of a single-use lithium or lithium thionyl chloride battery, and in many cases the battery will come out ahead. That won’t always be true, but it’s often true now.
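To see why the single-use battery is such a tough competitor, here’s a back-of-the-envelope sketch; the coin-cell capacity is a typical datasheet value and the average power draws are illustrative assumptions.

```python
# Back-of-the-envelope battery lifetime at ultra-low average power draw.
# A CR2032 lithium coin cell stores roughly 225 mAh at about 3 V (typical
# datasheet values); the average power draws below are illustrative.
capacity_wh = 0.225 * 3.0            # ~0.68 Wh
hours_per_year = 24 * 365

for avg_power_w in (1e-6, 10e-6, 100e-6):   # 1, 10, and 100 microwatts
    lifetime_years = capacity_wh / avg_power_w / hours_per_year
    print(f"{avg_power_w * 1e6:5.0f} uW -> ~{lifetime_years:.1f} years")
# At a few microwatts, shelf life and self-discharge, not capacity, set the limit.
```

At single-digit microwatts the battery outlasts most products it would be installed in, which is the bar energy harvesting has to clear on cost.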

A real-world example of how wrong "likely voter" screens can be

As if in reply to my missive about hazards in political polling, Politico has a great story today about just how far astray likely voter screens can lead even seasoned political professionals:

For Republicans, one of the worst parts of the GOP’s 2012 trouncing was that they didn’t see it coming.
Top party strategists and officials always knew there was a chance that President Barack Obama would get reelected, or that Republicans wouldn’t gain control of the Senate. But down to the final days of the national campaign, few anticipated the severe setbacks that Republicans experienced on Nov. 6.
The reason: Across the party’s campaigns, committees and super PACs, internal polling gave an overly optimistic read on the electorate. The Romney campaign entered the last week of the election convinced that Colorado, Florida and Virginia were all but won, that the race in Ohio was neck and neck and that the Republican nominee had a legitimate shot in Pennsylvania.

In other words, the likely voter screens the Republican pollsters applied to figure out who would actually vote were grossly inaccurate.  And this conclusion is confirmed by Democratic pollsters and the Obama campaign:

Democrats had argued for months before the election that Republican polling was screening out voters who would ultimately turn up to support Obama. In fact, Obama advisers said, if you applied a tighter likely voter screen to Democratic polling — counting only the very likeliest voters as part of the electorate — you could come up with results similar to what the GOP was looking at.

Keep in mind the twofold purpose of political polling next time you see polling results.  The first goal is to take an accurate snapshot of the electorate’s opinions on a certain date; the second is to predict the results on election day.  The first goal isn’t easy to achieve, but the second is even harder (because predicting human behavior is difficult in all circumstances and impossible in many).  Be much more skeptical of likely voter polls and focus more on polling averages than on a single poll, because you’re much more likely to have an accurate picture that way.  You should also read “polling postmortems” (like the one just published by Nate Silver at 538) to understand how each pollster stacked up against actual results.

This last conclusion applies to all kinds of forecasts, which is why I’m a strong advocate of retrospective comparisons of forecasting results to actual events (see for example Koomey, Jonathan G., Paul Craig, Ashok Gadgil, and David Lorenzetti. 2003. “Improving long-range energy modeling:  A plea for historical retrospectives.”  The Energy Journal (also LBNL-52448).  vol. 24, no. 4. October. pp. 75-92.   Email me for a copy.  Also check out this short post on a retrospective for a 1981 climate forecast).

My ARM Tech Con keynote ("Why ultra-low power computing will change everything") is now posted

I had great fun this past Wednesday (October 31st, 2012) talking at ARM’s Tech Con event, which is probably the world’s largest gathering of technologists devoted to low power innovation.  Compared to my Authors@Google talk, I’ve added some additional examples of ultra-low-power computing and communications and really boiled the talk down to its essential messages. Check it out!:

If political polls are driving you crazy, read this

As the election approaches, I’ve been musing over the nature of political polling.  There are many folks who make a living reporting on poll results, and a few who actually do solid analysis using such polls (with Nate Silver at 538 being the most prominent and sophisticated example).  Unfortunately, there are problems inherent in the enterprise of measuring public opinion that make it impossible to say with certainty what the outcome will be (at least for a close presidential election like this one promises to be).

There are two goals of a poll:  1) to create a “snapshot” of public opinion during the period over which the poll was conducted, and 2) to predict who will win the election.  It’s important to distinguish these two goals.

Taking a snapshot of public opinion seems straightforward, but it’s getting harder to do, as different parts of the public change their willingness to answer calls from strangers (in part aided by technologies like call-waiting, which are becoming more widespread).  People are also increasingly giving up landline phones, and that may introduce additional biases into the results.

The spread of polling results is largely the result of differences in how factors such as these are treated by the pollsters (and there are always buried assumptions and judgment calls in such analyses).  So it’s not at all clear that the snapshot of public preferences is accurate, even for the pollsters who are most sophisticated and use human interviewers and careful statistical methods.  Analysts try to adjust for these variations by taking averages of polling results, but such methods only work when there is no systematic bias affecting the results (as an aside, the “margin of error” that is commonly reported for polls is a simple statistical measure based on the number of respondents and does not reflect the kinds of structural biases I describe above).
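For reference, that reported margin of error is just the textbook sampling-error formula for a proportion.  Here’s a minimal sketch of the calculation at 95% confidence for a near-50/50 race (the sample sizes are illustrative); because it depends only on the number of respondents, it can’t capture the structural biases just described.

```python
import math

# The "margin of error" reported with polls is sampling error only: the 95%
# confidence interval half-width for a proportion near 50%. It says nothing
# about structural biases in who gets sampled or who actually votes.
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 1000, 2000):   # illustrative sample sizes
    print(f"n={n}: +/- {margin_of_error(n):.1%}")
# -> n=500: +/- 4.4%   n=1000: +/- 3.1%   n=2000: +/- 2.2%
```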

I want to turn now to the second goal, which has received comparatively little attention (it’s the one that prompted me to write this post in the first place).  Predicting what will happen is fraught with problems unless you are dealing with a physical system like planets orbiting the sun, and it is in attempting predictions that I think most pollsters get into real trouble.

The methods used to convert samples of “registered voters” to samples of “likely voters” are where the snapshot formally becomes a prediction.  In that conversion the pollster needs to decide who will submit a valid ballot by Tuesday, November 6th.  While it is possible to make educated guesses based on historical data, each election is different.  Will an energized Republican base offset increased enthusiasm in the Hispanic community?  Will Hurricane Sandy make voting difficult in some states?  Will undecided voters break for the incumbent or the challenger?  Will efforts to require photo ID reduce turnout of certain voter groups, and if so, by how much?  Will someone be able to manipulate the voting results?  None of these things will be known with certainty until after November 6th, and history may give no guidance at all.

So I’m convinced that polling simply can’t tell us with certainty who will win the election, at least when it’s close.  According to Nate Silver, the state level polls suggest an electoral advantage for President Obama (with about 75% probability today), but what the polls can’t say is what will actually happen on November 6th, and the margins in many states are small enough that the election could go either way.

And that’s where we, the people, come in.  Our choices are what determine the future. In an election where the polls are close, your vote really does count.  What we decide to do will be what makes the difference on Tuesday. So don’t get hung up on contradictory polling results–just go out and vote!

“The best way to predict the future is to invent it.”  –Alan Kay


Addendum, October 30, 2012:  The statistician Andrew Gelman wrote a nice piece in The New York Times that analyzes what “too close to call” means in the context of this election.  Highly recommended reading.  It addresses the ostensible contradiction that President Obama has a 75% chance of winning but that the election could go either way based on unpredictable factors.
