The data company IDC has a report out today suggesting that tablet shipments will surpass laptop shipments this year. That’s not surprising to anyone who’s been following this space, and it’s an indication (if we needed any more) that the future of computing is both mobile and ultra-low power.
Here’s the abstract of the article, to whet your appetite:
It is tempting to attribute variations in support for nuclear power to prominent accidents such as Three Mile Island in the United States or Fukushima in Japan. To illuminate how such attribution can be problematic, the authors discuss the historical context of the Three Mile Island accident in the United States. They point out that the US nuclear industry faced major challenges even before the 1979 accident: Forty percent of all US reactor cancellations between 1960 and 2010, they write, occurred before the accident in Pennsylvania. While safety concerns were undoubtedly a driver of public aversion to new nuclear construction in the United States, the nuclear industry already faced substantial economic and competitiveness obstacles, much like the nuclear industry worldwide before Fukushima.
The full reference is Hultman, Nathan E., and Jonathan G. Koomey. 2013. “Three Mile Island: The Driver of US Nuclear Power’s Decline?” Bulletin of the Atomic Scientists. vol. 69, no. 3. May/June. pp. 63-70. [http://bos.sagepub.com/content/69/3/63.abstract]
I wasn’t able to attend Al Gore’s memorial lecture for Stephen Schneider at Stanford, but it was just posted, and it’s “must-see”. Brilliant. Moving. Powerful. Inspirational. No visuals, just a passionate and committed person speaking from his heart.
Preceding Gore’s speech is a short documentary about Steve, which captures his essential brilliance, his clarity, and his passion for truth. Don’t miss it!
Please feel free to reuse these slides for any non-commercial purpose as long as you acknowledge the source and don’t alter the graphics in any significant way (OK to change slide titles, of course). To make it all clear, I arranged a license through Creative Commons (see below). If you want to use some or all of these graphics for a commercial purpose, please email me.
Yesterday Nature Communications published an article summarizing a new innovation in battery technology that promises much higher power AND energy densities, even for very small batteries. Batteries are usually good at delivering energy (kilowatt-hours) but aren’t as good at delivering power (kilowatts). These new batteries appear to overcome this problem, offering power densities as good as those of the best supercapacitors while maintaining reasonable energy densities.
Energy storage devices are typically characterized using a Ragone Plot, which shows power density (in watts per kilogram) on the x-axis, and energy density (in watt-hours per kilogram) on the y-axis. Figure 3 from the article shows how these new batteries stack up.
The new batteries are labeled A through H in the figure, and they have high power densities (like supercapacitors) and energy densities comparable in some cases to those of lead-acid, nickel-cadmium, or nickel-zinc batteries. The article makes the case that new ways of manufacturing batteries should allow us to overcome the power density limitations of typical batteries.
For real-world applications, of course, the issue will be whether these new batteries can be manufactured at competitive costs, but the article offers the hope that new ways of structuring battery materials can lead to substantial improvements in these devices. For those of us exploring the potential effects of widespread use of ultra-low-power electronics, that’s an exciting development.
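A handy property of the Ragone plot mentioned above: lines of constant discharge time run diagonally across it, because discharge time is just energy density divided by power density. Here’s a minimal sketch of that relationship (the device numbers are illustrative ballpark values I’ve chosen, not figures from the article):

```python
# Characteristic discharge time implied by a point on a Ragone plot:
# time = energy density (Wh/kg) / power density (W/kg).
def discharge_time_hours(energy_wh_per_kg, power_w_per_kg):
    """Hours a device lasts when discharged at its rated power density."""
    return energy_wh_per_kg / power_w_per_kg

# Illustrative (made-up) ballpark values, not taken from the article:
supercap_hours = discharge_time_hours(5, 5000)    # high power, low energy
lead_acid_hours = discharge_time_hours(35, 180)   # the reverse

print(f"Supercapacitor: {supercap_hours * 3600:.0f} seconds")
print(f"Lead-acid:      {lead_acid_hours * 60:.0f} minutes")
```

The promise of the new devices is to sit toward the upper right of the plot: supercapacitor-like power with battery-like energy.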
The Associated Press reported that the European Union failed today to fix its emissions trading system. The price of emissions allowances has fallen to about 5 euros ($6.50 US) per ton of carbon dioxide, about one quarter of the value that has prevailed over most of the system’s history. The price dropped because the politicians, in their wisdom, allocated too many allowances in an effort to buy off the various status quo interests who would be affected by the system.
Such shenanigans are another argument for a simple carbon tax, collected at the top end of the market for fossil fuels, i.e. on the producers of fossil fuels. A carbon tax is much simpler and cheaper to administer and enforce. If you’re going to use emissions trading, at least understand that you need to crank down on the allowable emissions over time, to track the emissions reductions that will be needed to preserve a livable climate.
LBNL just released a report titled “The Impact of City-level Permitting Processes on Residential Photovoltaic Installation Prices and Development Times: An Empirical Analysis of Solar Systems in California Cities.” Here’s summary text from an email I just received from one of the study’s coauthors (Ryan Wiser):
Business process or “soft” costs account for well over 50% of the installed price of residential photovoltaic (PV) systems in the United States, so understanding these costs is crucial for identifying PV cost-reduction opportunities. Among these costs are those imposed by city-level permitting processes, which may add both expense and time to the residential PV development process.
Building on previous research, this study evaluates the effect of city-level permitting processes on the installed price of residential PV systems and on the time required to develop and install those systems. The study uses a unique dataset from the U.S. Department of Energy’s Rooftop Solar Challenge Program, which includes city-level permitting process “scores,” plus data from the California Solar Initiative and the U.S. Census. Using multivariate statistical analysis, the study quantifies the price and development-time effects of city-level permitting processes on more than 3,000 PV installations across 44 California cities in 2011.
Results indicate that city-level permitting processes have a substantial effect on average PV installation prices and project development times.
–PV Installation Prices: The results suggest that cities within our sample with the most favorable permitting practices have average residential PV prices that are $0.27–$0.77/W lower (4%–12% of median PV prices in California) than cities with the most onerous permitting practices.
–PV Development Times: Though the findings for development times are less robust, results suggest that cities within our sample with the most streamlined permitting practices have average PV development times that are around 24 days shorter (25% of the median development time) than cities with the most onerous permitting practices.
Overall, these findings illustrate the potential price and development-time benefits of streamlining local permitting procedures for PV systems.
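As a quick consistency check on the price findings, one can back out the median installed price implied by the reported savings, assuming (my assumption, not stated in the summary) that the dollar-per-watt and percentage figures refer to the same median price:

```python
# Implied median installed PV price: divide each end of the $/W savings
# range by the corresponding percentage savings (assumes both figures
# refer to the same median price).
low_implied = 0.27 / 0.04    # $/W implied by the low end of the range
high_implied = 0.77 / 0.12   # $/W implied by the high end of the range

print(f"Implied median price: ${high_implied:.2f}/W to ${low_implied:.2f}/W")
```

Both ends imply a median around $6.40–$6.75/W, a plausible figure for California residential PV in 2011, so the two ways of stating the result hang together.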
The bottom line is that permitting matters, and that cities that pay attention to fixing these processes can have a large impact on the costs their residents pay for installing solar photovoltaics. Cities that want to go “whole hog” can copy the efforts of Lancaster, CA, which are about as aggressive as any I’ve seen.
Joe Romm of Climate Progress did a great service for climate communications on March 8th, 2013 by publishing this graph of historical and projected global temperatures:
Figure 1: Historical and projected global average surface temperatures on our current trajectory for fossil fuel emissions
The historical data in the graph came from a recently published article in Science, and the projected data came from the “no-policy” case developed by the folks at MIT back in 2009. The MIT case showed about a 5 Celsius degree increase in global average surface temperatures by 2100, equivalent to about a 9 Fahrenheit degree increase.
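That conversion is worth making explicit: temperature differences convert from Celsius to Fahrenheit using only the 9/5 scale factor, since the 32-degree offset applies to temperature levels, not to changes.

```python
def c_diff_to_f_diff(delta_c):
    """Convert a temperature difference from Celsius degrees to Fahrenheit
    degrees. Differences scale by 9/5 only; the 32-degree offset applies
    to temperature levels, not to changes."""
    return delta_c * 9 / 5

print(c_diff_to_f_diff(5))  # 9.0
```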
I like this graph because it combines what we know about historical temperatures with our most likely future–one where we continue to consume fossil fuels at increasing rates. I realized after seeing Joe’s graph that I could easily add context to it, because I have both historical data on carbon dioxide concentrations in the atmosphere and the detailed projections from the MIT researchers (which I obtained from them while working on my most recent book, Cold Cash, Cool Climate: Science-Based Advice for Ecological Entrepreneurs).
Here’s Figure 2-3 from Cold Cash, Cool Climate, updated to include CO2 concentrations through 2012. It shows historical carbon dioxide concentrations for the past 450,000 years, including the strikingly rapid increase since the 1800s. The early historical data come from the Vostok and Law Dome ice cores, while the more recent data (post-1959) come from direct measurements. We’ve pushed carbon dioxide concentrations well outside the range that has prevailed over the past 450 millennia.
Figure 2: Carbon dioxide concentrations for the past 450,000 years
The data for the past 12,000 years, the period over which human civilization developed, shows a picture similar to Romm’s graph of temperatures. Carbon dioxide concentrations were relatively stable for the entirety of this period, slightly increasing over time, with the most rapid increase only happening as the industrial revolution accelerated in the 1800s.
Figure 3: Carbon dioxide concentrations for the past 12,000 years
Of course, carbon dioxide concentrations are not the only determinant of global surface temperatures, so the concentrations graph won’t exactly match Romm’s temperature graph, but the fact that concentrations didn’t change much over 10,000 years is consistent with that graph.
The issue of most concern to people thinking sensibly about climate is not the historical change in carbon dioxide concentrations, but the likely trajectory of those concentrations if we continue on our current path. I’ve modified Figure 3 to include the MIT projections to 2100 to show just how big the change in carbon dioxide concentrations is likely to be (note that the y-axis in Figure 4 starts at 100 ppm, not 0 ppm). We’re on track for a threefold increase in the concentration of carbon dioxide by 2100 if our emissions proceed along the path expected in MIT’s no-policy case.
Figure 4: Carbon dioxide concentrations for the past 12,000 years and projected to 2100 assuming no change in policies
The picture is even more striking when compared to the past 450,000 years (Figure 5), showing that we’re moving the earth well out of the comfortable range in which humanity evolved and civilization developed.
Figure 5: Carbon dioxide concentrations for the past 450,000 years and projected to 2100 assuming no change in policies
Of course, it’s not just carbon dioxide that matters. If you include the other important warming agents (like methane, nitrous oxide, CFCs, and others), the MIT no-policy case shows even bigger changes. Figure 6 modifies Figure 4 to include these other agents in the projection, expressed as carbon dioxide equivalent concentrations. Such conversions are complex and imperfect, but they’re good enough to get an order-of-magnitude estimate of the total potential impact of the path we’re now on.
Figure 6: Carbon dioxide concentrations for the past 12,000 years and projected to 2100 assuming no change in policies, including other warming gases
Here’s the same graph going back 450,000 years (Figure 7).
Figure 7: Carbon dioxide concentrations for the past 450,000 years and projected to 2100 assuming no change in policies, including other warming gases
The critical takeaway from Figures 6 and 7 is that we’re on track for more than two doublings of greenhouse gas concentrations by 2100 if we continue on our current path (greenhouse gas equivalent concentrations rise by a factor of 4.8 by 2100). Many in the media and elsewhere mistakenly focus only on the climate sensitivity, which is the expected increase in global average surface temperatures for a doubling of greenhouse gas equivalent concentrations (the best estimate is now about 3 Celsius degrees, or 5.4 Fahrenheit degrees, per doubling). But it’s not just the temperature increase from a doubling of concentrations that matters; you also need to know how many doublings we’re in for!
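The arithmetic behind that point is simple enough to sketch, using the 4.8-fold concentration rise from the MIT no-policy case and the roughly 3 Celsius degrees per doubling best-estimate sensitivity:

```python
import math

concentration_ratio = 4.8        # MIT no-policy rise in GHG-equivalent by 2100
sensitivity_per_doubling = 3.0   # best-estimate climate sensitivity, C degrees

doublings = math.log2(concentration_ratio)
equilibrium_warming = doublings * sensitivity_per_doubling

print(f"Doublings by 2100:   {doublings:.2f}")
print(f"Equilibrium warming: {equilibrium_warming:.1f} Celsius degrees")
```

That works out to about 2.3 doublings, or roughly 6 to 7 Celsius degrees of eventual warming; the realized warming by 2100 is lower because the climate takes centuries to equilibrate.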
I’ve been frustrated for many years by the way numbers about projected greenhouse gas concentrations have been presented, even by some folks who ought to know better. The most common approach has been to focus just on carbon dioxide and make some hand-waving statements about the effects of the other warming agents, but that never satisfied me. As a comparison of Figure 4 and Figure 6 shows, the other warming agents are significant contributors to warming, increasing the effective greenhouse gas concentration from about 900 ppm (for carbon dioxide alone) to about 1350 ppm when all warming agents are included.
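The conversion underlying such carbon-dioxide-equivalent figures weights each gas by its global warming potential (GWP). Here’s a minimal emissions-side sketch using IPCC AR4-era 100-year GWPs and made-up emission amounts (the MIT analysis is far more sophisticated, working with concentrations and radiative forcing directly):

```python
# CO2-equivalent emissions: weight each gas by its 100-year global
# warming potential (GWP). The GWP values below are IPCC AR4-era
# figures; the emission amounts are invented for the example.
GWP_100 = {"CO2": 1, "CH4": 25, "N2O": 298}

def co2_equivalent(emissions_tonnes):
    """Sum GWP-weighted emissions (tonnes of each gas -> tonnes CO2e)."""
    return sum(GWP_100[gas] * tonnes for gas, tonnes in emissions_tonnes.items())

example = {"CO2": 1000.0, "CH4": 10.0, "N2O": 1.0}
print(co2_equivalent(example))  # 1000 + 250 + 298 = 1548.0
```

The imperfection mentioned above comes in part from the choice of time horizon: a gas like methane has a much higher GWP over 20 years than over 100.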
The MIT researchers deserve great credit for their work. They appropriately defined a “no-policy” case to clearly show the effect of the current path we’re on (avoiding the confusion among policy makers engendered by the “multiple baselines” approach embodied in the IPCC Fourth Assessment report). They also conducted a comprehensive analysis of all warming agents, and made their data available to other researchers who could summarize the results in effective ways. It was quite a relief to discover their work, and it made writing the first few chapters of Cold Cash, Cool Climate a lot easier.
The last part of the puzzle is to understand whether the MIT no-policy case is a plausible representation of a world in which we initiate no constraints on greenhouse gas emissions. One way to do that is to compare the history for various drivers of emissions (like population, energy efficiency, and economic growth) to the projections, a task that I undertook in Chapter 2 of Cold Cash, Cool Climate. In virtually every case, the projected trends looked a lot like the previous 50 years, and in some cases, the projections showed more modest growth than one might expect from recent history.
Another way to assess the projection is to examine just how many fossil fuel resources exist, to see if it’s plausible that the world could burn the amount of fossil fuels embodied in the MIT no policy case. I conducted this exercise in Chapter 2 and Appendix A of Cold Cash, Cool Climate. I also summarized the results in my blog post titled “Why fossil fuel abundance is an illusion” (see Figure 8).
Figure 8: Lower-bound estimates of fossil fuel reserves compared to fossil carbon emissions in the MIT no-policy case
The key conclusion from this analysis (which is based on lower-bound resource estimates taken from the most recent Global Energy Assessment) is that fossil fuel resource limits are unlikely to constrain carbon emissions in the twenty-first century. If we burn just the conventional oil and gas resource base plus proven coal reserves, we’d need to burn only about 10% of the remaining coal in the “resource base” to hit the MIT no-policy case emissions by 2100. In a world where the true cost of fossil fuels is masked by subsidies and unpriced pollution costs, it is clear to me that we’d easily burn enough fossil fuels to match the no-policy case totals.
Conclusions
The case for concern about rising greenhouse gas (GHG) concentrations is ironclad, and the graphics above show one compelling way to describe that case. We’re on track for more than two doublings of greenhouse gas concentrations by 2100 when all warming agents are included. Combined with an expected warming of about 3 Celsius degrees per doubling of GHG concentrations (the climate sensitivity), that implies about a 6 Celsius degree warming commitment on our current path (the 5 Celsius degree warming calculated by MIT for 2100 is lower because the climate takes many centuries to equilibrate fully to changes in concentrations).
The graphs above show a dramatic shift in the climate system caused by human activity, one that has no precedent in human history. We need to leave more than three-quarters of proven fossil fuel reserves in the ground if we’re to stabilize the climate (for more technical backup on this point, see this classic paper by Meinshausen et al. and the technical details provided in Cold Cash, Cool Climate). It’s hard to imagine a starker challenge for humanity, but it’s one that we must confront if we’re to leave a livable world for our descendants.
In an article in a recent issue of Science, Diego Reforgiato Recupero discussed some aspects of reducing the environmental impact of the Internet, but muddled some key concepts, ignored others, and generally made a hash of things. It’s disappointing because those of us who have worked on this issue for decades have tried at various points to publish much better articles on this topic in Science, to no avail. The journal also declined to publish a corrective letter that I and colleagues at LBNL, Northwestern, and the University of South Florida wrote, so we posted it here (it may appear in shortened form as an online comment on the article, but the link gives the full letter with all citations).
Stay tuned for our article forthcoming in Nature Climate Change, which deals with the issues around reducing greenhouse gas emissions from data centers (one important part of the Internet) in a much more sensible way.
Recupero, Diego Reforgiato. 2013. “Toward a Green Internet.” Science. vol. 339, no. 6127. March 29, 2013. pp. 1533-1534. [http://www.sciencemag.org/content/339/6127/1533.short]
Jonathan Koomey’s new book Cold Cash, Cool Climate: Science-Based Advice for Ecological Entrepreneurs puts forth an intriguing idea – entrepreneurs are one of the keys to meaningful, timely climate action. Society needs to make drastic changes to avoid dangerous global warming. However, institutions such as government and big business change only slowly and incrementally, except under exceptional circumstances.
Koomey argues forcefully that it’s the very nature of entrepreneurs that makes them an important part of the solution. While institutions often fear and resist change, entrepreneurs embrace it. The changes required are so large that no part of the economy will be untouched. Most people look at the enormity of this issue and despair. But entrepreneurs are famously scornful of the phrase “it can’t be done” and see opportunity.
This is not to say entrepreneurs are a magic bullet. One of the key points Koomey makes is that we need to address climate change on many fronts. The key to speeding up the change is to make the systems that are causing the climate problem obsolete more quickly. What entrepreneurs do is develop replacements that are so much better than existing ways of doing things that people are willing to “upgrade” to gain the advantages of the new technology.
At the end of November 2012 I met with Professor Rong Zeng of Tsinghua University, and he told me of his long-term vision of an electric power system that uses only Direct Current (DC). There would be much to recommend this vision if one were starting a power system from scratch, but the US (and thus the world) settled on AC power about a century ago, after fierce competition between Edison and Tesla. Edison advocated DC power, Tesla advocated AC power, and eventually the AC proponents triumphed. This historical conflict is one of several terrific examples of the importance of path dependence in technological and economic systems.
I was reminded of Professor Zeng’s vision when I read an article in EE Times about the power systems needed for Light Emitting Diode (LED) fixtures. The article describes the circuitry needed to minimize electromagnetic interference from the switching power supplies in LED fixtures, circuitry that adds cost and complexity to end-user devices. If houses were wired for DC, this complex circuitry simply wouldn’t be needed.
Of course, there might be other disadvantages to widespread use of DC power for power systems, but it’s not at all a given that the current state of a technological system is how we’d design it if we were creating it from scratch. And in fact, the point of whole systems design is to capture the benefits of designing from scratch, precisely because most technological systems are characterized by path dependence. So it’s important for those approaching problems for the first time not to assume that the way things are is the way things have to be. The future is ours to choose, and technological developments often push us into new design spaces that simply weren’t reachable before. Your goal is to find those spaces and use them to your advantage.
Climate Progress reported yesterday on a new study about the warming effects of black carbon, published in the peer-reviewed Journal of Geophysical Research: Atmospheres. The study reveals that black carbon (dark particles of soot emitted by various kinds of combustion) is roughly twice as powerful a warming agent as previously thought, and that it is the second most important climate forcing agent after carbon dioxide in terms of warming experienced thus far (methane is a close third).
The figure below, taken from the article, summarizes the complexities surrounding black carbon’s emissions and effects.
This research is important because:
1) It is yet another example (like the melting rate of Greenland’s glaciers) where the climate models have in the past significantly underestimated the warming effects of human activities. As we learn more, the climate problem continues to grow more worrisome.
2) Unlike carbon dioxide, nitrous oxide, and methane, black carbon (BC) has a very short lifetime in the atmosphere (on the order of a week). Reductions of BC emissions will lead to rapid reductions in the warming effect, so we can have a big impact on warming relatively quickly, if we focus on the sources of those emissions that exert the highest warming effect.
3) BC emissions (in the form of small particulate matter) are tied to significant human health effects, and the benefits of reducing these emissions are large enough to justify reducing them, irrespective of climate benefits. The World Health Organization estimates that about 2 million people die each year from pollution from indoor combustion of solid fuels, the same kinds of combustion tied to BC emissions.
4) The biggest sources of BC are burning forests and savannas, coal and biomass burned in residential applications, and diesel fuel. As the world warms, it’s likely that black carbon from burning forests will increase, which is a positive feedback associated with climate change.
5) Even if we are successful in reducing BC emissions, we’ll still need to reduce emissions of carbon dioxide, methane, nitrous oxide, and other long-lived forcing agents, because their long-term effects remain a real concern.
My friend Alex Zwissler, who heads up the Chabot Space and Science Center, published an interesting blog post today giving guidance about how to separate fact from fiction on scientific topics, with the promise of more to come. He focused today on determining the source of any claim, assessing whether the source is truly an authority, and considering what its underlying motivations might be. Here are the first two paragraphs:
In one of my recent posts I had some fun with the topic “…the six things I want our kids to know about science” … of course there are more, but it was a good start. Among the comments I received on the post, one from a friend posed a troubling question, which could be restated as, “OK wise guy, how DO we help our kids figure out what to believe with all this seemingly conflicting and confusing sciency stuff?” In re-reading my rants on this subject I realize that while I have done a passable job of laying out the challenge of figuring out how to decide what to believe, I’ve done a crap job in providing any answers to the question how. This led to some rapid self-reflection, asking myself the question, “OK wise guy, how do YOU figure out what to believe with all this seemingly conflicting and confusing sciency stuff?” …fie on self-reflection. But the effort did allow me to see that I do have a bit of method to my madness, outlined here.
My first step in assessing the validity of a claim is pretty much always the same… I take a really, really close look at the source. I feel this is the best place to start, and while taking a bit of time and effort it can often yield immediate results. If the source does not pass the smell test, then move on. This exercise breaks down into two broad categories, qualification and motivation.
I’ve written about some aspects of this topic in several places in Turning Numbers into Knowledge, including Chapter 11, which focuses on applying critical thinking to assessing arguments. Alex’s post got me thinking about the practical complexities that often arise, even for researchers in a specific field, and I saw a good example of such complexities in a news report in Science Daily that was just posted today.
This report summarized a peer-reviewed article that appeared in a well-regarded journal (Environmental Science and Technology). Even better, the report gave the actual citation with a link to the article’s DOI (digital object identifier, a record locator for scholarly papers). It also states at the end that it is a summary of materials supplied by the American Chemical Society, so it’s not original reporting by Science Daily. So far, so good.
Now it gets interesting–here’s the first paragraph of the article:
Researchers from the Centre for Energy-Efficient Telecommunications (CEET) and Bell Labs explain that the information communications and technology (ICT) industry, which delivers Internet, video, voice and other cloud services, produces more than 830 million tons of carbon dioxide (CO2), the main greenhouse gas, annually. That’s about 2 percent of global CO2 emissions – the same proportion as the aviation industry produces. Projections suggest that the ICT sector’s share is expected to double by 2020. The team notes that controlling those emissions requires more accurate but still feasible models, which take into account the data traffic, energy use and CO2 production in networks and other elements of the ICT industry. Existing assessment models are inaccurate, so they set out to develop new approaches that better account for variations in equipment and other factors in the ICT industry.
A reader might reasonably conclude that the research article added up carbon dioxide emissions and showed that the ICT industry emits about the same amount of greenhouse gases as global aviation, roughly 2 percent of global emissions. When you read the article itself, however, you realize that the authors were simply summarizing the results of six other studies, three peer-reviewed and three not, with the two key sources dating to 2007 and 2008, respectively.
To really understand whether this claim is true, you’d need to go back to those sources and read them all. If you did, you’d realize that the 2 percent estimate comes from non-peer-reviewed reports published 4-5 years earlier, and that the cited research article was simply reproducing those figures as context for presenting its conclusions. In essence, the “factoid” that 2% of the world’s carbon dioxide emissions come from ICT has become disembodied from its original source, making it difficult and time-consuming for people unfamiliar with this literature to determine whether it’s true.
None of this should discourage the lay reader from following Alex’s advice and assessing the credibility and motives of any information source, but it also highlights the importance of actually reading the original published source for any particular claim. Summaries of other people’s results almost invariably create disembodied statistics and other confusions, so it’s incumbent on anyone who wants to use information for an important decision to go back to the original source. That’s the only way to make sure you’ve really gotten it right.
In the Epilogue to the second edition of Turning Numbers into Knowledge I summarize a related example, in which some rather wild claims about Internet electricity use required detailed debunking. Email me if you’d like a copy of the Epilogue and a few related articles–it’s a terrific illustration of disembodied statistics run amok.
Finally, I highly recommend William Hughes’ book Critical Thinking: An Introduction to the Basic Skills. The book is a marvelous introduction to critical thinking, and it discusses in some detail how to evaluate whether an authority is credible. I have the 1997 edition, which Hughes wrote alone, but there seem to be two later editions coauthored by Jonathan Lavery and William Hughes, and used copies seem to be reasonably priced. Here’s the link to the 2008 edition on Amazon US. Amazon Canada has the 2008 edition new for about $48 Canadian.
This graph summarizes the warming limit approach to the climate problem nicely (for more details, see my post “Why fossil fuel abundance is an illusion”, Chapters 1 to 3 and Appendix A of Cold Cash, Cool Climate, and Bill McKibben’s Rolling Stone article “Global Warming’s Terrifying New Math”). When you compare the amount of carbon we can emit while staying within the 2 Celsius degree warming limit to proved reserves of fossil fuels, it’s clear that we just can’t burn it all. This reality hasn’t yet sunk in, because the valuation of fossil fuel companies still implies that we can.
As an aside, readers who know my book Turning Numbers into Knowledge are aware that I’m not a fan of 3D bar charts, and the point could have been better made with a 2D bar overlaid on top of Google Earth. The message of the graph is the most important thing, however, and the image is striking, so I’ll give Bill a pass on this issue.
Addendum (December 28, 2012): The graph uses a term called “declared reserves”, which isn’t one that is used much in the literature, but it’s likely that it represents proved reserves plus some part of what geologists call “resources”. The latter category represents fossil fuel deposits that we expect to be there based on our current knowledge, but we don’t know their quantities as accurately as the proved reserves (which we are pretty sure we can extract at current prices using current technologies).