I’ve been struggling for years to convince executives in large enterprises to fix the incentive, reporting, and other structural problems in data centers. The folks in the data center know that there are issues (like having separate budgets for IT and facilities) but fixing those problems is “above their pay grade”. That’s why we’ve been studying the clever things eBay has done to change their organization to take maximal advantage of IT, as summarized in this case study from last fall:
Spend five weeks learning from Jonathan Koomey, Ph.D., a Research Fellow at the Steyer-Taylor Center for Energy Policy and Finance at Stanford University, and one of the foremost international experts on data center energy use, efficiency, organization, and management. Jon, along with help from other industry leaders, has developed this course to help experienced executives bring their internal information technology (IT) organizations into the 21st century. For the Capstone Project, students will propose management changes to their own organizations to help increase business agility, reduce costs, and move their internal IT organization from being a cost center to a profit center.
I’m excited about this class, but to make it happen, we need lots of signups by mid-September. Please spread the word by sending this to upper-level management in the company where you work.
I taught a class called Energy and Society as a visiting professor at UC Berkeley back in Fall 2011. It was a kind of homecoming for me as I had taken the graduate version of that class at the Energy and Resources Group from John Holdren and the late Mark Christensen back in 1984.
At some point in the semester I gave an impromptu lecture on academic integrity, and I recently ran across a recording of that lecture by chance. It struck me as a nice concise summary that others might find useful. Here’s an edited version:
It’s about the right time in the semester to remind everybody about academic integrity. It’s important that any work that you submit as yours should be your own individual thoughts—not thoughts lifted from other people in any way, shape, or form. When you do assignments you have to use what’s called “proper attribution”, and by that we mean quoting accurately and making sure that somebody who reads what you write can trace back to the original source who said what.
I have been doing work on a paper recently with the historian Richard Hirsh, who never ever, ever uses quotes second hand. So if he hears somebody has quoted a particular person, the only way he ever uses that quote in his work is if he can look at the wording of the quote and the context in the original source, to make sure that he’s got it right. That turns out to be important, because you find mistakes all over the place.
For example, the old White’s Law that we talked about earlier in the class? The relationship between culture and energy? The original slide that I presented came from last year’s class and was dated 1973. It turned out to come from a paper in 1943. So there was a little typo. And so going back to the original source, Richard figured that out. It’s good to be a history professor; you have time to track these things down.
You have to use proper attribution—if you aren’t sure what that is then go to this site or this site and they will tell you a bit about that. It’s also important that you don’t work with other students or collaborate on assignments unless you’re given permission or instruction to do that. You need to do your own work and you need to make sure that whatever work you use to support your own work is properly attributed.
You should take this issue seriously. The University is a test bed for real life. As an undergrad, you need to experiment and try different things, but there are consequences—both here and in real life—for not following these rules and not guarding your academic integrity with great care.
Reputation is a precious and perishable thing, and if you use someone else’s work without attribution, then you are impugning your own intellectual integrity: you are hurting yourself. In the real world, if you are a scientist and somebody finds out that you have copied data, you are ruined. You are ruined. There is no way to recover from that as a scientist. You might be able to do some work in another field, but no one’s ever going to trust you again.
Academic integrity is about doing the right thing even when it is not convenient, meaning what you say and saying what you mean, and following through when you make a promise to someone. That’s all part of integrity. That’s all part of making sure that when people see your work they say: “I believe it”. And they will check it—in science especially they will always check it—but they will have an underlying confidence that because you’ve done your work with integrity in the past—every time they’ve checked it in the past it’s worked out well—they will believe your work and trust it and use it to support theirs. It’s a critical thing both personally and professionally. Please keep that in mind. If you have questions about this issue or about the rules about academic misconduct at UC Berkeley, please check this website.
My interview with Tom Bowman about “the unique role entrepreneurs play in climate action” was posted this past Saturday. It turned out very nicely and required little or no editing, so I guess I was on a roll. Please listen, send comments, and spread the word!
In the interview I talk about why economic models underestimate the scope and possibilities for change. I also explore why entrepreneurs are a crucial part of the solution. And I describe why hope is really the only choice in the face of climate change, the ultimate adaptive challenge.
Tom is founder and CEO of Bowman Change, Inc., a consultancy dedicated to helping organizations reap the benefits of working with purpose—making social issues and environmental change central to their missions. His podcast series on climate solutions is extensive and interesting.
The impression back in 2012 might have been that Lomborg’s think tank was struggling for cash, but a DeSmogBlog investigation suggests the opposite.
The nonprofit Copenhagen Consensus Center (CCC) has spent almost $1 million on public relations since registering in the US in 2008. More than $4 million in grants and donations have flooded in since 2008, three quarters of which came in 2011 and 2012.
In one year alone, the Copenhagen Consensus Center paid Lomborg $775,000.
It’s important to follow the money, as Readfearn has done, to determine who’s supporting the most prominent skeptics. Almost always the trail leads back to the status quo interests who want to keep earning profits from fossil fuel infrastructure as long as they can.
Addendum, June 26, 2014: Joe Romm at Climate Progress has gone into more detail about funding for Lomborg, indicating that some of the usual status quo suspects are behind these developments.
The new Risky Business report was released today. Worth a read. Here’s a paragraph motivating the report’s conclusions:
Climate Change: Nature’s Interest-Only Loan
Our research focuses on climate impacts from today out to the year 2100, which may seem far off to many investors and policymakers. But climate impacts are unusual in that future risks are directly tied to present decisions. Carbon dioxide and other greenhouse gases can stay in the atmosphere for hundreds or even thousands of years. Higher concentrations of these gases create a “greenhouse effect” and lead to higher temperatures, higher sea levels, and shifts in global weather patterns. The effects are cumulative: By not acting to lower greenhouse gas emissions today, decision-makers put in place processes that increase overall risks tomorrow, and each year those decision-makers fail to act serves to broaden and deepen those risks. In some ways, climate change is like an interest-only loan we are putting on the backs of future generations: They will be stuck paying off the cumulative interest on the greenhouse gas emissions we’re putting into the atmosphere now, with no possibility of actually paying down that “emissions principal.”
Our key findings underscore the reality that if we stay on our current emissions path, our climate risks will multiply and accumulate as the decades tick by.
By putting the risks in financial terms, this report makes clear what’s at stake. "Staying the course" has real costs and risks; it’s not just the alternative future that costs something. And all credible analyses show that the incremental costs of making the changes we need are modest (at most 1-2% of GDP, but very likely much less than that, for reasons that I can explain to anyone who’s interested in the details).
If you want a compact place to check out the latest climate news from around the world, my friend Mel Harte’s site is for you. It’s hosted on the Huffington Post.
Addendum: This list omitted one of my favorites, the Tu Quoque fallacy, where someone claims that an argument is invalidated because of hypocritical actions by the person making that argument. Thou shalt not do that either!
From an energy efficiency perspective, one of the most persistently difficult appliances is the clothes dryer. Apart from switching from electricity to natural gas, or buying a washing machine that has extra high spin speeds, there just isn’t much you can do about dryer energy use. For decades people have talked about dryers that use a heat pump instead of electric resistance heating, but they suffer from long dry times, high cost, and reliability issues.
Now comes a new technology that promises to revolutionize drying efficiency: the long wave radiofrequency (RF) dryer. This technology involves using very long wave electromagnetic RF radiation (about 70 feet or 21 meters wavelength, corresponding to 13.56 MHz, for the technical folks in the audience) to heat the water in the clothes. One of the great advantages of such an approach is that the RF energy can be “tuned” to preferentially heat water, so it’s very efficient indeed.
On February 5th, 2014 I had occasion to visit a Silicon Valley garage containing an RF dryer prototype. The company is called Cool Dry RF, and they have made tremendous progress on this difficult problem.
Photo credit: Jonathan Koomey.
Two of the three inventors of Cool Dry are in the picture: John Eisenberg (left) and Dave Wisherd (right). Chris Calwell of Ecova is in the center.
The first thing to understand is that the value of clothes moving through the clothes dryer is much higher than the value of the energy used to dry the clothes. Thus drying methods that are easier on the clothes (and extend clothing lifetime) have an inherent advantage.
RF dryers benefit from more direct coupling of drying energy to the water in the clothes, resulting in lower fabric temperature and fewer rotations. In addition, the technology allows the dryer to directly sense the capacitance of the load of clothing, giving a precise measurement of the actual moisture content of the clothes. By contrast, conventional moisture sensors (resistive strip sensors) are notoriously imprecise, and often result in over-drying (and thus fabric damage).
These benefits together result in significantly improved drying performance. The RF Cool Dry prototype was built from a conventional 240 VAC GE Spacemaker electric dryer. The engineers bought and maintain a second dryer, which is the exact same model in a conventional configuration. The second dryer is the baseline against which savings are measured.
Photo credit: Jonathan Koomey.
The standard GE Spacemaker dryer, instrumented but otherwise unmodified.
When the Cool Dry prototype is compared to the conventional dryer for drying a 3-pound mixed load (denim jeans, cotton T-shirts, high-speed washer spin), the results are striking. Drying time is the same (35 minutes), but electricity use is 26.5% lower for Cool Dry. In addition, 8.5 times fewer drum rotations and a fabric temperature 45 degrees F lower result in one-fifteenth the lint of the standard dryer. The more precise measurement and directed RF energy mean that the T-shirts are not over-dried as they are in the conventional dryer.
Your clothes will therefore last longer with the RF dryer. This technology is yet another example where improved efficiency is a byproduct of good design, and in fact it’s the other benefits (longer clothing lifetimes) that will probably be most motivating for consumers (the lower cost of operation will be a bonus).
Photo credit: Jonathan Koomey.
The back of the modified GE Spacemaker dryer that incorporates the Cool Dry technology. Some of the wiring is for instrumentation, but most is to make the device function.
This system is by no means optimized. The Cool Dry engineers were forced by the limitations of the Spacemaker dryer to make certain compromises, and I’m pretty sure that building the technology into a dryer design from the start will result in much bigger savings. The Cool Dry engineers didn’t want to say this definitively, because they are very careful fellows, but just take this as my speculation based on experience reviewing whole system design methods for improving efficiency. Starting from scratch almost always results in higher savings and lower costs.
The company is currently in licensing discussions with several big appliance companies, so there’s a good chance this innovation will make it to market. There is a long road between a prototype and commercialization, but this looks like an innovation that has legs. That’s hopeful, given how hard it’s been to improve dryer efficiency in the past. Let’s hear it for garage-based innovation, whole system design, and thinking outside the box!
Photo credit: Jonathan Koomey.
Now that is my kind of workshop!
Addendum: The earlier version of this post gave the wavelength of the RF waves as 70 meters, but the correct wavelength is 70 feet, as now described above.
To give context to the recent EPA proposed rule on existing power plant emissions, this week I compiled historical data and the most recent Energy Information Administration (EIA) projections on carbon emissions for the US utility sector. The results are shown in Figure 1.
Figure 1: US utility sector carbon dioxide emissions over time (Mt carbon dioxide per year)
The historical data for utility sector carbon emissions (which includes independent generation sources) through 2011 come from the Annual Energy Review. The historical data for 2012 and 2013 come from the May 2014 Monthly Energy Review. The business as usual forecast for 2014 to 2040 (assuming no changes in current policy, published well before the proposed EPA rules) comes from the Annual Energy Outlook 2014.
I estimated the emissions path for EPA’s proposed rule by assuming that total utility sector emissions would hit 25% below 2005 levels by 2020, and 30% below 2005 levels by 2030, based on a Wall Street Journal article about the rule. I then linearly interpolated between 2013 and 2020/2030 emission levels.
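For readers who want to reproduce that path, here’s a minimal sketch of the interpolation. The 2005 and 2013 emission levels below are approximate placeholders (not the exact EIA numbers in my spreadsheet), and the target fractions come straight from the Wall Street Journal’s description of the rule.

```python
# Rough sketch of the EPA-path interpolation described above.
# The 2005 and 2013 levels are approximate placeholders (Mt CO2/year);
# the exact values come from EIA's Annual Energy Review and Monthly Energy Review.
e_2005 = 2400.0  # approximate US utility-sector CO2 emissions in 2005
e_2013 = 2040.0  # approximate emissions in 2013

targets = {2020: 0.75 * e_2005,  # 25% below 2005 levels by 2020
           2030: 0.70 * e_2005}  # 30% below 2005 levels by 2030

def linear_path(start_year, start_value, end_year, end_value):
    """Linearly interpolate annual emissions between two (year, value) points."""
    step = (end_value - start_value) / (end_year - start_year)
    return {yr: start_value + step * (yr - start_year)
            for yr in range(start_year, end_year + 1)}

epa_path = linear_path(2013, e_2013, 2020, targets[2020])
epa_path.update(linear_path(2020, targets[2020], 2030, targets[2030]))
print(epa_path[2020], epa_path[2030])  # 1800.0 1680.0
```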
Historical growth in emissions averaged 4.1%/year from 1949 to 2005. Total emissions remained about the same through 2007, then dropped rapidly, falling more than 2%/year starting in 2007. Some of that decline was caused by the Great Recession, some by efficiency improvements, some by switching to natural gas, and some by increased penetration of renewables.
The EIA’s business as usual projection shows growth in utility emissions of about 0.4%/year from 2013 to 2040, while the EPA regulation path shows declines of about 1.8%/year through 2020 and 0.7%/year after that. So the first years of the rule to 2020 are estimated to result in an annual rate of decline in emissions slightly less than that experienced between 2007 and 2013.
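Those annual rates are just compound growth rates between the endpoint years; here’s the one-line calculation, using the same approximate levels as the sketch above.

```python
def annual_rate(start_value, end_value, years):
    """Compound annual growth (or decline) rate between two emission levels."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Illustrative check against the rates quoted above (levels are approximate).
print(annual_rate(2040.0, 1800.0, 7))    # 2013 -> 2020: about -1.8%/year
print(annual_rate(1800.0, 1680.0, 10))   # 2020 -> 2030: about -0.7%/year
```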
What I realized in compiling these numbers is that the coal industry should hope that the utility industry uses non-fossil resources (like renewables and efficiency) to displace coal plants and meet the constraints of the rule. By 2020, emissions will need to decline by about 12% compared to 2013, which means an absolute reduction in annual emissions of 240 Mt carbon dioxide per year.
If that reduction in coal use (which represents about 15% of US coal generation in 2013) is brought about by efficiency and renewables, then only 15% of coal plant generation existing in 2013 would be displaced. If instead the coal reductions come about from using natural gas in advanced combined cycle power plants, then utilities would need to displace twice as much coal to achieve the mandated emissions reductions, because natural gas fired generation emits half as much carbon dioxide per kWh as coal (this ignores the still live issue of fugitive emissions of methane, which are almost certainly higher than official estimates, and would make this problem even worse).
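To make the “twice as much coal” arithmetic concrete, here’s a back-of-the-envelope sketch. The emission factors are round approximations (the only assumption that matters is that gas-fired generation emits roughly half as much CO2 per kWh as coal), and the 240 Mt figure is the 2020 reduction estimated above.

```python
# Rough substitution arithmetic for the paragraph above.
coal_rate = 1.0   # approximate t CO2 per MWh for existing coal plants
gas_rate = 0.5    # approximate t CO2 per MWh for combined-cycle natural gas

required_cut = 240.0  # Mt CO2/year reduction needed by 2020 (from the post)

# Mt divided by (t per MWh) gives millions of MWh, i.e., TWh of coal displaced.
displaced_by_zero_carbon = required_cut / coal_rate          # renewables, efficiency, nuclear
displaced_by_gas = required_cut / (coal_rate - gas_rate)     # substituting gas for coal

print(displaced_by_zero_carbon, "TWh of coal displaced by zero-carbon resources")
print(displaced_by_gas, "TWh of coal displaced if gas does the job instead (twice as much)")
```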
So once the coal industry accepts that emissions have to come down, and eventually they will, then their natural temporary allies in a scenario of declining coal production (or at least their somewhat less hated opponents) are wind generators, photovoltaic panels, nuclear power plants (if we can build them on time and on budget), and efficient appliances. Weird, huh?
For those who want to review my spreadsheet and check the numbers for yourselves, download it here. It has the AEO 2013 and 2014 year by year numbers, which you can’t get from the official EIA reports. It also contains the graph above for ease of use. Feel free to reproduce the graph as long as you link back to this post and acknowledge the source.
Addendum, Jun 6, 2014: One of my most astute colleagues pointed out that the EPA rules are specified in terms of emissions rates, rather than absolute emission levels, so how renewables and efficiency contribute to meeting the standards is more complicated than I indicate above. The general lesson still holds, as long as you are thinking about absolute caps on emissions (as the Northeast US and California are doing), but the exact implications need careful study for the US as a whole. Live and learn!
One of the most common Internet memes about climate is the idea that the earth hasn’t warmed since 1998. This erroneous claim is based on cherry picking data and ignoring the increase in the heat content of the oceans, which is where most of the historical warming has been stored. This issue is explored fully in this section of the website Skeptical Science, as well as another page that explores the right and wrong ways to understand trends, but I recently saw a nice graph that boils it all down.
The graph was in the recently released National Climate Assessment, but it was on p. 796, in Appendix 4 (talk about burying the lede!). It shows average global surface temperatures by decade over time, indicating that the 2000s were hotter than the 1990s, which were hotter than the 1980s, which were hotter than the 1970s. It shows clearly to anyone who can read a graph why global warming didn’t stop in 1998. For those still perpetuating this falsehood, please find another hobby.
Figure caption: The last five decades have seen a progressive rise in Earth’s average surface temperature. Bars show the difference between each decade’s average temperature and the overall average for 1901 to 2000. The far right bar includes data for 2001-2012. (Figure source: NOAA NCDC). National Climate Assessment, p.796, Figure 7 in Appendix 4.
I lectured in Leslie’s class at her invitation this past school year, sharing the class with my old friend Terry Root.
Here’s the first paragraph:
Dr. Jonathan Koomey’s 199-page gem of a book provides a brilliant, targeted and concise analytical basis for the evaluation and development of entrepreneurial understanding and opportunities in the climate space. The book is meant to give time-strapped ecological entrepreneurs a scientific grounding in what opportunities and constraints arise from the numerous and growing problems that stem from climate change – and this reviewer has also used it twice as an introductory text for the graduate-level seminar class she started and teaches on Engineering and Climate Change at Stanford.
And here’s the last paragraph:
Don’t think you’ve read the full story about this deceptively accessible book in this short review. The meticulously researched figures, the examples, the tables and appendices are packed full of careful detail that can help an entrepreneur, an innovator, a concerned citizen, to get going rapidly to develop his or her own breakthrough contributions to the large collection of approaches that must be imagined and evaluated and selected among, in order to accelerate our progress toward the kind of future that we would be happy to leave to our children, and to theirs. Respectfully, it’s time to get going.
It’s nice to have the book being used for the purpose for which it was intended, inspiring the next generation of entrepreneurs to tackle the climate problem, which is the biggest collective (and adaptive) challenge humanity has ever faced.
Knovel just published my latest white paper, titled Climate Change as an Adaptive Challenge (it’s a free download, but you’ll need to type in some info about yourself before downloading). The blog post below summarizes the argument. The white paper itself contains the latest data demonstrating the case for urgent action, and it ties in nicely to the recently released National Climate Assessment.
Introduction
In their new book Moments of Impact, Chris Ertel and Lisa Kay Solomon, citing Ronald Heifetz, describe two types of challenges:
• Technical challenges are those we can solve using well-known techniques and tools. Such problems are well defined and well understood.
• Adaptive Challenges, on the other hand, are “messy, open-ended, and ill defined”. The tools needed to address them may not yet exist. Such problems require different kinds of leadership and problem solving skills, and cry out for interactive engagement among all the people needed to solve them.
Ertel and Solomon write
“It’s nearly impossible for any one senior executive–or small leadership team–to solve adaptive challenges alone. They require observations and insights from a wide range of people who see the world and your organization’s problems differently. And they require combining those divergent perspectives in a way that creates new ideas and possibilities that no individual would think up on his or her own.” [1, page 10]
Climate change is the ultimate adaptive challenge, because the rate and scope of the changes needed to solve the problem will stretch us to the limit. In addition, the solutions must involve changes in behavior and institutional structure, not just technology, because the problem is so pressing. As I argued in Cold Cash, Cool Climate,
“Climate change is probably the biggest challenge modern humanity has ever faced. It’s bigger than World War II, because it will take decades to vanquish this foe. It’s harder than ozone depletion, whose causes were far less intertwined with industrial civilization than fossil fuels and other sources of greenhouse gases. And it’s more intractable than the Great Depression (or our current economic malaise) because financial crises eventually pass, assuming we learn from past mistakes and fix the financial system (again!).” [2]
We have many options for reducing greenhouse gas emissions, but we’ll need new ones, too. Existing options will only get us so far. That’s why we’ll have to take an evolutionary approach to this problem, one that embraces the adaptive nature of the challenge before us.
As Ertel and Solomon argue, tackling adaptive challenges requires “strategic conversations” that help define the problem and generate innovative ideas for solving it. The first step in creating such a strategic conversation is to understand the challenge before us, and this blog post and the associated research note (LINK) are intended to foster such understanding.
The case for urgent action
The case for concern about rising greenhouse gas (GHG) concentrations is ironclad, and the graphics in the white paper show one compelling way to describe that case. We’re on track for more than two doublings of greenhouse gas concentrations by 2100 when all warming agents are included (see Figure 1). Combined with an expected warming of about 3 Celsius degrees per doubling of GHG concentrations (the climate sensitivity), that implies about a 6 Celsius degree warming commitment on our current path (the 5.5 Celsius degree warming calculated by MIT is lower because the climate takes many centuries to equilibrate fully to changes in concentrations).
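As a sanity check, the warming commitment is just the climate sensitivity multiplied by the number of doublings, which follows from the logarithm of the concentration ratio. A quick illustrative calculation, using only the rounded figures quoted above:

```python
import math

climate_sensitivity = 3.0  # approximate Celsius degrees of warming per doubling of CO2-eq

# "More than two doublings by 2100" corresponds to concentrations rising by
# a factor of at least four relative to the preindustrial baseline.
concentration_ratio = 4.0  # illustrative ratio of 2100 CO2-eq to preindustrial CO2-eq

doublings = math.log2(concentration_ratio)
warming_commitment = climate_sensitivity * doublings
print(warming_commitment)  # about 6 Celsius degrees of eventual warming
```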
Figure 1: Carbon dioxide equivalent concentrations for the past 800,000 years and projected to 2100 assuming no change in policies, including other warming gases
Sources: CO2 concentration data before year zero taken from a composite record produced by NOAA [ftp://ftp.ncdc.noaa.gov/pub/data/paleo/icecore/antarctica/epica_domec/edc-co2-2008.txt], based on [3, 4, 5, 6, 7]. CO2 concentration data from years zero to 1958 taken from a composite record produced by NOAA [ftp://ftp.ncdc.noaa.gov/pub/data/paleo/icecore/antarctica/law/law2006.txt], based on [8, 9, 10, 11, 12, 13, 14, 15, 16, 17]. Concentrations for years 1959 to 2012 are taken from Keeling et al. [18] and Tans [19]. MIT no-policy case concentrations taken from Sokolov et al. [20]. Negative numbers indicate years BC. Note that y-axis begins at 100 parts per million by volume.
The graphs in my white paper show a dramatic shift in the climate system caused by human activity, one that has no precedent in human history. We need to keep a significant fraction of proved fossil fuel reserves in the ground if we’re to stabilize the climate and avoid these changes. It’s hard to imagine a starker adaptive challenge for humanity, but it’s one that we must confront if we hope to leave a livable world for our descendants.
What can we do?
To meet the climate challenge we’ll need rapid GHG emission reductions in the next few decades. This conclusion is inescapable because it’s cumulative emissions that matter, due to the long lifetime of many greenhouse gases. If we want to prevent global temperatures from increasing more than 2 Celsius degrees, we have a fixed emissions budget over the next century. If we emit more now we’ll have to reduce emissions more rapidly later, so delaying action (either to gather more data or to focus on energy innovation) is foolish and irresponsible. If energy technologies improved as fast as computers there might be an argument for waiting under some circumstances, but they don’t, so it’s a moot point.
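A toy calculation shows why delay is so costly under a fixed cumulative budget. The numbers below are hypothetical and chosen only to illustrate the shape of the tradeoff: every year we hold emissions flat, the eventual phase-out has to happen that much faster.

```python
budget = 1000.0  # hypothetical remaining cumulative emissions budget (arbitrary units)
rate = 40.0      # hypothetical current annual emissions (same units per year)

def years_allowed_for_phaseout(delay_years):
    """If we emit at the current rate for `delay_years` and then ramp linearly
    to zero, how long can that ramp take before the budget is exhausted?"""
    remaining = budget - delay_years * rate
    return max(0.0, 2.0 * remaining / rate)  # area under a linear ramp: rate * T / 2

for delay in (0, 5, 10, 15):
    print(delay, "years of delay ->", years_allowed_for_phaseout(delay), "years for the phase-out")
```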
Of course, we need new technologies and should therefore invest heavily in research and development, but there are vast opportunities for emission reductions using current technologies, and cost reductions for these technologies are dependent on implementing them on a large scale (learning by doing only happens if we do). So the focus in the next few decades should be on aggressive deployment of current low-emissions technologies, bringing new technologies into the mix as they emerge.
The changes we need are so large that no part of the economy will remain untouched, and that means opportunity. In fact, we’ll probably need to scrap some capital in the energy sector, given the rate of emissions reductions that will be required to maintain a livable climate. Entrepreneurs can lead the way by designing new low-emission products, services, and institutional arrangements that are so much better than what they replace that people are eager to adopt them (and to scrap some of their high emitting existing capital along the way).
Emissions reduction opportunities start by focusing on the tasks we want to accomplish and associating those tasks with flows of energy, emissions, and costs, which you then work to minimize. This focus on tasks frees you from the constraints of how they are currently accomplished and allows you to capture compounding resource savings upstream. By considering the whole system and designing to approach theoretical limits of efficiency, it is often possible to achieve drastically reduced emissions while also improving other characteristics of products or services substantially.
Information and communication technology (ICT) is accelerating the rate of innovation throughout the economy, and that development has implications for business opportunities in this space. ICT speeds up data collection, helps us manage complexity, allows us to restructure our institutions more easily, and lets us rapidly learn and adapt to changing circumstances. It also creates a continuously renewable source of emissions reductions, and is a great place to look for opportunities because it generally offers rapid speed to market and low startup costs.
When considering the climate issue, we can’t avoid the issue of institutional governance. Government has an essential role to play in defining property rights, enforcing contracts, and internalizing external costs. No other institution can do these things, so we need to ensure that these tasks are performed in a way that leads to the kind of society we want. When it comes to government, more is not better. Less is not better. Only better is better. And better is what we as a society should strive for.
Conclusions
Surviving this stage of human development means we’ll need to evolve as a species to learn how to face adaptive challenges like this one. We’ll need to foster rapid innovation, fierce competition, and active coordination between businesses, all at the same time. We’ll also need to change how we think about our responsibilities to each other, to the earth, and to future generations. Innovations in our values can be as powerful as those for new technologies in opening up new possibilities for the future, and these we also need to explore.
The technology now exists for us to move past combustion in most applications, but scaling it up to meet the demands of a modern industrial society won’t be easy. Of course, not doing so will be harder still, because of the damages unrestricted climate change will inflict on the earth and on human society. It’s long past time to get started. There’s simply no more time to waste.
Author’s biography
Jonathan Koomey is a Research Fellow at the Steyer-Taylor Center for Energy Policy and Finance at Stanford University, worked for more than two decades at Lawrence Berkeley National Laboratory, and has been a visiting professor at Stanford University (2003-4 and Fall 2008), Yale University (Fall 2009), and UC Berkeley’s Energy and Resources Group (Fall 2011). He was a lecturer in management at Stanford’s Graduate School of Business in Spring 2013. Dr. Koomey holds M.S. and Ph.D. degrees from the Energy and Resources Group at UC Berkeley, and an A.B. in History of Science from Harvard University. He is the author or coauthor of nine books and more than 200 articles and reports. He’s also one of the leading international experts on the economics of reducing greenhouse gas emissions, the effects of information technology on resource use, and the energy use and economics of data centers. He’s the author of Turning Numbers into Knowledge: Mastering the Art of Problem Solving (which has been translated into Chinese and Italian) and Cold Cash, Cool Climate: Science-Based Advice for Ecological Entrepreneurs (both from Analytics Press).
3. Indermühle, A., E. Monnin, B. Stauffer, T.F. Stocker, and M. Wahlen. 1999. “Atmospheric CO2 concentration from 60 to 20 kyr BP from the Taylor Dome ice core, Antarctica.” Geophysical Research Letters. vol. 27, pp. 735-738.
4. Lüthi, D., M. Le Floch, B. Bereiter, T. Blunier, J.-M. Barnola, U. Siegenthaler, D. Raynaud, J. Jouzel, H. Fischer, K. Kawamura, and T.F. Stocker. 2008. "High-resolution carbon dioxide concentration record 650,000-800,000 years before present." Nature. vol. 453, no. 7193. 2008/05/15. pp. 379-382. [http://www.nature.com/nature/journal/v453/n7193/suppinfo/nature06949_S1.html]
5. Monnin, E., A. Indermühle, A. Dällenbach, J. Flückiger, B. Stauffer, T.F. Stocker, D. Raynaud, and J.-M. Barnola. 2001. "Atmospheric CO2 concentrations over the last glacial termination." Science. vol. 291, pp. 112-114.
6. Petit, J.R., J. Jouzel, D. Raynaud, N.I. Barkov, J.-M. Barnola, I. Basile, M. Benders, J. Chappellaz, M. Davis, G. Delayque, M. Delmotte, V.M. Kotlyakov, M. Legrand, V.Y. Lipenkov, C. Lorius, L. Pépin, C. Ritz, E. Saltzman, and M. Stievenard. 1999. "Climate and atmospheric history of the past 420,000 years from the Vostok ice core, Antarctica." Nature. vol. 399, pp. 429-436.
7. Siegenthaler, U., T.F. Stocker, E. Monnin, D. Lüthi, J. Schwander, B. Stauffer, D. Raynaud, J.-M. Barnola, H. Fischer, V. Masson-Delmotte, and J. Jouzel. 2005. "Stable Carbon Cycle-Climate Relationship During the Late Pleistocene." Science. vol. 310, pp. 1313-1317.
8. Etheridge, D.M., L.P. Steele, R.J. Francey, and R.L. Langenfelds. 1998. "Atmospheric methane between 1000 A.D. and present: evidence of anthropogenic emissions and climatic variability." Journal of Geophysical Research. vol. 103, pp. 15979-15996.
9. Etheridge, D.M., L.P. Steele, R.L. Langenfelds, R.J. Francey, J.-M. Barnola, and V.I. Morgan. 1996. "Natural and anthropogenic changes in atmospheric CO2 over the last 1000 years from air in Antarctic ice and firn." Journal of Geophysical Research. vol. 101, pp. 4115-4128.
10. Ferritti, D.F., J.B. Miller, J.W.C. White, D.M. Etheridge, K.R. Lassey, D.C. Lowe, C. MacFarling Meure, M.F. Dreier, C.M. Trudinger, and T.D. van Ommen. 2005. "Unexpected Changes to the Global Methane Budget over the last 2,000 Years." Science. vol. 309, pp. 1714-1717.
11. Langenfelds, R. L., P. J. Fraser, R. J. Francey, L. P. Steele, L. W. Porter, and C. E. Allison. 1996. "The Cape Grim air archive: The first seventeen years, 1978-1995." In Baseline Atmospheric Program Australia. Edited by R. J. Francey, A. L. Dick and N. Derek. Melbourne: Bureau of Meteorology and CSIRO Division of Atmospheric Research. pp. 53-70.
12. Langenfelds, R.L., P.J. Fraser, L.P. Steele, and L.W. Porter. 2004. "Archiving of Cape Grim Air." In Baseline Atmospheric Program Australia. Edited by J. M. Cainey, N. Derek and P. B. Krummel. Melbourne: Bureau of Meteorology and CSIRO Atmospheric Research. pp. 48.
13. Langenfelds, R.L., L.P. Steele, M.V. Van der Schoot, L.N. Cooper, D.A. Spencer, and P.B. Krummel. 2004. "Atmospheric methane, carbon dioxide, hydrogen, carbon monoxide and nitrous oxide from Cape Grim flask air samples analysed by gas chromatography." In Baseline Atmospheric Program Australia. Edited by J. M. Cainey, N. Derek and P. B. Krummel. Melbourne: Bureau of Meteorology and CSIRO Atmospheric Research. pp. 46-47.
14. MacFarling Meure, C. 2004. The natural and anthropogenic variations of carbon dioxide, methane and nitrous oxide during the Holocene from ice core analysis. Thesis, University of Melbourne.
15. MacFarling Meure, C., D. Etheridge, C. Trudinger, P. Steele,, and T. van Ommen R. Langenfelds, A. Smith, and J. Elkins. 2006. "The Law Dome CO2, CH4 and N2O Ice Core Records Extended to 2000 years BP." Geophysical Research Letters. vol. 33, no. 14. pp. L14810 10.1029/2006GL026152.
16. Sturrock, G. A., D. M. Etheridge, C. M. Trudinger, P. J. Fraser, and A. M. Smith. 2002. "Atmospheric histories of halocarbons from analysis of Antarctic firn air: Major Montreal Protocol species." Journal of Geophysical Research: Atmospheres. vol. 107, no. D24. pp. 4765. [http://dx.doi.org/10.1029/2002JD002548]
17. Trudinger, C. M., D. M. Etheridge, P. J. Rayner, I. G. Enting, G. A. Sturrock, and R. L. Langenfelds. 2002. "Reconstructing atmospheric histories from measurements of air composition in firn." Journal of Geophysical Research: Atmospheres. vol. 107, no. D24. pp. 4780. [http://dx.doi.org/10.1029/2002JD002545]
18. Keeling, R.F., S.C. Piper, A.F. Bollenbacher, and J.S. Walker. 2009. Atmospheric CO2 records from sites in the SIO air sampling network. Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tenn., U.S.A.
20. Sokolov, A.P., P.H. Stone, C.E. Forest, R. Prinn, M.C. Sarofim, M. Webster, S. Paltsev, C.A. Schlosser, D. Kicklighter, S. Dutkiewicz, J. Reilly, C. Wang, B. Felzer, J. Melillo, and H.D. Jacoby. 2009. Probabilistic Forecast for 21st Century Climate Based on Uncertainties in Emissions (without Policy) and Climate Parameters. Cambridge, MA: Massachusetts Institute of Technology (MIT) Joint Program on the Science and Policy of Climate Change. 169. January. [http://globalchange.mit.edu/files/document/MITJPSPGC_Rpt169.pdf]
In this earlier post I discussed why graphene is such an exciting new material for electronics, and now Bloomberg and Mac Rumors have articles about it.
Here’s the intro to the Bloomberg article.
The main battleground between Samsung Electronics Co. and Apple Inc. in the global smartphone market is moving from courtrooms to the laboratory, amid a race for patents on atom-thick technology for the next generation of devices.
Graphene is sort of like the high-tech version of cling wrap. It’s a transparent material that conducts electricity so it can be stretched across glass surfaces of phones or tablets to make them into touch screens. Thinner, stronger and more flexible than current technology, it’s ideal for futuristic gadgets like bendable smartwatches or tablets that fold up into smartphones.
Here’s the intro to the Mac Rumors article, which has a bit more technical detail:
In a world where mobile devices are becoming thinner and thinner and in some cases being worn on the wrist or other parts of the body, graphene may be the wonder material of the future, with properties that make it stronger than steel, more flexible than rubber and more conductive than most metals. As a result, the material could initiate a new wave of innovation in hardware design and manufacturing that may lead to incredibly thin and flexible devices. According to Bloomberg, it also may become the next battlefield for Apple and Samsung.
Graphene is graphite, the material in pencils, arranged in a layer that is one atom thick. The arrangement of the carbon molecules makes the material stronger than steel and even diamonds. It also is flexible, conductive and so transparent that it is nearly invisible to the naked eye. It can be applied to other materials, potentially allowing for the creation of flexible displays and bendable devices.
A friend of mine from college who is a practicing lawyer recently took a continuing legal education class at a local college. For his class project he chose to analyze the Keystone XL pipeline. He consented to me posting most of his memo, with his anonymity preserved and with his conclusions removed.
I found his questions and answers to be balanced and useful, and thought others should have the benefit of reading them as well. The presentation of the jobs impacts probably overestimates what is likely to come to pass, and I think it’s important to describe the errors in the State Department’s analysis (because they are both glaring and consequential), but overall this summary is worth a read, no matter what your conclusions are about the pipeline. It focuses more on the legal issues than many other discussions of this type, so that’s also useful.
On to the memo! It was first sent to me in April 2014. Here are the first few paragraphs:
The following is information which I have collected in connection with my in-class presentation on The Keystone XL Pipeline. I have used a question-and-answer format below, as some newspapers are also prone to doing, so as to summarize and speak to the complex and multi-dimensional issues presented by this landmark situation unprecedented in the annals of American environmental law and policy, as far as I am aware.
1. What Is The “Keystone Pipeline”?
The project’s full name is “The Keystone XL Pipeline”; “The Keystone”; or, simply, “Keystone”, and will be referred to as such herein. Keystone is an intended sequential arrangement of individual large pieces of pipeline intended to be built and placed in four (4) states within the United States of America: Montana, South Dakota, Nebraska, and Kansas. Keystone’s contact with Kansas is limited relative to Keystone’s contact with Montana, South Dakota, and Nebraska. Given recent litigation and unusual and remarkable political events in Nebraska relating to Keystone, special attention is paid herein to Nebraska’s interrelationship with Keystone. (Please see below).
2. Which Countries Are Involved?
Canada, and the United States. Canadians including Prime Minister Stephen Harper - and Jim Prentice, former Canadian Conservative Party official touted as a possible Harper successor - are seeking to pressure the Obama Administration to approve the Keystone. USA Today For The Journal News, Tuesday, February 18, 2014. However, no commentator has suggested that Canada has the power to compel America’s decision on Keystone. Canada has roughly one-tenth of America’s population and probably one-one hundredth of America’s political power in the world theatre. (No citation for the latter - that’s just my estimate).
3. Which Companies Are Involved?
“TransCanada” is the company seeking to build the Keystone. TransCanada is a pipeline and energy company based in Calgary, Alberta, Canada; the pipeline would originate in the municipality of Hardisty, Alberta, Canada. Hardisty is a town in Flagstaff County in east-central Alberta, 111 kilometers (69 miles) from the Saskatchewan border, near the crossroads of Highway 13 and Highway 881, in the Battle River Valley. Hardisty is mainly known as a pivotal petroleum industry hub where petroleum products such as Western Canada Select blended crude oil and Hardisty heavy oil are produced and traded. http://en.wikipedia.org/wiki/Hardisty