One of the most common Internet memes about climate is the idea that the earth hasn’t warmed since 1998. This erroneous claim is based on cherry picking data and ignoring the increase in the heat content of the oceans, which is where most of the historical warming has been stored. This issue is explored fully in this section of the website Skeptical Science, as well as another page that explores the right and wrong ways to understand trends, but I recently saw a nice graph that boils it all down.
The graph was in the recently released National Climate Assessment, but it was on p. 796, in Appendix 4 (talk about burying the lede!). It shows average global surface temperatures by decade over time, indicating that the 2000s were hotter than the 1990s, which were hotter than the 1980s, which were hotter than the 1970s. It shows clearly to anyone who can read a graph why global warming didn’t stop in 1998. For those still perpetuating this falsehood, please find another hobby.
Figure caption: The last five decades have seen a progressive rise in Earth’s average surface temperature. Bars show the difference between each decade’s average temperature and the overall average for 1901 to 2000. The far right bar includes data for 2001-2012. (Figure source: NOAA NCDC). National Climate Assessment, p.796, Figure 7 in Appendix 4.
I lectured in Leslie’s class at her invitation this past school year, sharing the class with my old friend Terry Root.
Here’s the first paragraph:
Dr. Jonathan Koomey’s 199-page gem of a book provides a brilliant, targeted and concise analytical basis for the evaluation and development of entrepreneurial understanding and opportunities in the climate space. The book is meant to give time-strapped ecological entrepreneurs a scientific grounding in what opportunities and constraints arise from the numerous and growing problems that stem from climate change – and this reviewer has also used it twice as an introductory text for the graduate-level seminar class she started and teaches on Engineering and Climate Change at Stanford.
And here’s the last paragraph:
Don’t think you’ve read the full story about this deceptively accessible book in this short review. The meticulously researched figures, the examples, the tables and appendices are packed full of careful detail that can help an entrepreneur, an innovator, a concerned citizen, to get going rapidly to develop his or her own breakthrough contributions to the large collection of approaches that must be imagined and evaluated and selected among, in order to accelerate our progress toward the kind of future that we would be happy to leave to our children, and to theirs. Respectfully, it’s time to get going.
It’s nice to have the book being used for the purpose for which it was intended, inspiring the next generation of entrepreneurs to tackle the climate problem, which is the biggest collective (and adaptive) challenge humanity has ever faced.
Knovel just published my latest white paper, titled Climate Change as an Adaptive Challenge (it’s a free download, but you’ll need to type in some info about yourself before downloading). The blog post below summarizes the argument. The white paper itself contains the latest data demonstrating the case for urgent action, and it ties in nicely to the recently released National Climate Assessment.
Introduction
In their new book Moments of Impact, Chris Ertel and Lisa Kay Solomon, citing Ronald Heifetz, describe two types of challenges:
• Technical challenges are those we can solve using well-known techniques and tools. Such problems are well defined and well understood.
• Adaptive challenges, on the other hand, are “messy, open-ended, and ill defined”. The tools needed to address them may not yet exist. Such problems require different kinds of leadership and problem-solving skills, and cry out for interactive engagement among all the people needed to solve them.
Ertel and Solomon write:
“It’s nearly impossible for any one senior executive–or small leadership team–to solve adaptive challenges alone. They require observations and insights from a wide range of people who see the world and your organization’s problems differently. And they require combining those divergent perspectives in a way that creates new ideas and possibilities that no individual would think up on his or her own.” [1, page 10]
Climate change is the ultimate adaptive challenge, because the rate and scope of the changes needed to solve the problem will stretch us to the limit. In addition, the solutions must involve changes in behavior and institutional structure, not just technology, because the problem is so pressing. As I argued in Cold Cash, Cool Climate,
“Climate change is probably the biggest challenge modern humanity has ever faced. It’s bigger than World War II, because it will take decades to vanquish this foe. It’s harder than ozone depletion, whose causes were far less intertwined with industrial civilization than fossil fuels and other sources of greenhouse gases. And it’s more intractable than the Great Depression (or our current economic malaise) because financial crises eventually pass, assuming we learn from past mistakes and fix the financial system (again!).” [2]
We have many options for reducing greenhouse gas emissions, but we’ll need new ones, too. Existing options will only get us so far. That’s why we’ll have to take an evolutionary approach to this problem, one that embraces the adaptive nature of the challenge before us.
As Ertel and Solomon argue, tackling adaptive challenges requires “strategic conversations” that help define the problem and generate innovative ideas for solving it. The first step in creating such a strategic conversation is to understand the challenge before us, and this blog post and the associated research note (LINK) are intended to foster such understanding.
The case for urgent action
The case for concern about rising greenhouse gas (GHG) concentrations is ironclad, and the graphics in the white paper show one compelling way to describe that case. We’re on track for more than two doublings of greenhouse gas concentrations by 2100 when all warming agents are included (see Figure 1). Combined with an expected warming of about 3 Celsius degrees per doubling of GHG concentrations (the climate sensitivity), that implies a warming commitment of about 6 Celsius degrees on our current path (the 5.5 Celsius degree warming calculated by MIT is lower because the climate takes many centuries to equilibrate and thus to fully reflect the effects of changes in concentrations).
Figure 1: Carbon dioxide equivalent concentrations for the past 800,000 years and projected to 2100 assuming no change in policies, including other warming gases
Sources: CO2 concentration data before year zero taken from a composite record produced by NOAA [ftp://ftp.ncdc.noaa.gov/pub/data/paleo/icecore/antarctica/epica_domec/edc-co2-2008.txt], based on [3, 4, 5, 6, 7]. CO2 concentration data from years zero to 1958 taken from a composite record produced by NOAA [ftp://ftp.ncdc.noaa.gov/pub/data/paleo/icecore/antarctica/law/law2006.txt], based on [8, 9, 10, 11, 12, 13, 14, 15, 16, 17]. Concentrations for years 1959 to 2012 are taken from Keeling et al. [18] and Tans [19]. MIT no-policy case concentrations taken from Sokolov et al. [20]. Negative numbers indicate years BC. Note that y-axis begins at 100 parts per million by volume.
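For readers who want to check the arithmetic in the paragraph above Figure 1, here is a minimal sketch; the concentration values and the 3-degree sensitivity below are illustrative round numbers, not outputs of the MIT model.

```python
import math

# Illustrative round numbers, not the MIT no-policy case outputs.
preindustrial_co2e_ppm = 280.0   # approximate pre-industrial CO2-equivalent concentration
projected_co2e_ppm = 1200.0      # roughly "more than two doublings" by 2100, all warming agents included
climate_sensitivity_c = 3.0      # expected warming (Celsius degrees) per doubling of concentrations

# The number of doublings is the base-2 logarithm of the concentration ratio.
doublings = math.log2(projected_co2e_ppm / preindustrial_co2e_ppm)

# Equilibrium warming scales roughly linearly with the number of doublings.
warming_commitment_c = climate_sensitivity_c * doublings

print(f"Doublings by 2100: {doublings:.1f}")
print(f"Approximate equilibrium warming commitment: {warming_commitment_c:.1f} Celsius degrees")
```

The warming actually realized by 2100 in projections like MIT’s is lower than this equilibrium figure because the oceans delay the climate’s full response by centuries.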
The graphs in my white paper show a dramatic shift in the climate system caused by human activity, one that has no precedent in human history. We need to keep a significant fraction of proved fossil fuel reserves in the ground if we’re to stabilize the climate and avoid these changes. It’s hard to imagine a starker adaptive challenge for humanity, but it’s one that we must confront if we hope to leave a livable world for our descendants.
What can we do?
To meet the climate challenge we’ll need rapid GHG emission reductions in the next few decades. This conclusion is inescapable because it’s cumulative emissions that matter, due to the long lifetime of many greenhouse gases. If we want to prevent global temperatures from increasing more than 2 Celsius degrees, we have a fixed emissions budget over the next century. If we emit more now we’ll have to reduce emissions more rapidly later, so delaying action (either to gather more data or to focus on energy innovation) is foolish and irresponsible. If energy technologies improved as fast as computers there might be an argument for waiting under some circumstances, but they don’t, so it’s a moot point.
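To make the cumulative-emissions point concrete, here is a toy carbon-budget calculation; the budget and emissions figures are hypothetical, chosen only to illustrate the arithmetic, not taken from the white paper.

```python
def years_to_reach_zero(budget_gt, annual_emissions_gt, delay_years):
    """Years available to ramp emissions linearly to zero after a delay.

    Assumes emissions hold constant during the delay, then decline linearly.
    Cumulative emissions under a linear ramp of length T are E * T / 2, so
    the ramp that exactly exhausts the remaining budget has T = 2 * remaining / E.
    """
    remaining = budget_gt - annual_emissions_gt * delay_years
    if remaining <= 0:
        return 0.0
    return 2.0 * remaining / annual_emissions_gt

BUDGET = 800.0    # hypothetical remaining emissions budget for a 2 Celsius degree limit, GtCO2
EMISSIONS = 40.0  # hypothetical current annual emissions, GtCO2 per year

for delay in (0, 10, 20):
    years = years_to_reach_zero(BUDGET, EMISSIONS, delay)
    print(f"Start cutting after {delay:>2} years -> about {years:.0f} years to ramp down to zero")
```

With these hypothetical numbers, a ten-year delay cuts in half the time available to ramp emissions down to zero, and a twenty-year delay eliminates it entirely.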
Of course, we need new technologies and should therefore invest heavily in research and development, but there are vast opportunities for emission reductions using current technologies, and cost reductions for these technologies are dependent on implementing them on a large scale (learning by doing only happens if we do). So the focus in the next few decades should be on aggressive deployment of current low-emissions technologies, bringing new technologies into the mix as they emerge.
The changes we need are so large that no part of the economy will remain untouched, and that means opportunity. In fact, we’ll probably need to scrap some capital in the energy sector, given the rate of emissions reductions that will be required to maintain a livable climate. Entrepreneurs can lead the way by designing new low-emission products, services, and institutional arrangements that are so much better than what they replace that people are eager to adopt them (and to scrap some of their high emitting existing capital along the way).
Emissions reduction opportunities start by focusing on the tasks we want to accomplish and associating those tasks with flows of energy, emissions, and costs, which you then work to minimize. This focus on tasks frees you from the constraints of how they are currently accomplished and allows you to capture compounding resource savings upstream. By considering the whole system and designing to approach theoretical limits of efficiency, it is often possible to achieve drastically reduced emissions while also improving other characteristics of products or services substantially.
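Here is a small sketch of the compounding-upstream-savings point; the stages and efficiencies are hypothetical, meant only to show how end-use savings multiply as you move up the supply chain.

```python
# Hypothetical efficiencies for each upstream stage of an electricity supply chain.
stage_efficiencies = {
    "power plant (fuel to electricity)": 0.35,
    "transmission and distribution": 0.93,
    "motor and drivetrain at the point of use": 0.75,
}

def primary_energy_for(useful_energy):
    """Primary energy needed to deliver a given amount of useful end-use energy."""
    energy = useful_energy
    # Work backward up the chain: each stage's losses inflate the requirement.
    for efficiency in stage_efficiencies.values():
        energy /= efficiency
    return energy

saved_at_end_use = 1.0  # one unit of useful work avoided by a better design
print(f"Primary energy avoided upstream: {primary_energy_for(saved_at_end_use):.1f} units")
```

With these assumed efficiencies, each unit of useful work avoided at the end use saves roughly four units of primary energy upstream, which is why designing around the task rather than the existing equipment compounds savings so powerfully.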
Information and communication technology (ICT) is accelerating the rate of innovation throughout the economy, and that development has implications for business opportunities in this space. ICT speeds up data collection, helps us manage complexity, allows us to restructure our institutions more easily, and lets us rapidly learn and adapt to changing circumstances. It also creates a continuously renewable source of emissions reductions, and is a great place to look for opportunities because it generally offers rapid speed to market and low startup costs.
When considering the climate issue, we can’t avoid the issue of institutional governance. Government has an essential role to play in defining property rights, enforcing contracts, and internalizing external costs. No other institution can do these things, so we need to ensure that these tasks are performed in a way that leads to the kind of society we want. When it comes to government, more is not better. Less is not better. Only better is better. And better is what we as a society should strive for.
Conclusions
Surviving this stage of human development means we’ll need to evolve as a species to learn how to face adaptive challenges like this one. We’ll need to foster rapid innovation, fierce competition, and active coordination between businesses, all at the same time. We’ll also need to change how we think about our responsibilities to each other, to the earth, and to future generations. Innovations in our values can be as powerful as those for new technologies in opening up new possibilities for the future, and these we also need to explore.
The technology now exists for us to move past combustion in most applications, but scaling it up to meet the demands of a modern industrial society won’t be easy. Of course, not doing so will be harder still, because of the damages unrestricted climate change will inflict on the earth and on human society. It’s long past time to get started. There’s simply no more time to waste.
Author’s biography
Jonathan Koomey is a Research Fellow at the Steyer-Taylor Center for Energy Policy and Finance at Stanford University. He worked for more than two decades at Lawrence Berkeley National Laboratory and has been a visiting professor at Stanford University (2003-4 and Fall 2008), Yale University (Fall 2009), and UC Berkeley’s Energy and Resources Group (Fall 2011). He was a lecturer in management at Stanford’s Graduate School of Business in Spring 2013. Dr. Koomey holds M.S. and Ph.D. degrees from the Energy and Resources Group at UC Berkeley, and an A.B. in History of Science from Harvard University. He is the author or coauthor of nine books and more than 200 articles and reports, and is one of the leading international experts on the economics of reducing greenhouse gas emissions, the effects of information technology on resource use, and the energy use and economics of data centers. He’s the author of Turning Numbers into Knowledge: Mastering the Art of Problem Solving (which has been translated into Chinese and Italian) and Cold Cash, Cool Climate: Science-Based Advice for Ecological Entrepreneurs (both from Analytics Press).
3. Indermühle, A., E. Monnin, B. Stauffer, T.F. Stocker, and M. Wahlen. 1999. "Atmospheric CO2 concentration from 60 to 20 kyr BP from the Taylor Dome ice core, Antarctica." Geophysical Research Letters. vol. 27, pp. 735-738.
4. Lüthi, D., M. Le Floch, B. Bereiter, T. Blunier, J.-M. Barnola, U. Siegenthaler, D. Raynaud, J. Jouzel, H. Fischer, K. Kawamura, and T.F. Stocker. 2008. "High-resolution carbon dioxide concentration record 650,000-800,000 years before present." Nature. vol. 453, no. 7193. 2008/05/15. pp. 379-382. [http://www.nature.com/nature/journal/v453/n7193/suppinfo/nature06949_S1.html]
5. Monnin, E., A. Indermühle, A. Dällenbach, J. Flückiger, B. Stauffer, T.F. Stocker, D. Raynaud, and J.-M. Barnola. 2001. "Atmospheric CO2 concentrations over the last glacial termination." Science. vol. 291, pp. 112-114.
6. Petit, J.R., J. Jouzel, D. Raynaud, N.I. Barkov, J.-M. Barnola, I. Basile, M. Bender, J. Chappellaz, M. Davis, G. Delaygue, M. Delmotte, V.M. Kotlyakov, M. Legrand, V.Y. Lipenkov, C. Lorius, L. Pépin, C. Ritz, E. Saltzman, and M. Stievenard. 1999. "Climate and atmospheric history of the past 420,000 years from the Vostok ice core, Antarctica." Nature. vol. 399, pp. 429-436.
7. Siegenthaler, U., T.F. Stocker, E. Monnin, D. Lüthi, J. Schwander, B. Stauffer, D. Raynaud, J.-M. Barnola, H. Fischer, V. Masson-Delmotte, and J. Jouzel. 2005. "Stable Carbon Cycle-Climate Relationship During the Late Pleistocene." Science. vol. 310, pp. 1313-1317.
8. Etheridge, D.M., L.P. Steele, R.J. Francey, and R.L. Langenfelds. 1998. "Atmospheric methane between 1000 A.D. and present: evidence of anthropogenic emissions and climatic variability." Journal of Geophysical Research. vol. 103, pp. 15979-15996.
9. Etheridge, D.M., L.P. Steele, R.L. Langenfelds, R.J. Francey, J.-M. Barnola, and V.I. Morgan. 1996. "Natural and anthropogenic changes in atmospheric CO2 over the last 1000 years from air in Antarctic ice and firn." Journal of Geophysical Research. vol. 101, pp. 4115-4128.
10. Ferretti, D.F., J.B. Miller, J.W.C. White, D.M. Etheridge, K.R. Lassey, D.C. Lowe, C. MacFarling Meure, M.F. Dreier, C.M. Trudinger, and T.D. van Ommen. 2005. "Unexpected Changes to the Global Methane Budget over the last 2,000 Years." Science. vol. 309, pp. 1714-1717.
11. Langenfelds, R. L., P. J. Fraser, R. J. Francey, L. P. Steele, L. W. Porter, and C. E. Allison. 1996. "The Cape Grim air archive: The first seventeen years, 1978-1995." In Baseline Atmospheric Program Australia. Edited by R. J. Francey, A. L. Dick and N. Derek. Melbourne: Bureau of Meteorology and CSIRO Division of Atmospheric Research. pp. 53-70.
12. Langenfelds, R.L., P.J. Fraser, L.P. Steele, and L.W. Porter. 2004. "Archiving of Cape Grim Air." In Baseline Atmospheric Program Australia. Edited by J. M. Cainey, N. Derek and P. B. Krummel. Melbourne: Bureau of Meteorology and CSIRO Atmospheric Research. pp. 48.
13. Langenfelds, R.L., L.P. Steele, M.V. Van der Schoot, L.N. Cooper, D.A. Spencer, and P.B. Krummel. 2004. "Atmospheric methane, carbon dioxide, hydrogen, carbon monoxide and nitrous oxide from Cape Grim flask air samples analysed by gas chromatography." In Baseline Atmospheric Program Australia. Edited by J. M. Cainey, N. Derek and P. B. Krummel. Melbourne: Bureau of Meteorology and CSIRO Atmospheric Research. pp. 46-47.
14. MacFarling Meure, C. 2004. The natural and anthropogenic variations of carbon dioxide, methane and nitrous oxide during the Holocene from ice core analysis. Thesis, University of Melbourne.
15. MacFarling Meure, C., D. Etheridge, C. Trudinger, P. Steele, R. Langenfelds, T. van Ommen, A. Smith, and J. Elkins. 2006. "The Law Dome CO2, CH4 and N2O Ice Core Records Extended to 2000 years BP." Geophysical Research Letters. vol. 33, no. 14. pp. L14810. [http://dx.doi.org/10.1029/2006GL026152]
16. Sturrock, G. A., D. M. Etheridge, C. M. Trudinger, P. J. Fraser, and A. M. Smith. 2002. "Atmospheric histories of halocarbons from analysis of Antarctic firn air: Major Montreal Protocol species." Journal of Geophysical Research: Atmospheres. vol. 107, no. D24. pp. 4765. [http://dx.doi.org/10.1029/2002JD002548]
17. Trudinger, C. M., D. M. Etheridge, P. J. Rayner, I. G. Enting, G. A. Sturrock, and R. L. Langenfelds. 2002. "Reconstructing atmospheric histories from measurements of air composition in firn." Journal of Geophysical Research: Atmospheres. vol. 107, no. D24. pp. 4780. [http://dx.doi.org/10.1029/2002JD002545]
18. Keeling, R.F., S.C. Piper, A.F. Bollenbacher, and J.S. Walker. 2009. Atmospheric CO2 records from sites in the SIO air sampling network. Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tenn., U.S.A.
20. Sokolov, A.P., P.H. Stone, C.E. Forest, R. Prinn, M.C. Sarofim, M. Webster, S. Paltsev, C.A. Schlosser, D. Kicklighter, S. Dutkiewicz, J. Reilly, C. Wang, B. Felzer, J. Melillo, and H.D. Jacoby. 2009. Probabilistic Forecast for 21st Century Climate Based on Uncertainties in Emissions (without Policy) and Climate Parameters. Cambridge, MA: Massachusetts Institute of Technology (MIT) Joint Program on the Science and Policy of Climate Change. 169. January. [http://globalchange.mit.edu/files/document/MITJPSPGC_Rpt169.pdf]
In this earlier post I discussed why graphene is such an exciting new material for electronics, and now Bloomberg and Mac Rumors have articles about it.
Here’s the intro to the Bloomberg article.
The main battleground between Samsung Electronics Co. and Apple Inc. in the global smartphone market is moving from courtrooms to the laboratory, amid a race for patents on atom-thick technology for the next generation of devices.
Graphene is sort of like the high-tech version of cling wrap. It’s a transparent material that conducts electricity so it can be stretched across glass surfaces of phones or tablets to make them into touch screens. Thinner, stronger and more flexible than current technology, it’s ideal for futuristic gadgets like bendable smartwatches or tablets that fold up into smartphones.
Here’s the intro to the Mac Rumors article, which has a bit more technical detail:
In a world where mobile devices are becoming thinner and thinner and in some cases being worn on the wrist or other parts of the body, graphene may be the wonder material of the future, with properties that make it stronger than steel, more flexible than rubber and more conductive than most metals. As a result, the material could initiate a new wave of innovation in hardware design and manufacturing that may lead to incredibly thin and flexible devices. According to Bloomberg, it also may become the next battlefield for Apple and Samsung.
Graphene is graphite, the material in pencils, arranged in a layer that is one atom thick. The arrangement of the carbon molecules makes the material stronger than steel and even diamonds. It also is flexible, conductive and so transparent that it is nearly invisible to the naked eye. It can be applied to other materials, potentially allowing for the creation of flexible displays and bendable devices.
A friend of mine from college who is a practicing lawyer recently took a continuing legal education class at a local college. For his class project he chose to analyze the Keystone XL pipeline. He consented to me posting most of his memo, with his anonymity preserved and with his conclusions removed.
I found his questions and answers to be balanced and useful, and thought others should have the benefit of reading them as well. The presentation of the jobs impacts probably overestimates what is likely to come to pass, and I think it’s important to describe the errors in the State Department’s analysis (because they are both glaring and consequential), but overall this summary is worth a read, no matter what your conclusions are about the pipeline. It focuses more on the legal issues than many other discussions of this type, so that’s also useful.
On to the memo! It was first sent to me in April 2014. Here are the first few paragraphs:
The following is information which I have collected in connection with my in-class presentation on The Keystone XL Pipeline. I have used a question-and-answer format below, as some newspapers are also prone to doing, so as to summarize and speak to the complex and multi-dimensional issues presented by this landmark situation unprecedented in the annals of American environmental law and policy, as far as I am aware.
1. What Is The “Keystone Pipeline”?
The project’s full name is “The Keystone XL Pipeline”; “The Keystone”; or, simply, “Keystone”, and will be referred to as such herein. Keystone is an intended sequential arrangement of individual large pieces of pipeline intended to be built and placed in four (4) states within the United States of America: Montana, South Dakota, Nebraska, and Kansas. Keystone’s contact with Kansas is limited relative to Keystone’s contact with Montana, South Dakota, and Nebraska. Given recent litigation and unusual and remarkable political events in Nebraska relating to Keystone, special attention is paid herein to Nebraska’s interrelationship with Keystone. (Please see below).
2. Which Countries Are Involved?
Canada, and the United States. Canadians including Prime Minister Stephen Harper - and Jim Prentice, former Canadian Conservative Party official touted as a possible Harper successor - are seeking to pressure the Obama Administration to approve the Keystone. USA Today For The Journal News, Tuesday, February 18, 2014. However, no commentator has suggested that Canada has the power to compel America’s decision on Keystone. Canada has roughly one-tenth of America’s population and probably one-one hundredth of America’s political power in the world theatre. (No citation for the latter - that’s just my estimate).
3. Which Companies Are Involved?
“TransCanada” is the company seeking to build the Keystone. TransCanada is a pipeline and energy company based in Calgary, Alberta, Canada - from the municipality of Hardisty, Alberta, Canada. Hardisty, Alberta is a town in Flagstaff County in Alberta, Canada. It is located in east-central Alberta, 111 kilometers (69 miles) from the Saskatchewan border, near the crossroads of Highway 13 and Highway 881, in the Battle River Valley. Hardisty is a town which is mainly known as a pivotal petroleum industry hub where petroleum products such as Western Canada Select blended crude oil and Hardisty heavy oil are produced and traded. http://en.wikipedia.org/wiki/Hardisty
Rick Piltz at Climate Science Watch has compiled a list of resources related to the recently released US National Climate Assessment. This landmark document summarizes the climate impacts the US has felt so far and those expected in the future if we don’t change our direction. I hope and expect that it will be influential in changing the debate on this topic. The Administration seems to be discussing the report with unexpected vigor, which is a welcome development.
Here are some key paragraphs from the overview, which describes the scope of the report:
This National Climate Assessment collects, integrates, and assesses observations and research from around the country, helping us to see what is actually happening and understand what it means for our lives, our livelihoods, and our future. The report includes analyses of impacts on seven sectors – human health, water, energy, transportation, agriculture, forests, and ecosystems – and the interactions among sectors at the national level. The report also assesses key impacts on all U.S. regions: Northeast, Southeast and Caribbean, Midwest, Great Plains, Southwest, Northwest, Alaska, Hawai'i and Pacific Islands, as well as the country’s coastal areas, oceans, and marine resources.
Over recent decades, climate science has advanced significantly. Increased scrutiny has led to increased certainty that we are now seeing impacts associated with human-induced climate change. With each passing year, the accumulating evidence further expands our understanding and extends the record of observed trends in temperature, precipitation, sea level, ice mass, and many other variables recorded by a variety of measuring systems and analyzed by independent research groups from around the world. It is notable that as these data records have grown longer and climate models have become more comprehensive, earlier predictions have largely been confirmed. The only real surprises have been that some changes, such as sea level rise and Arctic sea ice decline, have outpaced earlier projections.
What is new over the last decade is that we know with increasing certainty that climate change is happening now. While scientists continue to refine projections of the future, observations unequivocally show that climate is changing and that the warming of the past 50 years is primarily due to human-induced emissions of heat-trapping gases. These emissions come mainly from burning coal, oil, and gas, with additional contributions from forest clearing and some agricultural practices.
Global climate is projected to continue to change over this century and beyond, but there is still time to act to limit the amount of change and the extent of damaging impacts.
This report documents the changes already observed and those projected for the future.
Stanford is the first major university of which I’m aware to divest from coal, but I’m convinced it won’t be the last. Coal is the low-hanging fruit and the natural first target for those seeking to reduce emissions and increase pressure on the fossil fuel industry. Stopping coal export terminals and shutting down existing coal plants are two more natural steps in this process.
As a landmark article in the American Economic Review concluded in 2011, coal- and oil-fired electricity deliver negative net value added to the economy once you incorporate their societal costs. So the US will be better off once we shut down this industry. It will of course take decades, but it’s time to get started.
(Muller, Nicholas Z., Robert Mendelsohn, and William Nordhaus. 2011. “Environmental Accounting for Pollution in the United States Economy." American Economic Review vol. 101, no. 5. August. pp. 1649–1675. [https://www.aeaweb.org/articles.php?doi=10.1257/aer.101.5.1649])
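To make the paper’s “negative net value added” finding concrete, here is a minimal sketch of the accounting logic, using made-up numbers rather than the paper’s actual estimates of value added and gross external damages:

```python
# Made-up numbers for illustration, not the paper's estimates (billions of dollars per year).
value_added = 25.0             # conventional market value added by coal-fired generation
gross_external_damages = 55.0  # monetized health and environmental damages from its pollution

net_value_added = value_added - gross_external_damages
# A negative result means the activity destroys more economic value than it creates.
print(f"Net value added: {net_value_added:+.1f} billion dollars per year")
```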
This year is the beginning of the end for the fossil fuel companies, though exactly how long this transition will take is an open question. Exxon accepted the framing related to stranded assets in late March, which is another major domino to fall. Assuming other major universities and investment funds follow in Stanford’s footsteps, pressure on the coal industry will increase. When markets shift, they do so with terrifying speed, as we found out during the financial crisis 6 years ago.
Acting on a recommendation of Stanford’s Advisory Panel on Investment Responsibility and Licensing, the Board of Trustees announced that Stanford will not make direct investments in coal mining companies. The move reflects the availability of alternate energy sources with lower greenhouse gas emissions than coal.
Stanford University will not make direct investments of endowment funds in publicly traded companies whose principal business is the mining of coal for use in energy generation, the Stanford Board of Trustees decided today.
In taking the action, the trustees endorsed the recommendation of the university’s Advisory Panel on Investment Responsibility and Licensing (APIRL). This panel, which includes representatives of students, faculty, staff and alumni, conducted an extensive review over the last several months of the social and environmental implications of investment in fossil fuel companies.
Stanford's Statement on Investment Responsibility, originally adopted in 1971, states that the trustees’ primary obligation in investing endowment assets is to maximize the financial return of those assets to support the university. In addition, it states that when the trustees judge that "corporate policies or practices create substantial social injury,” they may include this factor in their investment decisions.
In January of 2008, Jim Cramer, in a video at TheStreet.com, recommended that readers buy shares of Bear Stearns. Two months later, he bellowed on his CNBC show, “Mad Money,” that “Bear Stearns is fine!” and “Bear Stearns is not in trouble.” Within days, the bank was nearly insolvent and had been acquired by JPMorgan Chase.
Cramer is well known for his hysterical boosterism of the stocks he likes, but enthusiasm for well-performing companies isn’t unique in business journalism. In 2003, Kimberly Allers, writing in Fortune, described Washington Mutual as “a banking powerhouse” with an “unorthodox retail approach.” In 2006, Fortune headlined an article about Lehman Brothers’s C.E.O., Dick Fuld, “The Improbable Power Broker,” with the subtitle “How Dick Fuld transformed Lehman from Wall Street also-ran to Super-Hot Machine.” In 2007, Neil Weinberg, of Forbes, observed that “Goldman [Sachs] has to stay out ahead of its rivals in trying daring and innovative approaches that push the outer edge of the boundary between what is okay and what may not be.”
Business reporters are supposed to make the complex worlds of finance and commerce intelligible to non-experts. But business journalism generally failed to predict the looming credit collapse, although a few reporters warned of its arrival. Critical stories by Michael Hudson, of the Roanoke Times and the Wall Street Journal, and Gillian Tett, of the Financial Times, drowned in a vat of glimmering C.E.O. profiles and analyst chatter. Business reporters missed opportunities to investigate abusive lending, negligent rating agencies, and dodgy derivatives trading. To critics, they were complicit in the financial crisis and the recession that followed.
Almost a century ago, trains throwing sparks into neighboring fields and forests helped ignite a canonical debate in economics. These railroad sparks sometimes set fire to farms and woodlands. Writing in 1920, Arthur Pigou observed that if the railroads fail to account for these damages, profit-maximizing operating decisions would not be socially optimal. He proposed taxation as a means of aligning private and social interests.
In 1960, Ronald Coase revisited this example in a famous paper titled The Problem of Social Cost. He observed that if property rights are well defined and costless to enforce, private bargaining between railroads and landowners should result in a socially efficient outcome. (Interested readers should see Severin’s post celebrating Coase’s influential insights).
Current debates about transporting oil by rail bring us back to the question of how to internalize this canonical social cost.
Lakoff assumes that people who believe irrational things can actually be convinced, and I’m not sure framing or facts will ultimately matter that much. Many people who can’t accept climate change are surrounded by others who can’t accept it, so that changing their views would involve rejecting their community (or being rejected by it). Ultimately it will come down to political power, where enough people realize that we have to act that we move ahead without the people who can’t accept the reality of climate change.
Framing and language are important, but at a certain point people need to take responsibility for their inability to deal with facts and evidence. You can’t blame it all on framing, as some of the postmodernist people seem to do. I don’t think Lakoff does, but many do. This is why I think it will ultimately come down to political power–we will decide as a society that we’re going to ignore the flat-earthers and deal with the real risks that climate change poses to the continued orderly development of human civilization. Wilber would say that we need to transcend and include the legitimate concerns that some have about the means to solve the problem, and then figure out solutions that address those concerns in some way. For the 20-30% of the population that will never be convinced, there’s nothing to be done, and it’s a waste of time to try at this point. We’re out of time and need to get moving.
Some great discussion of framing with references is in this article: Harjanne, Atte, and Janne M. Korhonen. 2019. “Abandoning the concept of renewable energy.” Energy Policy. vol. 127, 2019/04/01/. pp. 330-340. [http://www.sciencedirect.com/science/article/pii/S0301421518308280]
(The article is open access.)
Framing is a process where meanings are constructed (Benford and Snow, 2000). Originally introduced by Goffman (1974), frames can be considered models of interpretation which enable the organization of experiences and occurrences into communicable sets of shared beliefs and meanings that also guide action (Benford and Snow, 2000). Frames are thus essential for the formation and maintenance of institutional logic as well. Frames and framing have mostly been applied in research on social movements, where the interest typically is the role of frames in inspiring and legitimating actions and for mobilizing resources (Benford and Snow, 2000, Granqvist and Laurila, 2011). Among science and technology studies, Rosenberg’s (1976) idea of “focusing devices” that direct research and policy efforts towards a specific subset of technologies, sometimes at the expense of other subsets, resonates with this idea of a framing process. The nature of framing also includes drawing boundaries between what is included in a shared meaning and what is not, and can result in umbrella constructs (Hirsch and Levin, 1999) that organize various theoretical elements of a field into a meaningfully combined concept.
In this paper, we approach the concept of renewable energy as a socially constructed result of framing in the field of energy policy. Since frames are fundamentally constructed in discursive processes (Benford and Snow, 2000) we focus our attention mainly on language and written documents, although we briefly discuss visual discourse (see, e.g. O’Neill and Smith, 2014) as well. Discourse analysis has been widely used in studying the formation of environmental and energy policies (see, e.g. Hajer and Versteeg, 2005; Jessup, 2010; Cotton et al., 2014), and the role of linguistic framing and discourse in energy policy has been discussed in detail by Scrase and Ockwell (2010), who described how framing may serve to sustain the continuation of existing policy positions. It should be noted that while we rely on theory of framing, we refer to renewable energy also as a “concept”, since we believe that this term is more familiar to most audiences.
The NY Times has an article today describing recent developments in graphene, which is a carbon-based material with amazing properties. Carbon is of course the basis for life on earth, and when emitted into the atmosphere as carbon dioxide it warms the earth (as do other greenhouse gases like nitrous oxide and methane). Graphene is strong, conductive, flexible, and transparent, which gives it many advantages over conventional materials.
Here are a few key paragraphs from the NY Times article:
While the material was discovered a decade ago, it started to gain attention in 2010 when two physicists at the University of Manchester were awarded the Nobel Prize for their experiments with it. More recently, researchers have zeroed in on how to commercially produce graphene.
The American Chemical Society said in 2012 that graphene was discovered to be 200 times stronger than steel and so thin that a single ounce of it could cover 28 football fields. Chinese scientists have created a graphene aerogel, an ultralight material derived from a gel, that is one-seventh the weight of air. A cubic inch of the material could balance on one blade of grass.
“Graphene is one of the few materials in the world that is transparent, conductive and flexible — all at the same time,” said Dr. Aravind Vijayaraghavan, a lecturer at the University of Manchester. “All of these properties together are extremely rare to find in one material.”
We’re still far from widespread commercial application of graphene, but I wanted to point readers to the three best scientific articles of which I’m aware that demonstrate the use of graphene for sensors and supercapacitors (energy storage).
Bogue, Robert. 2012. “Environmental sensing and recent developments in graphene." Sensor Review. vol. 32, no. 1.
Liu, Chenguang, Zhenning Yu, David Neff, Aruna Zhamu, and Bor Z. Jang. 2010. "Graphene-Based Supercapacitor with an Ultrahigh Energy Density." Nano Letters. vol. 10, no. 12. 2010/12/08. pp. 4863-4868. [http://dx.doi.org/10.1021/nl102661q]
Liu, Chang-Hua, You-Chia Chang, Theodore B. Norris, and Zhaohui Zhong. 2014. "Graphene photodetectors with ultra-broadband and high responsivity at room temperature." Nat Nano. vol. advance online publication, 03/16/online. [http://dx.doi.org/10.1038/nnano.2014.31]
One of my colleagues at Stanford was skeptical of using graphene and carbon nanotubes in microprocessors, but he thought the other applications were quite exciting. This material, when combined with recent developments in energy harvesting, will likely accelerate the advent of “Smart Everything”, as we discussed in our recent article in the Annual Review of Environment and Resources.
For those interested in a real world account of using Tesla’s supercharging network to drive long distances, check out my friend Chris Calwell’s blog posts here, here, and here. The superchargers worked well, and as they become more common, it will get even easier. I suspect strongly (though have no inside information about this) that once the network is fully built out Tesla will consider licensing the use of it to other automakers, but this will probably have to wait until Tesla makes its electric car “for the rest of us” in a few years.
The picture above shows Chris Calwell (right) with his Model S at the Tesla Factory in Fremont, CA, on February 6, 2014. Dave Houghton and Gregg Hardy are at left and center, respectively.
As I wrote in Cold Cash, Cool Climate, one of the most important ways to achieve breakthrough innovation is through integrated whole systems design. That means not settling for incremental change, but redesigning devices as whole systems to help them accomplish tasks as well as any human or machine could with current technology.
Amory Lovins, who is the most prominent proponent of this design approach, says wisely that “optimizing parts of a system will pessimize the whole system”. Rocky Mountain Institute has compiled recommendations about integrated design here and here.
I had the pleasure of doing a tour of the Tesla factory in Fremont, CA on February 6, 2014, and I saw this inspirational quotation from Elon Musk about whole systems design and the Tesla Model S. I wasn’t allowed to take a picture of the wall on which it was printed, but I remembered it so vividly that I had my friend Chris Calwell (who drove to California from Colorado using the supercharger network) get back in touch with the person at Tesla who set up the tour to get the exact quote. It’s wonderful!
“With the Model S, our goal was to create the first ‘true’ electric car. By this, I mean the first electric vehicle where every element was considered anew in light of the fundamental change in technology and then designed and engineered as an integrated system. The result is a car that is beyond what people believe a car can be.”
Elon Musk – CEO and Co-founder
It is this spirit of redesigning from the bottom up that all entrepreneurs should embrace. We need innovations that are so much better than what they replace that people will be happy to scrap their old technology to capture those benefits, because that’s the only way we’ll achieve the rate and scope of change we need to truly face the climate challenge. Whole systems integrated design is the way to do just that.
Here’s a picture of me at the wheel of Calwell’s Model S after the tour, thankfully driving at a reasonable speed according to the speedometer. It’s a terrific car. Now that innovation needs to trickle down to cars that ordinary folks can afford!
I had the honor of speaking at a Clean Tech Open event in San Francisco two days ago, a terrific gathering focused on climate and clean tech. More than 60 eager and interested folks were in attendance, and there were lots of great questions from the audience. It was great to reconnect with that group: I was an advisor to the founders of the Clean Tech Open at the very beginning, and served as a judge and a mentor for one year.
My friend Ken Lee is arranging a series of interesting Clean Tech Open events in the East Bay. See list below:
The video for the entire event is now posted (tip of the hat to Dan Reicher at Stanford, who was one of the co-sponsors). Apparently the back and forth between Amory Lovins and Armond Cohen was quite spirited. Amory’s keynote lecture is at 6 hours 20 minutes in.
PS. For those who still think that Three Mile Island was the primary reason for nuclear power’s decline in the US, please email me to get a copy of our Bulletin of the Atomic Scientists article addressing this widely believed but incorrect idea.