Robert Henneke writes a fine article in the Washington Examiner, Trump has a chance to rein in Obama's out-of-control EPA. Henneke is the general counsel and director of the Center for the American Future at the Texas Public Policy Foundation. Excerpts below in italics with my bolds.
The Environmental Protection Agency has sent its replacement for the Clean Power Plan to the White House. We don’t know what’s in it (it won’t be released until the White House has a chance to review it), but we know what should be and what shouldn’t.
The new plan should restore the rule of law to an out-of-control agency. The EPA must abide by the rules set by Congress, particularly in the Clean Air Act, rather than lawlessly assuming authority it doesn’t have, as it did through the Clean Power Plan. The new plan must not repeat the mistakes of the CPP.
Carbon dioxide is the supervillain in the story of global climate change. The EPA declared even naturally occurring CO2 a pollutant in 2009, then sought to regulate it in the Clean Power Plan. Fortunately, the plan was stayed by the Supreme Court before it went into effect, and it remains in legal limbo.
But in the December advance notice of the new rules, the EPA indicated it would repeat some of the same mistakes of the CPP in its new guidelines.
First, EPA is not allowed to regulate greenhouse gas emissions from stationary sources (power plants) under Section 111 of the Clean Air Act. Why not? Because all emissions from such sources are already regulated under Section 112. Regulators don’t get two bites at that apple.
Congress expressly prohibited such overregulation to avoid burdensome, duplicative rules, and it required the EPA to choose only one avenue. But EPA has regulated coal-and-oil-fired electric generation unit emissions under Section 112 since 2000, and in 2012, it began regulating all fossil fuel-fired electric generation unit emissions under that section.
Second, to proceed under Section 111, the EPA is required to make an endangerment finding under the criteria for stationary sources. But there is no endangerment finding – not under the Obama administration and not now. To justify its overreach, the EPA has pointed to the endangerment finding it made in 2009 in connection with mobile source emissions (cars and trucks, etc.) under a different provision of the Clean Air Act (Section 202).
But that endangerment finding simply doesn’t apply. It doesn’t meet the criteria of Section 111, that a pollutant from stationary sources endangers the public health and welfare. Instead, it found that an aggregate of six different greenhouse gases, emitted by mobile sources, is a danger.
Why is this difference important? Section 111 permits regulation only from “a category of sources . . . [which] significantly causes or contributes significantly to air pollution [that endangers health or welfare].” This “significance” requirement is not found in Section 202.
So, the EPA would have an insurmountable task in finding that American power plants “significantly” cause or contribute to the levels of carbon dioxide thought to aggravate climate change. Carbon dioxide is ubiquitous and worldwide in scope, making any such finding fraught with peril. Given emissions of carbon dioxide worldwide, it is highly unlikely that the EPA can specifically point to greenhouse gas emissions from American power plants as a significant cause of endangerment of health or welfare.
Finally, if the EPA is to regulate carbon dioxide emissions from power plants, it must proceed under Section 108 of the Clean Air Act, not Section 111. Section 108 is the regulatory path Congress prescribed for air pollutants in the “ambient air” emitted from “numerous or diverse” sources, while Section 111 is the instrument for emissions from specific source categories that pose local pollution concerns. Carbon dioxide is the very model of a ubiquitous substance emitted into the “ambient air” from “numerous or diverse” sources. EPA cannot short-circuit the regulatory framework hardwired into the Clean Air Act under Section 108 by jumping to another section of the act.
The Clean Power Plan represents the worst of the regulatory abuses of the Obama administration. Its mistakes must not be repeated.
When it comes to the new plan, less is more. Texas serves as the model for success, where a deregulated electricity market has resulted in abundant energy and cleaner power plants, as electricity companies adopt the latest technologies to increase efficiencies and maximize profit, all while producing a cleaner environment.
That’s the way forward for the Clean Power Plan’s replacement.
Energy Sources and the Rise of Civilization. Source: Bill Gates
Richard Rhodes won the Pulitzer Prize for The Making of the Atomic Bomb. He has written many other books, and his new book about energy is called Energy: A Human History.
He takes us on a journey through the history of energy transitions, from wood to coal to oil to renewables and beyond. Some stories are well known, some much less so, with a fascinating set of characters going back in time all the way to Elizabethan England.
It also provides fascinating insights into how energy history can help us understand a possible future energy transition toward a lower-carbon economy, to provide affordable, reliable and sustainable energy for a growing global population. Rhodes shared some transition stories in an interview with Jason Bordoff at Columbia University, a podcast and transcription entitled Richard Rhodes — Energy: A Human History. Excerpts below in italics are from Rhodes unless otherwise indicated, with my light editing, headers, bolds and images.
Jason Bordoff: I think some people may be familiar with more recent energy innovations: electricity obviously, oil, nuclear power. But you really start with animals, with wood, and what it meant for human civilization as we know it to depend on those energy sources, and how transformative it was to then convert initially to coal and then beyond. Talk a little bit about how big a deal that was and what it meant for the human experience as we know it.
From Wood to Coal in Elizabethan England
Richard Rhodes: Well, the story of the transition by the Elizabethan English from wood to coal was one of the most fascinating and in some ways comical, although of course it wasn't comical for them. One of the things that I wanted to do with this book was to tell the human stories that are behind the technologies involved, since so many books on the history of energy focus almost entirely on the technological changes.
The Wood Burning Society
And of course there are vast human stories, because changing from one source of energy to another is as much a social phenomenon as a technical one, perhaps more so. So, the Elizabethans had been cutting down their trees in vast numbers, primarily for firewood for their homes. And they burned firewood usually on stone platforms or fireplaces set against walls that didn't have chimneys.
They liked the smell of wood and they thought that the smoke hardened their rafters, so either there was just a hole in the roof leading straight up from the fireplace, or they let the smoke drift through the rooms and out through the windows. Well, that was fine as long as they had enough wood, but as they cut the wood down farther and farther away from London it got more and more expensive to transport.
Substituting Coal for Wood
So, eventually it reached the point where it was really too expensive for the common people to afford. At that point the only alternative that they had was really smelly bituminous coal from Newcastle, up the country in the northeast, and they didn't like its characteristics compared to wood. First of all, imagine lighting a bituminous coal fire in the middle of your living room with no place for the smoke to go, and imagine the coughing and choking that would come from that. And then on top of that imagine roasting your beef, your good English beef, over a coal fire with all the sulfur that's in coal smoke.
In a way England was just one vast coal mine, made of these layers of coal, which was black and dirty and smelled sulfurous when you burned it: literally the devil's excrement. If the devil had hell down in the center of the earth, this is where his body waste accumulated farther up toward the surface. Well, that obviously didn't endear coal to the populace. So, they really struggled with it, and basically what happened is the rich kept buying wood, which they could afford, and the poor had to find a way to survive with coal, and they hated it.
A New King Adopts Coal
The transition really was a social transition. When Elizabeth died at the beginning of the seventeenth century, around 1603, King James VI of Scotland became the King of England. And he came down to London as James I, and the Scots, who had a much thinner forest up north than the English had, had already switched to coal a long time before. They had been working with coal for a hundred years.
And so Scottish coal was better quality; it didn't have so much sulfur in it. So when the King came to London and started burning coal in the castle, it became fashionable. Well, if the King does it, I suppose we can too, was the result, and after that the transition was much facilitated. In addition, they had to retrofit all the homes that didn't have proper chimneys with chimneys, which is another lesson that has extended across the entire history of energy transitions.
Converting Society to Burn Coal
In this regard it seems so simple: you find a new source of energy when your old one is causing you trouble, and you switch over to it. I mean, that's the way people are talking today about wind and solar and other renewables. But it turns out it takes anywhere from 50 to 100 years to make a full-scale energy transition, because it's not just a matter of the technology at all; it's a matter of all sorts of social and societal changes.
In this case, for example, they had to retrofit all the chimneys. They had to open coal mines and find a way to transport the coal down to London. They had to develop markets where they would sell the coal. And most of all, you had to figure out how to burn it in your home without making the place uninhabitable. So, it took a while. It was not really until the 1650s and the 1660s that coal had really taken hold in England, and then of course they had the problem of air pollution.
Inventing the Coal Industry and Society
Well, just staying with the English: once they started digging coal, they first dug, of course, the superficial layers that tended to outcrop on hillsides. So, they could easily drain their mines just by putting in what they called adits, basically channels for the water to flow out. But as they used up the superficial coal and continued to dig deeper, they began to intersect the water table and the mines began flooding. They tried pumping them out with horses turning what were called whims, which were basically horse-driven pumps.
But that got more and more difficult as the mines continued to deepen; they were going down as far as 800 feet below ground to dig their coal. It's hard to pump water that far with just a couple of horses. So, the solution that they found as time went on (this is now the early-to-middle eighteenth century) was to develop an engine, an early form of steam engine that was very inefficient, less than 1% efficient.
So, it was a big thing the size of a house; it would sit on top of the coal mine opening at the surface and pump out the water, so that the mines could continue to be mined. This was the Newcomen engine, which basically produced a vacuum that allowed atmospheric pressure to rush in and function as a pump. That limited its function to the pressure of the atmosphere, about 32 feet of lift. And therefore there continued to be a desire for innovation, a better way to pump water farther, because you might have, let's say, a 300-foot shaft in a coal mine.
Newcomen Atmospheric Steam Engine
Energy Necessity Calls for Innovation
The only way you could pump with a Newcomen engine would be to put engines every 32 or so feet up and down the shaft, which was not a very efficient idea, especially since coal mines tend to release a certain amount of methane and other gases, and there were lots of explosions that people had to deal with. So, it quickly became apparent that there was a place for a better steam engine. That's where James Watt the Scotsman came along and invented a true steam engine, one that worked by using steam to expand and push the piston back and forth.
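The roughly 32-foot ceiling Rhodes mentions is not arbitrary: a vacuum pump can raise water only as high as atmospheric pressure can push it. A minimal sketch of that calculation, using standard textbook constants rather than any figures from the book:

```python
# Maximum suction lift of an atmospheric (vacuum) engine like Newcomen's:
# the height of a water column whose weight balances atmospheric pressure.
P_ATM = 101_325.0    # standard atmospheric pressure, Pa
RHO_WATER = 1000.0   # density of water, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2

max_lift_m = P_ATM / (RHO_WATER * G)   # h = P / (rho * g)
max_lift_ft = max_lift_m / 0.3048      # convert meters to feet

print(f"Theoretical maximum suction lift: {max_lift_m:.1f} m ({max_lift_ft:.1f} ft)")
# Roughly 10.3 m, about 34 ft in theory; leaky valves and an imperfect
# vacuum brought the practical figure down to the ~32 ft cited above.
```

Watt's true steam engine escaped this limit entirely because steam pressure, not the atmosphere, did the pushing.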
And it could pump as much as its capacity was built to pump. Then of course they had the problem of moving the coal from the mine down to the river or the ocean in order to barge it to London. Again, moving stuff around turns out to be a large part of the problem in dealing with these forms of energy. At first the mines were close enough to the water to simply put the coal on a cart and roll it downhill. They used rails to do that, originally wooden rails, but then they started covering the wooden rails with cast iron plates on top to make them more efficient.
A late version of a Watt double-acting steam engine, built by D. Napier & Son (London) in 1859.
Transformation into a Coal Society
And you know, once you switch from wood to coal, and as the country began to industrialize, particularly with the advent of the steam engine, coal production got more and more enormous. And it wasn't just a matter of heating homes anymore; it became a matter of running factories as well. And you couldn't do that with bags of coal on the saddles of horses; you needed some larger-scale way to move the coal around.
Once the mines were farther back from the valleys where the rivers ran, or the canals as they came to be, you had to find a way to move the material uphill as well as downhill. And horses weren't going to do that job, not at the scale that England was operating by then. So, someone realized that if you had a small steam engine (and by then Watt's engine could be made fairly small) you could mount it on wheels and move the coal with the steam engine, which is essentially a railroad engine.
One of the early steam engine locomotives.
The Coal-Based Society Emerges
This whole story is about how self-reinforcing all of these things were. You need to go deeper and deeper to get the coal for heating and cooking. You develop an innovation like the steam engine to do that, which enables an innovation to transport the coal, and then you use the very energy you were trying to get to power that new innovation for another purpose.
And once it was clear that you could move railroad carts of coal with a steam engine, someone realized that you could move people too. And England suddenly blossomed with railroads all over the country. The canal age was over and the railroad age began, and it all followed from this early transition from wood to coal. And with it came all the industrialization made possible by a stable, reliable source of continual power, something water power had not been and animal power could supply only within limits.
Here was an engine: you fed it coal and it gave you a turning wheel that would turn mills to weave cotton, turn mills to make steel, whatever you needed to do. So, it really was an innovation stage by stage, each kind of piggybacking on the last, really a fascinating transition time in history.
The Downside of Coal Energy
One of the interesting things that's very clear, and it's as clear today in Beijing as it was in London in 1660: the first thing you do is get your energy; you do what you have to do to increase the energy supply to your country or your society. Then, as a kind of luxury good in a way, you start looking at how to reduce the baleful side effects, such as air pollution, that come along with that source of energy.
The first paper published by the newly formed Royal Society of London in 1664 was a study of how to improve the air in London, and it was remarkably similar to today's ideas: move industry into the suburbs, ring the city with plant life, trees. So, this particular writer proposed all sorts of wonderful trees that put out perfume during their flowering season, to be planted in a belt around London.
The King, having just been restored to power after the Roundhead Revolution that had caused his father to be beheaded, was much too busy selling monopolies and refilling his coffers to actually do anything about it. But the point is people were thinking about this prospect, and it was not different from what happened when Pittsburgh at the turn of the century was so filled with coal smoke that from a nearby hill you could barely see the city.
Pittsburgh train station in the 1940s.
Pittsburgh Faces Coal Air Pollution
And that was pretty much true in most American cities up until the 1950s. As they went about cleaning up their air in the early 1950s, there was a proposal by the United States government to share the cost of building the first commercial nuclear power plant in the United States at a place called Shippingport, near Pittsburgh on the river. I talked to the president of Duquesne Light, which was the company that was going to be the private contractor for this power plant.
He said, you know, we sold this power plant to the city council of Pittsburgh as a green technology. People have come to think of nuclear as the devil's excrement, but compared to burning coal, compared to burning what they had available to burn at the time, nuclear was great, with its total absence of carbon production. The past can really inform the present when you look at how things have been done before, why they were done that way, and what lessons they offer us to learn in the process.
Smog in Los Angeles
The stories that I tell are very much intertwined. So let’s jump to Los Angeles in the 1950s when what we now call smog was beginning to be a very serious problem there. The companies that refined oil in and around Los Angeles wanted to do whatever was available to clean up the air pollution, because it was commonly believed that it all was coming from their refineries or from trash burning.
Previously, cities and states had been focused primarily on coal smoke, on smoke and its baleful effects on the atmosphere. Smog was originally smoke and fog, two words combined. But in the '50s in Los Angeles there was this photochemical phenomenon going on in the atmosphere that was making everything look brown. And the question now was, what do you do about that?
Anti-Smog Device.
Dr. Haagen-Smit at Caltech was carrying out an exercise in identifying the perfume essence of ripe pineapple. He had a room full of ripe pineapples, and he was sucking the air in the room through a machine that used liquid nitrogen to freeze out of the air the essential aroma chemicals. It was to Dr. Haagen-Smit that the California county people turned when they asked if he could identify the components of this smog that was in the air.
So, he used the same machinery, but he put the pineapples away, opened the window, and sucked about 30,000 liters of California smog into the room, ran it through his machine, and ended up with a few drops of very nasty, brownish, sticky material, which was the essence of California photochemical smog, and he identified where it came from. Other things, like the refineries and so forth, were certainly a part of it.
But the main component was automobile exhaust, and that gave Los Angeles the beginning of what turned out to be a large national struggle with the automobile manufacturers to get them to put catalytic converters on their automobiles and eventually to get rid of the nitrogen oxides, another component of automobile exhaust that was deadly. I repeat, this is not merely a technical book; this is really a collection of the most amazing human stories.
But Haagen-Smit, who was of course put down by the great laboratories that the automobile companies had turned to to refute his work, stood by his simple experiments. He was a survivor of World War II, so he knew how to make things simple. With his laboratory work he was able to identify what needed to be done, and finally by the 1980s the entire country was dealing with smog by adding catalytic converters to cars.
Whale Oil and Petroleum
It's a truism of the oil industry that petroleum saved the whales. And they say that because one of the main sources of lighting for wealthier people was whale oil, which was pretty expensive, particularly spermaceti, the very lovely refined oil that whales carry in their heads as a way of controlling their buoyancy. By heating and cooling the oil in their heads they can adjust their neutral buoyancy, and therefore don't sink to the bottom or rise to the top unless they want to.
So, these beautiful whales were used to make candles, and they were collected at the rate of 10,000 whales a year at the height of the whaling industry, as Herman Melville beautifully describes it in Moby Dick. But most people couldn't afford whale oil; it was a pretty expensive item. What they actually used (and I, like most people, had never known about this) was something called burning fluid, which was basically the sap of the longleaf pines of the southeastern United States, which could be refined into turpentine, and the turpentine could then be mixed with plain alcohol.
And with a little bit of menthol to sweeten the smell, because burning turpentine is not a great smell, this became something called burning fluid, which is what almost everyone used in their lamps. One particular brand of burning fluid was called kerosene; we know that name from its later application to petroleum, and I'll jump to that in a sec. So most people burned such lamps, or they simply burned cheap tallow candles, which smell like burning beef fat, not a great smell in your home either. Then came the discovery of petroleum in 1859, or rather the discovery of "rock oil" or "coal oil." If you could drill for this stuff and pump it out in vast quantities, you could make all the kerosene you wanted.
There had been petroleum seeps in various parts of the country, particularly one in Pennsylvania, where they tried to use the petroleum by soaking it up in blankets as it floated on the surface of streams where it oozed out from underground, squeezing the blankets out into a jug, and then selling that as liniment to rub on your sores and on your gums, and to swallow as a health item, if you can imagine. Anyway, then Colonel Drake went off to Oil City, as it came to be called, drilled a well, and showed how you could pump oil out of the ground; indeed, some wells would pump it for you, spouting it into the air.
Gasoline, A Dangerous Byproduct, Transforms Society
All of a sudden petroleum was the new stuff, but it wasn't yet the new stuff for powering machinery; nobody had found that use. Its first use for the next 50 years was for lighting, once they figured out how to refine petroleum into what was now called kerosene, or for lubrication. But since the automobile hadn't been invented, the refiners had among their waste products this stuff called gasoline, which was much too volatile to put in a lamp.
The lamp would blow up from the fumes, so they would either pour it out on the ground to evaporate into the air, or they dumped it into the streams and rivers of America in the dark of night, as so much other waste was in those days. Beginning in the 1880s it was becoming a question among oil refiners whether they would kind of run out of possible uses for their stuff. In a way the automobile saved petroleum. It was the automobile that came along just at the turn of the century, and the industry took off.
Beyond the Petroleum Society
Jason Bordoff: You write about the disruptive, unexpected consequences of these innovations. As you said, the whales were being slaughtered by the tens of thousands, and then oil was discovered, which significantly reduced demand for whale oil. And then you wrote about the automobile, and how one of the major problems at the turn of the century was horse populations and horse manure in cities like London and New York. Problems were sort of solved by technological innovations we didn't expect. Does that tell you anything about what's coming around the corner, and maybe the level of humility we should have about our ability to anticipate it?
Richard Rhodes: We're now in the middle of what I think is the largest energy transition in human history. And you know, I've written so much around this subject, and here was the chance to take a look all the way back to what was really the beginning. One of the things that I discovered when I was working on The Making of the Atomic Bomb (in fact one of the reasons I wrote that book) is that we seemed to be, in the early 1980s, at a crossroads where it looked so dangerous for the world, with all the nuclear weapons in the world.
And it seemed to me that if we went back to the beginning and took another look, there might have been alternative pathways that would have led in a safer and better direction. And I thought the same thing might be true for our energy dilemmas of today. So, that's the reason I wrote the book.
I simply say we have to use every available energy source that isn't carbon heavy in order to survive this largest of all energy transitions. But much of the world is just in the process of developing; that is to say, people who have lived for millennia in deep poverty are slowly beginning to see the possibility of moving up to the kind of middle-class lives that we in the United States pretty much take for granted, China being the most obvious example.
So, we have a double problem: increasing the energy usage of large numbers of people around the world while at the same time reducing the carbon content of the energy we use. That is a really big challenge, bigger than people realize. And it means that we're not going to be able to sit down and say, well, nuclear is dangerous because once in a while a nuclear power plant blows up (which is true of any energy source, and particularly unusual in the nuclear world, by the way).
We're going to have to find a way to work with nuclear as well as these other energy sources. You cannot power the world on renewables. The United States is rich enough that, if it really wanted to, it could probably work out a way to run its entire energy economy on renewables, although I don't think it would be a very efficient system; it would be a very expensive system. But the rest of the world doesn't really have that luxury. Right now China has on the drawing boards or in development some 125 nuclear power reactors. They are not even meant to deal with global warming; they are to deal with air pollution. And the Chinese are selling coal to the rest of the world, unfortunately.
So, when Germany, for example, decided to eliminate its nuclear power and go all renewables, it found itself compelled by its own energy demands to increase its use of brown coal, which is the most carbon-producing of all the various kinds of coal. They have actually increased their production of carbon dioxide since they decided to eliminate their nuclear power supply.
The Italians eliminated their nuclear power, so now they buy their electricity from the French, and the French of course are about 80% nuclear, which is pretty hypocritical of the Italians. I mean, this is the kind of discussion that I think we're all going to have to have: swallow hard and look again at nuclear, look again at all the other sources of energy we can think of that are not carbon producing, to deal with what is history's most enormous energy transition yet.
These days the media are full of stories about people setting targets to “decarbonize” the energy sources fueling their societies. Some are claiming (and some have failed notoriously) to achieve zero carbon electrification. We should take a deep breath, step back and rationally consider what is being discussed and proposed.
The History of Energy Transitions
Thanks to Bill Gates we have this helpful graph showing the progress of human civilization resulting from shifts in the mix of energy sources.
Before the 19th century, it is all biomass, especially wood. Some historians think that the Roman Empire collapsed partly because the cost of importing firewood from the far territories exceeded the benefits. More recently, the 1800s saw the rise of coal and the industrial revolution, a remarkable social transformation, along of course with issues of mining and urban pollution. The 20th century is marked first by the discovery and use of oil and later by natural gas. Since the chart shows proportions, it conveys how oil and gas took on greater importance, but in fact the total amount of energy produced and consumed in the modern world has grown exponentially. So energy from all sources, even biomass, has increased in absolute terms.
The global view also hides the great disparity between societies. Advanced societies exploited carbon-based energy to become wealthy and build large middle classes, human resources multiplying the social capital and extending the general prosperity. Those societies have also used their wealth to protect their natural environments to a greater extent.
The 21st Century Energy Concern
Largely due to reports of rising temperatures from 1980 to 2000, alarms were sounded about global warming/climate change, along with calls to stop using carbon-based energy. To understand what “decarbonization” actually means, we have two recent resources that explain clearly what is involved and why we should be skeptical and rationally critical.
First, Master Resource describes how the anti-carbon agenda is now embedded in societal structures. Mark Krebs writes Paris Lives! “Deep Decarbonization” at DOE. Excerpts in italics with my bolds.
Despite President Trump’s announcement that the U.S. would withdraw from the Paris Agreement, the basis of that agreement, “deep decarbonization” through “beneficial electrification,” is proceeding virtually unabated. This is occurring because it serves the purposes of the electric utility industry and its environmentalist allies, e.g., the Natural Resources Defense Council (NRDC).
According to the Paris Agreement, the fundamental strategy for climate stabilization would be by “deep decarbonization” primarily through “beneficial electrification” powered with “clean energy.” But how are these terms defined exactly?
Deep decarbonization: [4] The Paris Agreement’s primary strategy for climate stabilization: an 80% reduction in the global use of fossil fuels to “decarbonize” the world’s energy systems by 2050.
Beneficial Electrification: [5] Shifting consumers’ direct consumption of natural gas, gasoline, and other fossil fuels over to electricity (with the assumption that electricity generation will be dominated by “clean energy”).
Clean Energy: Strictly interpreted, it’s just renewables. And more specifically, renewable electric generation. However, many variant definitions exist. For example, DOE includes nuclear, bioenergy and fuel cells as “clean.” And so-called clean coal also appears to qualify via “carbon capture & sequestration” (CCS) as does natural gas, if it is used as a feedstock to make electricity. Energy efficiency (e.g., “nega-watts”) is also deemed “clean energy” by some.
Think about it: Transitioning to a global clean energy economy means there must be a transition from something. By the process of elimination, about the only energy sources not clean are the direct use of fossil fuels. In addition to natural gas direct use, “not clean energy” also includes gasoline, propane, etc. Regardless, “clean energy” (i.e. electrification) is being put forth as the universal cure without disclosure of side effects. In essence the ‘clean energy’ future striven for by EERE exports environmental impacts to others and at high costs. Such non-climate related impacts are ignored.
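To get a feel for the scale of the “deep decarbonization” target defined above, here is a rough back-of-the-envelope sketch. The baseline year and the assumption of a constant compound rate of decline are mine, not from the Paris Agreement; demand growth, which makes the task harder, is ignored:

```python
# An 80% cut in fossil-fuel use by 2050 implies a sustained compound
# rate of decline. Hypothetical 2020 baseline, constant-rate assumption.
start_year, end_year = 2020, 2050
remaining_fraction = 0.20          # an 80% reduction leaves 20% of the baseline

years = end_year - start_year
# Solve (1 - r)^years = remaining_fraction for the annual decline rate r
annual_decline = 1 - remaining_fraction ** (1 / years)

print(f"Required decline: {annual_decline:.1%} per year for {years} years")
# Roughly 5% per year, every year for three decades, before any
# allowance for growing global energy demand.
```

By comparison, year-over-year changes in global fossil-fuel use have historically been increases, which is part of why the abstract below calls the project unprecedented.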
Whether it’s called regulatory capture, rent seeking or political capitalism, the result is the same: Power accrues to the powerful. In addition to receiving taxpayer funding, advocates of “deep decarbonization” have profited greatly by climate change fear mongering for donations as well as from the deep pockets of Tom Steyer and the like. And now these advocates have officially joined forces with the electric utility industry as evidenced by the recent pact between NRDC and EEI that includes the pursuit of “efficient electrification of transportation, buildings, and facilities.” [21]
In large measure, EERE’s current activities should be viewed as inappropriate subsidies for deep decarbonization via electrification in contravention of President Trump’s proclamation to withdraw from the Paris Agreement. It is also contrary to President Trump’s Executive Order 13783.
Abstract: There are lessons from recent history of technology introductions which should not be forgotten when considering alternative energy technologies for carbon dioxide emission reductions.
The growth of the ecological footprint of a human population about to increase from 7B now to 9B by 2050 raises serious concerns about how to live both more efficiently and with less permanent impact on a finite world. One present focus is the future of our climate, where the level of concern has prompted actions across the world to mitigate emissions of CO2. An examination of successful and failed introductions of technology over the last 200 years generates several lessons that should be kept in mind as we proceed to decarbonize the world economy by 80% by 2050. I will argue that all the actions taken together until now to reduce our emissions of carbon dioxide will not achieve a serious reduction, and in some cases they will actually make matters worse. In practice, the scale and the distinct engineering challenges of the decarbonization project are without precedent in human history. This means that any new technology introductions need to be able to meet the huge implied capabilities. An altogether more sophisticated public debate is urgently needed on appropriate actions that (i) consider the full range of threats to humanity, and (ii) weigh more carefully both the upsides and downsides of taking any action, and of not taking it.
Key Points
Only fossil fuels and nuclear fuels have the ability to power megacities in 2050, when over half of the then 9B people will live in them.
As the more severe predictions of climate change over the last 25 years have simply not materialized, it makes no sense to deploy the more costly options for renewable energy.
Abandoned infrastructure projects (such as derelict wind and solar farms in the Mojave desert) remain in place, inviting mockery of their progenitors.
In this review, I want to concentrate on the measures taken to reduce global emissions of carbon dioxide, and on how lessons from the recent history of technology introductions can inform the decarbonization project. I want to review the last 20 years in particular and see what they portend for the next 40 years, which will take us beyond 2050, the pivotal date in the public discourse. The Royal Commission on Environmental Pollution in 2000 advocated a 60% reduction of carbon dioxide emissions for the UK by 2050. 14 The date was fixed by the response to the enquiry as to when energy from nuclear fusion might supply 10% of the world’s energy needs: not before 2050, so we will need to get there without it. The revision from a 60% to an 80% reduction came from the concern that developed countries should make allowances for developing countries using fossil fuel to escape poverty, i.e., taking the same route to relative affluence as the developed countries did.
We have had over 20 years since the first Earth Summit in Rio de Janeiro in 1992, where 1990 emissions of carbon dioxide were agreed upon as the benchmark for reductions. Before discussing specific technologies, I want to establish the scale of the challenge in engineering, technology, and project delivery terms; this does not include economics, societal attitudes, and the public discourse. I also discuss some engineering fundamentals. I will then summarize the many lessons of technology introductions, the preparation for other global challenges, and finally discuss a realistic way forward.
Scale
It is important to note the scale of the perceived problem. The entire history of modern civilization since the first industrial revolution has been enabled by the burning of fossil fuels. Our mobility, our health and lifestyles, our diet and its variety, our education system (particularly at the higher level), and our high culture would be quite impossible without fossil fuels, which have provided over 90% of the energy consumed on earth since 1800. Today, geothermal, hydro-, and nuclear power, together with the historic biofuels of wood and straw, account for about 15% of our energy use. 18 Even though it is 40 years since the first oil shocks kick-started the modern renewable energy developments (wind, solar, and cultivated biomass), we still get rather less than 1% of our world energy from these sources. Indeed, the rate at which fossil fuel use is growing is seven times the rate at which the low-carbon energies are growing, while the ratio of fossil fuel energy to total energy used has remained unchanged since 1990 at 85%. 19 The call to decarbonize the global economy by 80% by 2050 can now only be described as glib, in my opinion, as the underlying analysis shows it is possible only if we wish to see large parts of the population die from starvation, destitution, or violence for lack of enough low-carbon energy to sustain society.
A further insight into the scale of present-day energy consumption is as follows. In Europe today, we use about 6–7 times as much energy per person per day as was used in 1800, and there are seven times as many people on earth now as then. 18 The energy of 1800 was expended on heating and lighting one room in a house, producing hot water used in that same room, and purchasing local produce and manufactures. The breakdown of today’s energy usage in the UK and Europe 20 shows that this pattern of use persists, but with lighting and central heating of whole buildings. In addition, Europeans today use as much energy per person per day on private motoring as they used in total in 1800, and an equal amount on mobility through public transport: trains, ships, and aeroplanes. Three times the personal consumption of 1800 goes into the manufacture and logistics of the things we consume or use, such as food and manufactured goods.
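The multipliers quoted above compound. A minimal arithmetic sketch, using only the two factors stated in the text (the midpoint value 6.5 is my illustrative choice within the quoted 6–7 range):

```python
# Illustrative arithmetic only; factors taken from the text above.
per_capita_factor = 6.5   # Europeans use ~6-7x the energy per person of 1800
population_factor = 7.0   # ~7x as many people on earth now as in 1800

# If both factors applied together, total energy use would scale as their product:
total_scale_up = per_capita_factor * population_factor
print(f"Total energy use vs 1800: ~{total_scale_up:.0f}x")
```

Roughly a 45-fold increase over 1800 levels, which is the sense in which the scale of the modern energy system dwarfs anything that preceded it.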
Over the next 20 years, the World Bank estimates that the global middle class will rise from 3B to 5B people, on the basis of which BP estimates a further 40% increase in global energy demand, still to be met mainly by fossil fuels. The graph in Fig. 1(a) is on too coarse a scale to show that the total installed renewable energy capacity today is merely equal to the combined capacity of the nuclear power plants shut down in Japan and scheduled to close in Germany, making the challenge of carbon-free energy impossible to meet.
Energy return on investment (EROI)
The debate over decarbonization has focussed on technical feasibility and economics. There is one emerging measure that comes back directly to the engineering and the thermodynamics of energy production. The energy return on (energy) investment is the useful energy produced by a power plant divided by the energy needed to build, operate, maintain, and decommission that plant. The concept owes its origin to animal ecology: a cheetah must get more energy from consuming its prey than it expends on catching it, otherwise it will die. If the animal is to breed and nurture the next generation, the ratio of energy obtained to energy expended has to be higher still, depending on the energy cost of those other activities. Weißbach et al. 23 have analysed the EROI for a number of forms of energy production; their principal conclusion is that nuclear, hydro-, gas- and coal-fired power stations have an EROI much greater than wind, solar photovoltaic (PV), concentrated solar power in a desert, or cultivated biomass: see Fig. 2. In human terms, with an EROI of 1 we can mine fuel and look at it: we have no energy left over. To get a society that can feed itself and provide a basic educational system, the EROI of our base-load fuel needs to exceed 5; for a society with international travel and high culture, we need an EROI greater than 10. The new renewable energies do not reach this last level once the extra energy costs of overcoming intermittency are added in. In energy terms, the current generation of renewable energy technologies alone will not sustain a civilized modern society!
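The threshold argument can be sketched numerically. The plant figures below are illustrative assumptions only, loosely in the spirit of Weißbach et al.'s ranking rather than their published numbers; the threshold levels (1, 5, 10) come from the text:

```python
# Sketch of the EROI threshold argument. Plant values are illustrative
# assumptions, not Weissbach et al.'s published figures.

def eroi(useful_energy_out, energy_invested):
    """Energy return on (energy) investment: useful output divided by the
    energy to build, operate, maintain, and decommission the plant."""
    return useful_energy_out / energy_invested

def societal_level(e):
    """Thresholds from the text: >5 for a basic society, >10 for one
    with international travel and high culture."""
    if e <= 1:
        return "no net energy"
    if e <= 5:
        return "below basic-society threshold"
    if e <= 10:
        return "basic society only"
    return "supports a modern high-culture society"

# Illustrative lifetime energy figures, arbitrary units:
plants = {
    "nuclear":     eroi(75.0, 1.0),
    "hydro":       eroi(35.0, 1.0),
    "gas (CCGT)":  eroi(28.0, 1.0),
    # Buffering intermittency raises the invested term, dragging EROI down:
    "solar PV with storage buffering": eroi(16.0, 4.0),
}
for name, e in plants.items():
    print(f"{name}: EROI = {e:.1f} -> {societal_level(e)}")
```

The point of the sketch is the structure, not the exact numbers: adding the energy cost of storage or back-up to an intermittent source inflates the denominator, which is why buffered renewables fall below the thresholds while dispatchable plants sit comfortably above them.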
Successful new technologies improve the lot of mankind
I have already referred to Watt’s steam engine as a source of energy to improve harvesting, greatly aiding agricultural productivity. Notice too that the windmills of Europe stopped turning: the new source of energy was compact, movable, reliable, available when needed, and of relatively low maintenance. This differential has widened ever since, and recent windmills do not greatly close the gap in practical utility or cost. Later in the 19th century, electricity from steam turbines became available to lighten the darkness, power an increasing range of machinery, and increase the productivity of mankind to the extent seen today when one contrasts an industrial city with a remote off-grid rural community. It is this energy which has underpinned the ability to improve sanitation, transport goods, and enable modern communications and advanced healthcare. During the 20th century, jet engines greatly reduced the time taken to travel between two distant places, and semiconductor technologies have eliminated that time altogether with virtual presence anywhere, anytime. Genetic engineering technologies have greatly sped up the processes of plant breeding, and the recent green revolution means that the larger population of the world is now better fed than ever before. The remaining areas of starvation are universally associated with war and/or bad governance interfering with supply chains.
R&D in new technologies is a good use of public money
Really new technologies often span several existing sectors of private industry that would benefit or lose out if the new technology were introduced: coachmen in the age of buses and trains, pigeon carriers in the age of the telegraph. It is difficult to imagine the rise of electronics to its current pervasive state had governments around the world not supported relevant R&D in the early stages. Today the global R&D budget exceeds $1T, and the public purse contributes much of it. 33 In many advanced countries, there is significant public support of private R&D in the perceived total public interest, an interest that is not the particular focus of any one company in the private sector. The analysis of the origin of Apple’s technologies is an exemplary case of the private capture of public investment. 34
In the last 40 years, public-good R&D on new energy technologies has continued in many countries, giving rise to the first generation of renewables. The support extends well down the development pipeline as well as into the background research, because the private risk of an initial small-scale deployment to test the effectiveness of a new technology is often too high for a single company or consortium to bear. The USA has the most effective ecosystem of innovation in the world, eclipsed only for a brief period in the 1980s by Japan.
Premature roll-out of immature/uneconomic technologies is a recipe for failure
The virtuous role of government funding in R&D is to be contrasted with the litany of recent failures of subsidies in support of the premature roll-out of technologies that are uneconomic and/or immature.
In its prime, the Carrizo Plain installation (S. California) was by far the largest photovoltaic array in the world, with 100,000 1′ × 4′ photovoltaic panels generating 5.2 megawatts at its peak. The plant was originally constructed by ARCO in 1983 and was dismantled in the late 1990s. The used panels are still being resold throughout the world.
In the late 1980s, large-scale farms of windmills and solar panels were installed in the Mojave desert. One can see the square kilometers of green industrial dereliction by googling the phrases ‘abandoned wind farm’ and ‘abandoned solar farm’, respectively. The useful energy generated by these farms was insufficient to pay the interest on the capital and to maintain production. The companies have gone bankrupt, and there is no one left to decommission the infrastructure and return the sites to their pristine condition. The remains are there to be mocked as an infrastructure project gone wrong, and they will remain for decades, a modern version of the hubris of Ozymandias or the builders of the Tower of Babel. It is important to note that some (but not all 36 ) second- and third-generation wind and solar farms in the Mojave desert have fared and are faring better, 37 but the lesson remains: premature roll-out of unready technology is unwise.
The primary problem is the use of public money, i.e., subsidies, to encourage the roll-out; such subsidies have a plethora of unintended consequences in the energy infrastructure sector. During the economic crisis of 2008–9, many subsidies were reduced or withdrawn in the USA, and many small companies went bankrupt. This has continued with subsidy reductions in the UK, Germany, Spain, and elsewhere, with further bankruptcies in the alternative energy sector. 38 Indeed, RENIXX, a stock index of alternative energy companies, lost 80% of its value between 2008 and 2013, although it has since recovered a little of that fall. It is certainly not the place for pension fund investments; yet if the market were mature and stable, a 40-year programme to renew the global energy infrastructure would be exactly the place for pension funds. 39 The reason for these failures so far is that the technologies are uneconomic over their lifecycles and immature in terms of the energy return on their investment (see the section “Energy return on investment (EROI)” above). In China, public subsidies continue, with solar panels being sold at about a 30% loss on the cost of production. 40 That is a political strategy at work rather than an industrial one. In democracies, there is unlikely to be a multiparty, multigovernment consensus lasting the multidecadal timescales implied by major infrastructure change.
There is an unintended and unwanted social consequence of the roll-out of these new technologies. There is ample evidence in the UK of increasing fuel poverty (i.e., households spending over 10% of disposable income on keeping warm in winter) in the regions of wind farm deployment, where higher electricity bills are needed to cover the rent paid to the (usually already rich) landowners. This is a direct reversal of the process whereby cheap energy over the last century lifted a significant fraction of the world’s poor out of poverty. 41 Renewable energy supplements are viewed as socially divisive.
Technology breakthroughs are not pre-programmable
When public commentators such as Thomas L Friedman enter the debate about energy technologies, they urge more research to produce a breakthrough energy technology, in his case a ‘plentiful supply of clean green cheap electrons’. 43 It is salutary to realize that all but two of the energy technologies used today have counterparts in biblical times, the only newcomers being nuclear energy and solar photovoltaics. The delivery of coal, gas, wind, water, and solar energy may be quite different today, but the underlying principles of operation have not changed. Since nuclear fusion was first demonstrated, there has been a 60-year effort to tame it as a source of electrical energy, so far without success. One can ask the experts whether they might have made more progress with more money, but the challenges have remained profound. Even if there were a breakthrough tomorrow in the basic processes, it would still take of order 40 years (rather than 20, in my opinion) to complete the further engineering and technology work and deploy fusion reactors able to provide, say, 10% of the world’s electricity. We must get to 2050 without it.
Finance is limited, so actions at scale must be prioritized
The sum involved in renewing the energy infrastructure in the UK is about £200B over the next decade. 46 A large element of this cost is to make good the lack of infrastructure investment over the 20 years since a privatized energy market was introduced. In addition, the large-scale modification of the grid to cope with multiple renewable energy inputs has to be included. There remains a dispute in the public domain as to where these costs lie. The grid as we know it in major industrial countries has evolved over a period of 100 years on the basis of relatively few large sources of energy connected to a grid which circulates power to substations, from which it is transmitted to individual end-users in a broadcast mode. With multiple small and independent sources of energy from wind and solar installations, the grid topology has to change to cope with this very different quality of energy. The conventional suppliers of energy say that they should not have to cover these extra costs, which should be book-kept with the renewable energies in the overall balance sheet of costs. A similar book-keeping problem arises with the cost of back-up for intermittent renewable energies. The combined-cycle gas turbine generators that have delivered base-load electricity to the grid in Germany are now being asked to act in back-up mode, with frequent acceleration and deceleration of the turbines, a duty for which they were not designed and which shortens their in-service lifetime. 47 The cost analyses of future energy are bedevilled by the assignment of such consequential costs. In practice, as consumers we buy the energy delivered as electricity, blind to the particular way it is produced.
The scale of the costs of these energy bills is such that one cannot make mistakes in infrastructure investment decisions. A wrong investment is a missed opportunity on a large scale.
Finally, it is as well to remember that there are only ever two sources of payment, the consumer and the taxpayer, and the only issue at stake between them is the directness with which costs are recovered.
The way forward
It is surely time to review the current direction of the decarbonization project, which can be taken to start in about 1990, the reference point from which carbon dioxide emission reductions are measured. No serious inroads have been made into the lion’s share of energy that is fossil fuel based. Some moves represent total madness. The closure of all but one of the UK aluminium smelters that used gas-fired electricity (because of rising electricity costs from green tariffs over and above any global background fossil fuel energy costs) reduces our nation’s carbon dioxide emissions. 62 However, the aluminium is now imported from China, where it is made with more primitive coal-based sources of energy, making the global emissions problem worse! While the UK prides itself on reducing indigenous carbon dioxide emissions by 20% since 1990, the attribution of carbon emissions by end use shows a 20% increase over the same period. 63
It is also clear that we must de-risk all energy infrastructure projects over the next two decades. While the level of uncertainty remains high, the ‘insurance policy’ justification of urgent large-scale intervention is untenable: one does not pay premiums that would bankrupt the policyholder. Certain things we do not insure against, such as a potential future mega-tsunami, 64 a supervolcano, 65 or indeed a meteor strike, even though there have been over 20 of these since 2000, each with the local power of the Hiroshima bomb! 66 Spending a significant fraction of global GDP to possibly capture the benefits of a possibly less troublesome future climate leaves more urgent actions undone.
Two important points remain. The first is that there is no alternative to business as usual carrying on, with one caveat expressed in the following paragraph. Since energy use has a cost, it is normal business practice to minimize energy use, by increasing energy efficiency (see especially the recent improvement in automobile performance), 67 using less resource material and more effective recycling. These drivers have become more intense in recent years, but they were always there for a business trying to remain competitive.
The second is that, over the next two decades, the single place where the greatest impact on carbon dioxide emissions can be achieved is personal behaviour; its potential dwarfs that of new technology interventions. Within the EU over the last 40 years there has been a notable change in public attitudes and behaviour in such diverse arenas as drinking and driving, smoking in confined public spaces, and driving without a seatbelt. If society came to regard the profligate consumption of materials and resources, including all forms of fuel and electricity, as deeply antisocial, it has been estimated that we could live at something like our present standard of living on half the energy we use today in the developed world. 68 This would mean fewer miles travelled, fewer material possessions, shorter supply chains, and less use of the internet. While there is no public appetite to follow this path, the short-term technology-fix path is no panacea.
Conclusions
Over the last 200 years, fossil fuels have provided the route out of grinding poverty for many people in the world (though still less than half of all people), and Fig. 1 shows that this trend is certain to continue for at least the next 20 years based on the technologies of scale available today. Rapid decarbonization is simply impossible over the next 20 years unless the trend of growing numbers succeeding in improving their lot is stalled by rich and middle-class people downgrading their own standard of living. The current backlash against subsidies for renewable energy systems in the UK, EU, and USA is a sign that all is not well with current renewable energy systems in meeting the aspirations of humanity.
Finally, humanity is owed a serious investigation of how we have gone so far with the decarbonization project without a serious challenge in terms of engineering reality. Have the engineers been supine, lacking the courage to challenge the orthodoxy? Or have their warnings been too gentle, dismissed, or simply not heard? Scientists and politicians can take too much comfort from the undoubted engineering successes of the last 200 years. When the sums at stake are on the scale of 1–10% of the world’s GDP, this is a serious business.
Figure 12: Figure 9 with Y-scale expanded to 100% and thermal generation included, illustrating the magnitude of the problem the G20 countries still face in decarbonizing their energy sectors.
Hydraulic fracturing (AKA “fracking”) is in the news every day, often disparagingly, despite the great benefits the process has bestowed on the nations applying it, especially the US.
On a recent river cruise I found myself at a table with a couple from California, and the woman began spouting about the dangers and horribleness of fracking. My civility censor was suppressed by the wine I’d consumed, and I interrupted to say she was talking bullshit. She halted, then asked her husband, a retired geologist, to comment, and he stated that fracking is a risky business. He did not present any evidence for this view; in my opinion, he was only speaking in support of his spouse. I said I respected his opinion but still disagreed. The next day I apologized for my rudeness but said I still thought she had been misled. We shared a congenial dinner later on, but avoided the subject.
The experience revealed that I had been unprepared to engage on the details of the fracking issue. So this post summarizes some research to assemble persuasive facts and resources to counter the fear mongering on this subject.
Originated at treehugger.com.
1. Obama’s EPA Found Fracking Has Not Contaminated Drinking Water
First, let me provide a bit of background on hydraulic fracturing. I find that most people who are against fracking don’t actually know what it is. The EPA report goes out of its way to blur the lines as well by lumping it all into “activities in the hydraulic fracturing water cycle.” By doing this, if a guy driving a truck filled with fracking chemicals has a wreck, it’s a “fracking issue.” So let’s define some terms.
Hydraulic fracturing has been around since the late 1940s and has now been used in the U.S. more than a million times to increase production from oil and gas wells. Fracking involves pumping water, chemicals, and a proppant down an oil or gas well under high pressure to break open channels (fractures) in the reservoir rock trapping the deposit. Oil and gas do not travel easily through these formations, which is why they need to be fractured. The proppant is a granular material (usually sand) designed to hold those channels open, allowing the oil or natural gas to flow to the well bore. While fracking has been around for decades, two developments in recent years are responsible for thrusting the technique into the public eye. The first is the fairly recent combination of fracking with horizontal drilling, another common technique in the oil and gas industry.
Like fracking, horizontal drilling was invented decades ago, and has been widely used in the oil and gas industry since the 1980s. As its name implies, horizontal drilling involves drilling down to an oil or gas deposit and then turning the drill horizontal to the formation to access a greater fraction of the deposit.
The marriage of these two techniques of hydraulic fracturing and horizontal drilling enabled the shale oil and gas boom in the U.S.
But the second development is what primarily thrust the technique(s) into the public spotlight. Some of the shale oil and gas formations are in areas that had never experienced significant fossil fuel development. Many locals resented this intrusion into their lives, and anti-fracking sentiments fed into a great deal of misinformation around the technique.
The movie Gasland is a perfect example. Director Josh Fox, whose family farm lies atop the Marcellus Shale in Pennsylvania, relied on misinformation and appeals to emotion instead of scientific data. Nevertheless, it was embraced by anti-fracking activists, and many who had never heard of fracking became convinced the technique was regularly polluting water supplies.
The concern among anti-fracking activists was that the fractures that allow oil and gas to reach the well bore could also allow oil, gas, and chemicals to seep into water supplies. But this is a remote possibility because a mile or more of rock typically separates the oil and gas formation being fractured from underground water resources. The fractures themselves extend only a few hundred feet, so it is unsurprising that there has never been a proven case of chemicals migrating from a fracked zone into water supplies.
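The geometric point can be put in rough numbers. A minimal sketch using the figures quoted in the text (a mile or more of separation, fractures extending a few hundred feet); the specific values chosen are illustrative:

```python
# Illustrative geometry only; figures rounded from the text above.
FEET_PER_MILE = 5280

separation_ft = 1 * FEET_PER_MILE   # rock between frac zone and aquifer: a mile or more
fracture_extent_ft = 300            # fractures extend "a few hundred feet"

# Even at the low end of the quoted separation, the fractures span only a
# small fraction of the intervening rock column:
fraction_spanned = fracture_extent_ft / separation_ft
print(f"Fractures span ~{fraction_spanned:.0%} of the separating rock column")
```

On these rough numbers the fractures cover well under a tenth of the vertical separation, which is the quantitative basis for calling migration into aquifers a remote possibility.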
That hasn’t stopped some from claiming that fracking has contaminated water supplies. However, those cases have always been the result of some activity peripheral to fracking. For example, an improperly cemented well can leak. That has in fact happened, leading to the charge that “fracking contaminated the water.” The important distinction is that this is not a result of the fracking process: a well may leak regardless of whether it was fracked. But activists (and now the EPA) seem bent on blurring the lines to the greatest extent possible by lumping many peripheral activities into the “fracking process.”
In 2010, Congress asked the EPA to investigate the safety of fracking. In 2015, the EPA issued a draft report. The bombshell statement from that report was that there was no evidence that fracking had “led to widespread, systemic impacts on drinking water resources in the United States.” This report was cheered by the fossil fuel industry, but caused a backlash with environmentalists, and spawned many counterclaims that the “fracking process” had led to contaminated water.
In December 2016 the EPA released its final report on the topic: Hydraulic Fracturing for Oil and Gas: Impacts from the Hydraulic Fracturing Water Cycle on Drinking Water Resources in the United States. Environmentalists were quick to note that the EPA had deleted its previous claim of no evidence of widespread water contamination, and were now reporting that “hydraulic fracturing activities can impact drinking water resources under some circumstances.” This story from The New York Times, for instance, was pretty typical of the reporting on the issue: Reversing Course, E.P.A. Says Fracking Can Contaminate Drinking Water.
But did the EPA actually reverse course? No. They gave examples where fracking could contaminate water. For instance they state that “Injection of hydraulic fracturing fluids directly into groundwater resources” can cause contamination. Yeah, no joke. Likewise, filling your car with gasoline can contaminate drinking water, because if you spill the gasoline all over the ground, it can get into the drinking water.
The EPA’s final report on hydraulic fracturing wasn’t that much different from the draft report. As in the previous report, the EPA noted that activities related to — but not exclusive to — fracking, have contaminated water supplies. Chemical spills happen all the time, but if the chemicals in question are for fracking, it becomes a “fracking issue.” Note that if the chemicals in question are to be used for fighting fires, we don’t say “firefighting contaminates water.” We should properly identify and address the actual problem, which in this instance would be the cause of the chemical spill.
Ultimately, the final report deleted a phrase from the draft report that there was no evidence of widespread impact on water supplies, and selectively used hypotheticals to show how fracking “could” contaminate water supplies. This is the Obama Administration laying down one more speed bump for the oil and gas industry while it still can.
A new Environment America “report” uses a couple old anti-fracking tactics — exploitation of children and blatant misinformation from activist studies — to try to stoke fears and rally support for its extremist call to ban fracking nationwide.
The ominously-titled “Dangerous and Close: Fracking Puts the Nation’s Most Vulnerable People at Risk” finds there are nearly 2,000 child care facilities, more than 1,300 schools, nearly 250 nursing care providers, and more than 100 hospitals within a one-mile radius of fracked wells in the nine states examined, stating:
“Given the scale and severity of fracking’s impacts, fracking should be prohibited wherever possible and existing wells should be shut down beginning with those near institutions that serve our most vulnerable populations.”
Here are the report’s most egregious claims, followed by the facts.
Environment America Claim: “Fracking creates a range of threats to our health, including creating toxic air pollution that can reduce lung function even among healthy people, trigger asthma attacks, and has been linked to premature death. Children and the elderly are especially vulnerable to fracking’s health risks.”
A pumpjack works in the Bakken shale of North Dakota.
REALITY: There is actually ample evidence that fracking is improving overall air quality and health by reducing major pollutants such as fine particulate matter, sulfur dioxide and nitrogen dioxide. Furthermore, all three studies EA singles out as “evidence” that close proximity to fracking sites can lead to a myriad of adverse health effects have been thoroughly debunked.
EA even cites an Earthworks study that claims “A series of 2012 measurements by officials of the Texas Commission on Environmental Quality (TCEQ) found VOCs levels so high at one fracking location that the officials themselves were forced to stop taking measurements and leave the site because it was too dangerous for them to remain.”
EA fails to mention TCEQ responded to Earthworks’ report by saying the agency has collected “several millions of data points for volatile organic compounds” in the Barnett Shale and Eagle Ford Shale and “Overall, the monitoring data provide evidence that shale play activity does not significantly impact air quality or pose a threat to human health.”
EA also conveniently ignores that the West Virginia Department of Environmental Protection (DEP) and the Colorado Department of Public Health (CDPH) have conducted air monitoring near well sites as well and found no credible risk to public health.
Environment America Claim: “Currently, oil and gas companies are exempt from key provisions in the Safe Drinking Water Act, the Clean Air Act, the Clean Water Act, and the Resource Conservation and Recovery Act.”
REALITY: The notion that the oil and natural gas industry is under-regulated is an absolutely absurd narrative that activists such as EA continue to push. Oil and gas production activities are subject to eight federal laws, including all relevant provisions of the Safe Drinking Water Act (SDWA); Clean Water Act (CWA); Clean Air Act (CAA); Resource Conservation and Recovery Act (RCRA); Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA); Emergency Planning and Community Right-to-Know Act (EPCRA); Toxic Substances Control Act (TSCA); and Federal Insecticide, Fungicide and Rodenticide Act (FIFRA). The oil and gas production sector is also heavily regulated at the state level.
A drilling rig works in the Eagle Ford shale, South Texas region.
Environment America Claim: “Exposure to low levels of many of the chemicals used in or generated by oil and gas extraction activities can contribute to a variety of health effects, including asthma, cancer, birth defects, damage to the reproductive system and impaired brain development. For example, children’s long-term exposure to low levels of benzene, generally classified as a carcinogen, also harms respiratory health.”
REALITY: It is essential to understand that toxicity is completely dependent on dose level and exposure. The mere presence of benzene, for example, does not mean that it is present at toxic levels, as the numerous air monitoring studies referred to earlier illustrate. EA insinuates that even low-level benzene exposure is harmful. But benzene is actually present in countless everyday products such as shampoo, toothpaste, paint, PVC pipes and many plastics.
Environment America Claim: “Fracking targets the oil and gas trapped in shale formations… Sometimes that means wells are drilled in rural areas, such as portions of Colorado or North Dakota, and sometimes that wells are in densely populated areas, such as Los Angeles…”
REALITY: There are no fracking or unconventional oil production operations in the city of Los Angeles — none. EA attempts to justify this claim by employing the common activist tactic of expanding the definition of fracking to encompass all oil and gas related activity:
“Throughout this report, we refer to “fracking” as including all of the activities needed to bring a well into production using high-volume hydraulic fracturing. This includes drilling the well, operating that well, processing the gas or oil produced from that well, and delivering the gas or oil to market. The oil and gas industry often uses a more restrictive definition of “fracking” that includes only the actual moment in the extraction process when rock is fractured – a definition that obscures the broad changes to environmental, health and community conditions that result from the use of high-volume hydraulic fracturing in oil and gas extraction.”
Fracking is not used as a completion technique at any of the urban drill sites in the city. All of the facilities recover oil through traditional water flood operations. The attempt to shoehorn these conventional operations under the banner of fracking proves that EA is not engaged in an honest attempt to inform the public.
Environment America Claim: “Because of the health hazard created by radon, Pennsylvania has a long record of radon measurements in homes. An analysis of those radon measurements by researchers at Johns Hopkins School of Public Health found that radon levels have increased in counties with extensive fracking since 2004, and also found elevated radon levels on the first floor of houses located within 12.5 miles of a fracked well.”
REALITY: The Johns Hopkins study EA is referring to actually found the highest concentrations of radon were in areas with no shale development, and its direct sampling found radon that was not linked to fracking. As is the case with so many of the studies EA uses as evidence, the authors merely speculated that fracking was the cause.
Environment America Claim: “Oil and gas production at fracked wells releases volatile organic compounds and nitrogen oxides that contribute to the formation of smog.”
REALITY: Oil and gas production is not a major contributor to ground-level ozone.
As EID has emphasized before, publicly available information demonstrates oil and gas production is not the significant contributor to ozone levels. Vehicle exhaust adds far more non-methane volatile organic compounds (NMVOCs) and nitrogen oxides (NOx) — both precursors to ground-level ozone — to the atmosphere than oil and gas production, as data from the EPA’s 2016 Greenhouse Gas Inventory clearly demonstrates.
Oil and gas activities account for just six percent of total NOx emissions, which play more of a role in ground-level ozone formation than VOCs. Moreover, a recent NOAA report found that “The increased use of natural gas has…led to emissions reductions of NOx (40%) and SO2 (44%).”
Environment America Claim: “Contaminants can reach water supplies through faulty well construction, through surface spills, through improper wastewater disposal, or potentially through migration from the shale layer itself.”
REALITY: The EPA’s landmark five-year study confirmed, “hydraulic fracturing activities have not led to widespread, systemic impacts to drinking water resources,” and at least 15 other studies say the fracking process, specifically, has not contaminated groundwater.
Conclusion
EA’s claims in this report, plainly aimed at generating headlines, are sweeping indeed.
“Schools and day care centers should be safe places for kids to play and learn,” said Rachel Richardson, director of Environment America’s Stop Drilling program and co-author of the report. “Unfortunately our research shows far too many kids may be exposed to dirty air and toxic chemicals from fracking right next door.”
The problem is EA’s “research” merely found that there are some schools, nursing homes and hospitals near oil and natural gas development. It made no effort to collect its own data to support its claim that this is leading to adverse health effects.
Instead, it relied on long-debunked studies and tired fear tactics. Maybe that’s why the report’s hyperbolic claim that it “serves as a reminder of the unacceptable dangers of fracking, its potential to harm, and the need to bring this risky form of drilling to an end” was virtually ignored by the media.
3. Extensive Research Study Found No Link Between Groundwater Pollution and Fracking
(Source: National Science Foundation and Duke University study, summarized by Jeffrey Folks for American Thinker in “The science is settled, fracking is safe.”)
Among the 130 wells studied, the researchers found only a subset of cases, including seven in Pennsylvania and one in Texas, in which faulty well construction or cementing was to blame for the seepage of gases into groundwater. According to Professor Avner Vengosh of Duke University, “[t]hese results appear to rule out the migration of methane up into drinking water aquifers from depth because of horizontal drilling or hydraulic fracturing.” That is to say, in the rare cases where it occurs, gases are entering the water supply from outside the borehole as a result of faulty well construction or poor cementing, both of which are manageable problems.
While the new report answers the most important question, proving beyond doubt that fracking itself does not cause gas to seep into the water supply, it does not address several other important questions. One of these is the frequency of contamination of water supplies by naturally occurring petroleum, methane, and other gases.
Natural pollution of this kind would seem to be extremely common, and in fact this natural process has been known for millennia. At sites where petroleum seeped to the surface, as in the vicinity of the 19th-century Drake oil field in Pennsylvania, Native Americans had made use of the oily substance as a lubricant for hundreds if not thousands of years. That oil, flowing naturally to the surface, was “contaminating” nearby streams and groundwater.
What humans add to natural emissions as a result of drilling is so minor as to be of little consequence. If some future study confirmed this fact, it would help to counter the myth that oil and gas drilling is polluting an otherwise pure land and sea environment. The reality is that wherever shale and other carbon-rich formations occur, natural leakage of petroleum and/or methane is inevitable. Oil and gas are naturally occurring features that are constantly interacting with the environment and entering the water supply through natural processes. As is so often the case, the idea that there once existed an environment free of all that modern intellectuals might consider unpleasant is simply a fantasy.
The NSF/Duke report is crucial to the debate over the safety of hydraulic fracturing. The oil and gas industry has already achieved a near perfect safety record, given the handful of failed wells in proportion to more than one million that have been fracked. The industry needs to continue working to achieve certainty that wells do not fail. It also needs to do a better job of communicating its intention to do so to a skeptical public.
Members of Congress, gas companies, news organizations, drilling opponents: They’ve all made bold claims about hydraulic fracturing (fracking) and the U.S. supply of underground natural gas. We take on 10 controversial quotes about natural gas and set the record straight.
“WE ARE THE SAUDI ARABIA OF NATURAL GAS.” SEN. JOHN KERRY, D-MASS., MAY 2010
Less than a decade ago, industry analysts and government officials fretted that the United States was in danger of running out of gas. No more. Over the past several years, vast caches of natural gas trapped in deeply buried rock have been made accessible by advances in two key technologies: horizontal drilling, which allows vertical wells to turn and snake more than a mile sideways through the earth, and hydraulic fracturing, or fracking. Developed more than 60 years ago, fracking involves pumping millions of gallons of chemically treated water into deep shale formations at pressures of 9000 pounds per square inch or more. This fluid cracks the shale or widens existing cracks, freeing hydrocarbons to flow toward the well.
These advances have led to an eightfold increase in shale gas production over the past decade. According to the Energy Information Administration, shale gas will account for nearly half of the natural gas produced in the U.S. by 2035. But the bonanza is not without controversy, and nowhere, perhaps, has the dispute over fracking grown more heated than in the vicinity of the Marcellus Shale. According to Terry Engelder, a professor of geosciences at Penn State, the vast formation sprawling primarily beneath West Virginia, Pennsylvania and New York could produce an estimated 493 trillion cubic feet of gas over its 50- to 100-year life span. That’s nowhere close to Saudi Arabia’s total energy reserves, but it is enough to power every natural gas-burning device in the country for more than 20 years. The debate over the Marcellus Shale will shape national energy policy—including how fully, and at what cost, we exploit this vast resource.
“HYDRAULIC FRACTURING SQUANDERS OUR PRECIOUS WATER RESOURCES.” Green Party of Pennsylvania, April 2011
There is no question that hydraulic fracturing uses a lot of water: It can take up to 7 million gallons to frack a single well, and at least 30 percent of that water is lost forever, after being trapped deep in the shale. And while there is some evidence that fracking has contributed to the depletion of water supplies in drought-stricken Texas, a study by Carnegie Mellon University indicates the Marcellus region has plenty of water and, in most cases, an adequate system to regulate its usage. The amount of water required to drill all 2916 of the Marcellus wells permitted in Pennsylvania in the first 11 months of 2010 would equal the amount of drinking water used by just one city, Pittsburgh, during the same period, says environmental engineering professor Jeanne VanBriesen, the study’s lead author. Plus, she notes, water withdrawals of this new industry are taking the place of water once used by industries, like steel manufacturing, that the state has lost. Hydrogeologist David Yoxtheimer of Penn State’s Marcellus Center for Outreach and Research gives the withdrawals more context: Of the 9.5 billion gallons of water used daily in Pennsylvania, natural gas development consumes 1.9 million gallons a day (mgd); livestock use 62 mgd; mining, 96 mgd; and industry, 770 mgd.
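The scale of these withdrawals is easy to check with simple arithmetic. A minimal sketch using the sector figures quoted above (the variable names are mine):

```python
# Pennsylvania daily water withdrawals in millions of gallons per day (mgd),
# per the figures quoted above from hydrogeologist David Yoxtheimer.
withdrawals_mgd = {
    "natural gas development": 1.9,
    "livestock": 62,
    "mining": 96,
    "industry": 770,
}
total_mgd = 9_500  # statewide total: ~9.5 billion gallons per day

# Share of statewide daily water use consumed by gas development.
gas_share = withdrawals_mgd["natural gas development"] / total_mgd
print(f"Gas development: {gas_share:.3%} of statewide withdrawals")
```

Gas development comes out at roughly two hundredths of one percent of the state's daily water use, a small fraction of what livestock, mining, or industry draws.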
“NATURAL GAS IS CLEANER, CHEAPER, DOMESTIC, AND IT’S VIABLE NOW.” OILMAN TURNED NATURAL-GAS CHEERLEADER T. BOONE PICKENS, SEPTEMBER 2009
Burning natural gas is cleaner than oil or gasoline, and it emits half as much carbon dioxide, less than one-third the nitrogen oxides, and 1 percent as much sulfur oxides as coal combustion. But not all shale gas makes it to the fuel tank or power plant. The methane that escapes during the drilling process, and later as the fuel is shipped via pipelines, is a significant greenhouse gas. At least one scientist, Robert Howarth at Cornell University, has calculated that methane losses could be as high as 8 percent. Industry officials concede that they could be losing anywhere between 1 and 3 percent. Some of those leaks can be prevented by aggressively sealing condensers, pipelines and wellheads. But there’s another upstream factor to consider: Drilling is an energy-intensive business. It relies on diesel engines and generators running around the clock to power rigs, and heavy trucks making hundreds of trips to drill sites before a well is completed. Those in the industry say there’s a solution at hand to lower emissions—using natural gas itself to power the process. So far, however, few companies have done that.
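The stakes of the leakage debate can be made concrete with a rough break-even estimate: at what leak rate does gas lose its CO2 advantage over coal? This is an illustrative sketch only; every constant below is a round assumed value (typical ballpark figures, not numbers from the article), so the result is an order-of-magnitude check, not a finding:

```python
# Illustrative break-even leak-rate estimate. All constants are assumed
# round values, not figures from the article.
CO2_GAS = 50.0     # g CO2 per MJ from burning natural gas (assumed)
CO2_COAL = 95.0    # g CO2 per MJ from burning coal (assumed)
CH4_PER_MJ = 20.0  # g of methane burned to deliver 1 MJ (assumed)
GWP_CH4 = 86.0     # assumed 20-year global warming potential of methane

def gas_co2e(leak_fraction):
    """CO2-equivalent per MJ delivered, counting leaked methane upstream."""
    leaked = CH4_PER_MJ * leak_fraction / (1.0 - leak_fraction)
    return CO2_GAS + leaked * GWP_CH4

# Scan for the leak rate at which gas matches coal on a CO2e basis.
f = 0.0
while gas_co2e(f) < CO2_COAL:
    f += 0.001
print(f"Break-even leak rate ≈ {f:.1%}")
```

Under these assumptions the break-even falls in the low single digits of percent, which is why the gap between the industry's conceded 1 to 3 percent and Howarth's 8 percent matters so much.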
“[THERE’S] NEVER BEEN ONE CASE—DOCUMENTED CASE—OF GROUNDWATER CONTAMINATION IN THE HISTORY OF THE THOUSANDS AND THOUSANDS OF HYDRAULIC FRACTURING [WELLS]” SEN. JAMES INHOFE, R-OKLA., APRIL 2011
The senator is incorrect. In the past two years alone, a series of surface spills, including two blowouts at wells operated by Chesapeake Energy and EOG Resources and a spill of 8000 gallons of fracking fluid at a site in Dimock, Pa., have contaminated groundwater in the Marcellus Shale region. But the idea stressed by fracking critics that deep-injected fluids will migrate into groundwater is mostly false. Basic geology prevents such contamination from starting below ground. A fracture caused by the drilling process would have to extend through the several thousand feet of rock that separate deep shale gas deposits from freshwater aquifers. According to geologist Gary Lash of the State University of New York at Fredonia, the intervening layers of rock have distinct mechanical properties that would prevent the fissures from expanding a mile or more toward the surface. It would be like stacking a dozen bricks on top of each other, he says, and expecting a crack in the bottom brick to extend all the way to the top one. What’s more, the fracking fluid itself, thickened with additives, is too dense to ascend upward through such a channel. EPA officials are closely watching one place for evidence otherwise: tiny Pavillion, Wyo., a remote town of 160 where high levels of chemicals linked to fracking have been found in groundwater supplies. Pavillion’s aquifer sits several hundred feet above the gas cache, far closer than aquifers atop other gas fields. If the investigation documents the first case of fracking fluid seeping into groundwater directly from gas wells, drillers may be forced to abandon shallow deposits—which wouldn’t affect Marcellus wells.
“THE GAS ERA IS COMING, AND THE LANDSCAPE NORTH AND WEST OF [NEW YORK CITY] WILL INEVITABLY BE TRANSFORMED AS A RESULT. WHEN THE VALVES START OPENING NEXT YEAR, A LOT OF POOR FARM FOLK MAY BECOME TEXAS RICH. AND A LOT OF OTHER PEOPLE—ESPECIALLY THE ECOSENSITIVE NEW YORK CITY CROWD THAT HAS SETTLED AMONG THEM—WILL BE APOPLECTIC AS THEIR PRISTINE WEEKEND SANCTUARY IS CONVERTED INTO AN INDUSTRIAL ZONE, CRISSCROSSED WITH DRILL PADS, PIPELINES, AND ACCESS ROADS.” New York magazine, Sept. 21, 2008
Much of the political opposition to fracking has focused on the Catskill region, headwaters of the Delaware River and the source of most of New York City’s drinking water. But the expected boom never happened—there’s not enough gas in the watershed to make drilling worthwhile. “No one has to get excited about contaminated New York City drinking water,” Penn State’s Engelder told the Times Herald-Record of Middletown, N.Y., in April. The shale is so close to the surface that it’s not concentrated in large enough quantities to make recovering it economically feasible. But just to the west, natural gas development is dramatically changing the landscape. Drilling rigs are running around the clock in western Pennsylvania. Though buoyed by the economic windfall, residents fear that regulators can’t keep up with the pace of development. “It’s going to be hard to freeze-frame and say, ‘Let’s slow down,’” Sen. Robert P. Casey Jr., D-Pa., said last fall. “That makes it more difficult for folks like us, who say we want to create the jobs and opportunity in the new industry, but we don’t want to do it at the expense of water quality and quality of life.”
“NATURAL GAS IS AFFORDABLE, ABUNDANT AND AMERICAN. IT COSTS ONE-THIRD LESS TO FILL UP WITH NATURAL GAS THAN TRADITIONAL GASOLINE.” REP. JOHN LARSON, D-CONN., CO-SPONSOR OF H.R. 1380, A MEASURE THAT WOULD PROVIDE TAX INCENTIVES FOR THE DEVELOPMENT AND PURCHASE OF NATURAL GAS VEHICLES, MARCH 2011
That may be true. Plus, there’s another incentive: Vehicles powered by liquefied natural gas, propane or compressed natural gas run cleaner than cars with either gasoline or diesel in the tank. According to the Department of Energy, if the transportation sector switched to natural gas, it would cut the nation’s carbon-monoxide emissions by at least 90 percent, carbon-dioxide emissions by 25 percent and nitrogen-oxide emissions by up to 60 percent. But it’s not realistic: Nationwide, there are only about 3500 service stations (out of 120,000) that offer natural gas-based automotive fuel, and it would cost billions of dollars and take years to develop sufficient infrastructure to make that fuel competitive with gasoline or diesel. And only Honda makes a car that can run on natural gas. That doesn’t mean natural gas has no role in meeting the nation’s short-term transportation needs. In fact, buses in several cities now rely on it, getting around the lack of widespread refueling opportunities by returning to a central terminal for a fill-up. The same could be done for local truck fleets. But perhaps the biggest contribution natural gas could make to America’s transportation picture would be more indirect—as a fuel for electric-generation plants that will power the increasingly popular plug-in hybrid vehicles.
“DO NOT DRINK THIS WATER” HANDWRITTEN SIGN IN THE DOCUMENTARY GASLAND, 2010
It’s an iconic image, captured in the 2010 Academy Award-nominated documentary GasLand. A Colorado man holds a flame to his kitchen faucet and turns on the water. The pipes rattle and hiss, and suddenly a ball of fire erupts. It appears to be a damning indictment of the gas drilling nearby. But Colorado officials determined the gas wells weren’t to blame; instead, the homeowner’s own water well had been drilled into a naturally occurring pocket of methane. Nonetheless, up to 50 layers of natural gas can occur between the surface and deep shale formations, and methane from these shallow deposits has intruded on groundwater near fracking sites. In May, Pennsylvania officials fined Chesapeake Energy $1 million for contaminating the water supplies of 16 families in Bradford County. Because the company had not properly cemented its boreholes, gas migrated up along the outside of the well, between the rock and steel casing, into aquifers. The problem can be corrected by using stronger cement and processing casings to create a better bond, ensuring an impermeable seal.
“AS NEW YORK GEARS UP FOR A MASSIVE EXPANSION OF GAS DRILLING IN THE MARCELLUS SHALE, STATE OFFICIALS HAVE MADE A POTENTIALLY TROUBLING DISCOVERY ABOUT THE WASTEWATER CREATED BY THE PROCESS: IT’S RADIOACTIVE.” ProPublica, November 2009
Shale has a radioactive signature—from uranium isotopes such as radium-226 and radium-228—that geologists and drillers often measure to chart the vast underground formations. The higher the radiation levels, the greater the likelihood those deposits will yield significant amounts of gas. But that does not necessarily mean the radioactivity poses a public health hazard; after all, some homes in Pennsylvania and New York have been built directly on Marcellus shale. Tests conducted earlier this year in Pennsylvania waterways that had received treated water—both produced water (the fracking fluid that returns to the surface) and brine (naturally occurring water that contains radioactive elements, as well as other toxins and heavy metals from the shale)—found no evidence of elevated radiation levels. Conrad Dan Volz, former scientific director of the Center for Healthy Environments and Communities at the University of Pittsburgh, is a vocal critic of the speed with which the Marcellus is being developed—but even he says that radioactivity is probably one of the least pressing issues. “If I were to bet on this, I’d bet that it’s not going to be a problem,” he says.
“CLAIMING THAT THE INFORMATION IS PROPRIETARY, DRILLING COMPANIES HAVE STILL NOT COME OUT AND FULLY DISCLOSED WHAT FRACKING FLUID IS MADE OF.” Vanity Fair, June 2010
Under mounting pressure, companies such as Schlumberger and Range Resources have posted the chemical compounds used in some of their wells, and in June, Texas became the first state to pass a law requiring full public disclosure. This greater transparency has revealed some oddly benign ingredients, such as instant coffee and walnut shells—but also some known and suspected carcinogens, including benzene and methanol. Even if these chemicals can be found under kitchen sinks, as industry points out, they’re poured down wells in much greater volumes: about 5000 gallons of additives for every 1 million gallons of water and sand. A more pressing question is what to do with this fluid once it rises back to the surface. In Texas’s Barnett Shale, wastewater can be reinjected into porous rock 1.5 miles below ground. This isn’t feasible in the Marcellus Shale region; the underlying rocks are not porous enough. Currently, a handful of facilities in Pennsylvania are approved to treat the wastewater. More plants, purpose-built for the task, are planned. In the meantime, most companies now recycle this water to drill their next well.
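The additive ratio quoted above, about 5000 gallons per 1 million gallons of water and sand, scales straightforwardly to the per-well water figures cited earlier in the article; a minimal sketch:

```python
# Additive concentration in fracking fluid, using the figures quoted above.
additives_gal = 5_000
water_gal = 1_000_000  # gallons of water and sand per 5,000 gallons of additives

ratio = additives_gal / water_gal  # 0.5 percent by volume
print(f"Additives ≈ {ratio:.1%} of the carrier fluid")

# Scaled to the 7-million-gallon per-well figure cited earlier:
job_additives = 7_000_000 * ratio
print(f"≈ {job_additives:,.0f} gallons of additives for a single well")
```

So the additives are a half-percent of the fluid by volume, yet still tens of thousands of gallons per well, which is why the disposal question in the paragraph above matters.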
“THE INCREASING ABUNDANCE OF CHEAP NATURAL GAS, COUPLED WITH RISING DEMAND FOR THE FUEL FROM CHINA AND THE FALL-OUT FROM THE FUKUSHIMA NUCLEAR DISASTER IN JAPAN, MAY HAVE SET THE STAGE FOR A ‘GOLDEN AGE OF GAS.” WALL STREET JOURNAL SUMMARIZING AN INTERNATIONAL ENERGY AGENCY REPORT, JUNE 6, 2011
There’s little question that the United States, with 110 years’ worth of natural gas (at the 2009 rate of consumption), is destined to play a major role in the fuel’s development. But even its most ardent supporters, men like T. Boone Pickens, concede that it should be a bridge fuel between more polluting fossil fuels and cleaner, renewable energy. In the meantime, the U.S. should continue to invest in solar and wind, conserve power and implement energy-efficient technology. Whether we can effectively manage our natural gas resource while developing next-gen sources remains to be seen. Margie Tatro, director of fuel and water systems at Sandia National Laboratories, says, “I think natural gas is a transitioning fuel for the electricity sector until we can get a greater percentage of nuclear and renewables on the grid.”
The United States has made massive improvements in air quality over the past decade, and study after study has shown that the increased use of natural gas for electricity generation – made possible by the shale revolution – is the reason we’ve achieved this feat.
This progress is the centerpiece of Energy In Depth’s new report – Compendium of Studies Demonstrating the Safety and Health Benefits of Fracking – which includes data from 23 peer-reviewed studies, 17 government health and regulatory agencies and reports from 10 research institutions that clearly demonstrate:
• Increased natural gas use – thanks to hydraulic fracturing – has led to dramatic declines in air pollution. The United States is the number one oil and gas producer in the world and it has some of the lowest death rates from air pollution in the world. Numerous studies have shown that pollution has plummeted as natural gas production has soared.
• Emissions from well sites and associated infrastructure are below thresholds regulatory authorities consider to be a threat to public health – that’s the conclusion of multiple studies using air monitors that measure emissions directly.
• There is no credible evidence that fracking causes or exacerbates asthma. In fact, asthma rates and asthma hospitalizations across the United States have declined as natural gas production has ramped up.
• There is no credible evidence that fracking causes cancer. Studies that have directly measured emissions at fracking sites have found emissions are below the threshold that would be harmful to public health.
• There is no credible evidence that fracking leads to adverse birth outcomes. In fact, adverse birth outcomes have decreased while life expectancy has increased in areas that are ramping up natural gas use.
• Fracking is not a credible threat to groundwater. Study after study has shown that there are no widespread, systemic impacts to drinking water from hydraulic fracturing.
It is well known that the shale revolution has been a boon to our nation’s economy, its geopolitical position, and the millions of consumers and manufacturers who continue to benefit from historically low energy costs.
But the case in support of shale’s salubrious effect on air quality and health continues to be an underreported phenomenon – this new report puts the health benefits of our increased use of natural gas in the spotlight.
Conclusion
To be clear, no form of energy development, whether we’re talking about fossil fuels or renewables, is risk free. But the data clearly show, time and time again, that emissions from fracking are not a credible risk to public health.
In fact, the data show that enormous reductions in pollution across the board are attributable to the significant increases in natural gas consumption that hydraulic fracturing has made possible.
They show power plant emissions of SO2 declining by 86 percent, emissions of NOx declining by 67 percent, and emissions of mercury by 55 percent. They also show hospitalizations for asthma declining as natural gas ramps up. At the same time life expectancy and birth outcomes have improved.
And, of course, all these positive health outcomes can be largely traced back to significantly cleaner air, thanks to fracking.
Our petro-industrial civilization produces and consumes a seemingly diverse suite of energies: oil, coal, ethanol, hydroelectricity, gasoline, geothermal heat, hydrogen, solar power, propane, uranium, wind, wood, dung. At the most foundational level, however, there are just two sources of energy, and together they provide more than 99 percent of the power for our civilization: solar and nuclear. Every other significant energy source is a form of one of these two. Most are forms of solar.
When we burn wood we release previously captured solar energy. The firelight we see and the heat we feel are energies from sunlight that arrived decades ago. That sunlight was transformed into chemical energy in the leaves of trees and used to form wood. And when we burn that wood, we turn that chemical-bond energy back into light and heat. Energy from wood is a form of contemporary solar energy because it embodies solar energy mostly captured years or decades ago, as distinct from fossil energy sources such as coal and oil that embody solar energy captured many millions of years ago.
Straw and other biomass are a similar story: contemporary solar energy stored as chemical-bond energy then released through oxidation in fire. Ethanol, biodiesel, and other biofuels are also forms of contemporary solar energy (though subsidized by the fossil fuels used to create fertilizers, fuels, etc.).
Coal, natural gas, and oil products such as gasoline and diesel fuel are also, fundamentally, forms of solar energy, but not contemporary solar energy: fossil. The energy in fossil fuels is the sun’s energy that fell on leaves and algae in ancient forests and seas. When we burn gasoline in our cars, we are propelled to the corner store by ancient sunlight.
Wind power is solar energy. Heat from the sun creates air-temperature differences that drive air movements that can be turned into electrical energy by wind turbines, mechanical work by windmills, or geographic motion by sailing ships.
Hydroelectric power is solar energy. The sun evaporates and lifts water from oceans, lakes, and other water bodies, and that water falls on mountains and highlands where it is aggregated by terrain and gravity to form the rivers that humans dam to create hydro-power.
Of course, solar energy (both photovoltaic electricity and solar-thermal heat) is solar energy.
Approximately 86 percent of our non-food energy comes from fossil-solar sources such as oil, natural gas, and coal. Another 9 percent comes from contemporary solar sources, mostly hydro-electric, with a small but rapidly growing contribution from wind turbines and solar photovoltaic panels. In total, then, 95 percent of the energy we use comes from solar sources—contemporary or fossil. As is obvious upon reflection, the Sun powers the Earth.
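The bookkeeping in this paragraph can be tabulated directly; a small sketch using the shares as stated in the text:

```python
# Energy-supply shares (percent of non-food energy) as stated in the text.
shares_pct = {
    "fossil solar (oil, natural gas, coal)": 86,
    "contemporary solar (hydro, wind, PV, biomass)": 9,
}

solar_total = sum(shares_pct.values())
print(f"Solar-derived sources: {solar_total}%")  # 95%, as the text says
print(f"Remainder (nuclear, plus tidal/geothermal traces): {100 - solar_total}%")
```

The remainder, about 5 percent, is almost entirely nuclear, with tidal and residual geothermal heat contributing only a small fraction of one percent.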
The only major energy source that is not solar-based is nuclear power: energy from the atomic decay of unstable, heavy elements buried in the ground billions of years ago when our planet was formed. We utilize nuclear energy directly, in reactors, and also indirectly, when we tap geothermal energies (atomic decay provides 60-80 percent of the heat from within the Earth). Uranium and other radioactive elements were forged in the cores of stars that exploded before our Earth and Sun were created billions of years ago. The source for nuclear energy is therefore not solar, but it is nonetheless stellar: energized not by our sun, but by other, long-dead stars. Our universe is energized by its stars.
There are two minor exceptions to the rule that our energy comes from nuclear and solar sources: Tidal power results from the interaction of the moon’s gravitational field and the initial rotational motion imparted to the Earth; and geothermal energy is, in its minor fraction, a product of residual heat within the Earth, and of gravity. Tidal and geothermal sources provide just a small fraction of one percent of our energy supply.
Some oft-touted energy sources are not mentioned above because they are not energy sources at all; rather, they are energy-storage media. Hydrogen is one example. We can create purified hydrogen by, for instance, using electricity to split water into its oxygen and hydrogen atoms. But this requires energy inputs, and the energy we get out when we burn hydrogen or react it in a fuel cell is less than the energy we put in to purify it. Hydrogen, therefore, functions like a gaseous battery: an energy carrier, not an energy source.
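The "gaseous battery" point can be made concrete with a little round-trip accounting. This is a minimal sketch; the efficiency figures (roughly 70 percent for electrolysis, 60 percent for a fuel cell) are illustrative assumptions, not measured values from the article.

```python
# Illustrative round-trip energy accounting for hydrogen as a storage medium.
# The efficiencies below are assumptions for illustration only.

def hydrogen_round_trip(energy_in_kwh, eta_electrolysis=0.70, eta_fuel_cell=0.60):
    """Return the electrical energy recovered after storing energy as hydrogen."""
    stored = energy_in_kwh * eta_electrolysis   # energy embodied in the H2
    recovered = stored * eta_fuel_cell          # energy returned by the fuel cell
    return recovered

recovered = hydrogen_round_trip(100.0)
print(f"100.0 kWh in -> {recovered:.1f} kWh out")  # 42.0 kWh: a net loss, as with any battery
```

Whatever the exact efficiencies, the product of two fractions less than one is always less than one, which is why hydrogen cannot be a primary energy source.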
Knowing that virtually all energy flows have their origins in our sun or other stars helps us critically evaluate oft-heard ideas that there may exist undiscovered energy sources. To the contrary, it is extremely unlikely that there are energy sources we’ve overlooked. The solution to energy supply constraints and climate change is not likely to be “innovation” or “technology.” Though some people hold out hope for nuclear fusion (creating a small sun on Earth rather than utilizing the conveniently-placed large sun in the sky) it is unlikely that fusion will be developed and deployed this century. Thus, the suite of energy sources we now employ is probably the suite that will power our civilization for generations to come. And since fossil solar sources are both limited and climate-disrupting, an easy prediction is that contemporary solar sources such as wind turbines and solar photovoltaic panels will play a dominant role in the future.
Summary
Understanding that virtually all energy sources are solar or nuclear in origin reduces the intellectual clutter and clarifies our options. We are left with three energy supply categories when making choices about our future:
– Fossil solar: oil, natural gas, and coal;
– Contemporary solar: hydroelectricity, wood, biomass, wind, photovoltaic electricity, ethanol and biodiesel (again, often energy-subsidized from fossil-solar sources); and
– Nuclear.
Footnote: The author ends with support for windmills and solar panels, but drops nuclear without explanation. Also, there are presently unsolved problems in substituting those intermittent power sources for fossil fuels. Details are at Climateers Tilting at Windmills
Fortunately, we have time to adapt to the ongoing slight fluctuations in weather while assembling the longer-term transition to nuclear and other energy sources. Unfortunately, a recent study of energy subsidies in the US shows only a small amount is directed toward nuclear, and the sole purpose is decommissioning.
WASHINGTON, D.C. – U.S. Secretary of Energy Rick Perry announced today that the U.S. Department of Energy (DOE) has selected 13 projects to receive approximately $60 million in federal funding for cost-shared research and development for advanced nuclear technologies. These selections are the first under DOE’s Office of Nuclear Energy’s U.S. Industry Opportunities for Advanced Nuclear Technology Development funding opportunity announcement (FOA), and subsequent quarterly application review and selection processes will be conducted over the next five years. DOE intends to apply up to $40 million of additional FY 2018 funding to the next two quarterly award cycles for innovative proposals under this FOA.
“Promoting early-stage investment in advanced nuclear power technology will support a strong, domestic, nuclear energy industry now and into the future,” said Secretary Perry. “Making these new investments is an important step to reviving and revitalizing nuclear energy, and ensuring that our nation continues to benefit from this clean, reliable, resilient source of electricity. Supporting existing as well as advanced reactor development will pave the way to a safer, more efficient, and clean baseload energy that supports the U.S. economy and energy independence.”
Albertans pay around five cents a kilowatt hour, compared to the up to 18 cents Ontarians experienced. But for how long? Postmedia News
Kevin Libin writes in Financial Post: Alberta’s now copying Ontario’s disastrous electricity policies. What could go wrong? Get ready, Albertans, a new report reveals that all the thrills and spills that follow when politicians start meddling in a boring, but well-functioning electricity market are coming your way. Excerpts in italics below with my bolds.
A report released Thursday by the University of Calgary’s School of Public Policy gives a sneak peek of how the Alberta script could play out. It begins once again with a “progressive” government convinced that its legacy lies in climate activism, out to redesign an electricity grid from something meant to provide affordable, reliable power into a showpiece of uncompetitive solar and wind power. And like Ontario, the Alberta NDP is determined to turn its provincial electricity grid into not just a green project that ignores economics, but an affirmative-action diversity project that sets aside certain renewable deals for producers owned by First Nations.
Alberta Premier Rachel Notley’s plan, like McGuinty’s, is to phase out all of Alberta’s cheap, abundant but terribly uncool coal-fired power (by 2030, in Alberta’s case) and force onto the grid instead large amounts of unreliable, expensive solar and wind power. Albertans have been so preoccupied fighting through a barrage of energy woes since Notley’s NDP was elected — the oil-price crash, government-imposed carbon taxes and emission caps, blocked and cancelled pipelines and the Trudeau government’s wholesale politicization of energy regulation — that they probably haven’t realized yet how vast an overhaul Notley was talking about when she began revealing this plan in 2015. But the report’s author, Brian Livingston, an engineer and lawyer with deep experience in the energy business in Alberta, runs through the shocking numbers: As of last year, Alberta’s grid had a capacity of roughly 17,000 megawatts, but the envisioned grid of 2032 will require nearly 13,000 megawatts that do not currently exist. Think of it as rebuilding 75 per cent of Alberta’s current grid in less than 15 years. Hey, what could go wrong?
Alberta Electricity System Operator is planning for so much wind power that the province will blow past Ontario, a province three times its size. Postmedia News
And if Ontarians thought their government was obsessed with green power, Livingston notes that the Alberta Electricity System Operator is planning for so much wind power that the province will blow past Ontario, a province three times its size, with 5,000 megawatts of wind compared to Ontario’s 4,213 megawatts, and nearly twice as much solar power, 700 megawatts, compared to Ontario’s 380 megawatts.
Learning from McGuinty’s mistake, the Alberta NDP is smart enough to ensure the extra cost of all this uneconomic power won’t show up printed in black and white on consumers’ power bills, likely hoping that spares them the political fallout that now threatens the Ontario Liberals. Rather than ratepayers shouldering the pain, it will be taxpayers — largely the same people — who pay most for any additional costs through added deficits and debts, at least for the next few years. That’s because Notley has ordered a temporary cap on household electricity rates of 6.8 cents per kilowatt hour (which is still significantly higher than the current rate). When wholesale rates rise higher than that, the government will use carbon-tax revenues to pay the difference. But businesses pay full freight from the get go.
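The cap mechanism described above is simple arithmetic: whenever the wholesale rate exceeds 6.8 cents per kilowatt hour, carbon-tax revenue covers the gap. A minimal sketch follows; the wholesale price and household consumption figures are hypothetical, chosen only to illustrate how the taxpayer-funded top-up scales.

```python
# Sketch of Alberta's rate-cap mechanism as described above: households pay
# at most 6.8 cents/kWh; when the wholesale rate exceeds the cap, carbon-tax
# revenue covers the difference. The example prices and consumption are
# hypothetical, for illustration only.

CAP_CENTS_PER_KWH = 6.8

def taxpayer_topup(wholesale_cents_per_kwh, kwh_consumed):
    """Subsidy (in dollars) paid from carbon-tax revenue for one household."""
    gap = max(0.0, wholesale_cents_per_kwh - CAP_CENTS_PER_KWH)
    return gap * kwh_consumed / 100.0  # cents -> dollars

# A household using 600 kWh in a month when wholesale spikes to 12 cents:
print(f"${taxpayer_topup(12.0, 600):.2f} covered by taxpayers")  # $31.20
```

Multiplied across a million-plus households, even a few cents of gap per kilowatt hour produces the tens of millions in annual costs the C.D. Howe Institute describes.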
Hiding from the real costs of using energy is a curious move for a government that gives away energy-efficient light bulbs and other products designed to conserve while imposing carbon taxes to try to suppress energy use. It’s also a costly move. The C. D. Howe Institute estimates it will cost Alberta taxpayers up to $50 million this year alone; a recent report from electricity consultants at EDC Associates estimates that by 2021, the extra costs moved off electric bills and onto tax bills will total $700 million. That’s when the price cap expires and costs could start showing up on power bills instead.
Of course, Ontario has proven that it’s easy to underestimate how expensive these political experiments can get, but the Alberta redesign is already getting pricey. First, Notley accidentally stuck Alberta consumers with nearly $2 billion in extra surcharges when she rewrote carbon policies without realizing that gave producers the right to cancel unprofitable contracts. Her plan also requires the government to create a new “capacity” payment system for electricity producers, who will be able to charge substantial sums even if they don’t produce a single watt. Livingston shows that many producers can earn almost as much just for offering capacity to the grid as they do for producing. Meanwhile, since solar power is perennially and embarrassingly uncompetitive economically, even compared with expensive wind power, the government plans to let solar providers sell electricity at premium rates to government facilities, with taxpayers covering that cost, too, just as they’ll cover the cost of overpriced wind power, which doesn’t approach the affordability of fossil fuels.
In his report, Livingston drily notes that the way Albertans think of the future of their electricity system could probably be summed up as: “Whatever we do here in Alberta, please let us not do it like they did it in Ontario.” They have reason to fear, since Livingston shows Ontario households have faced rates as much as four times higher than those in Alberta. Even if it doesn’t look exactly like the way they did things in Ontario, that doesn’t mean it still can’t go very wrong. Whenever progressive politics infests the electrical grid, people always pay for it in the end.
The Trans-Alaska Pipeline System’s 420 miles of above-ground segments are built in a zig-zag configuration to allow for expansion or contraction of the pipe.
In their near-religious belief about demonic CO2, activists are obstructing construction or expansion of pipelines delivering the energy undergirding our civilization. The story is explained by James Conca in a recent Forbes article Supersize It! Building Bigger Pipelines Over Old Ones Is A Good Idea Excerpts below with my bolds.
The idea of replacing an older small pipe with a wider new pipe is not new. But it really makes a difference when applied to the oil and natural gas pipelines that crisscross North America.
The concept of supersizing pipelines is taking hold in the United States and Canada to address the growing dearth of pipelines in areas where oil production is increasing but can’t get to the refineries in the Gulf.
At the same time, natural gas use is increasing in places that don’t have many pipelines and where the public is strongly against building new pipelines.
A bigger pipeline along the same route as the older smaller one is cheaper to build than a bunch of new small ones. It also falls under the existing permits and rights-of-way of the old pipe. And the public doesn’t have to approve any new pipeline routes.
Supersizing also increases safety and ensures less environmental impact since you’re not building additional lines, but replacing an old one with a new one, and older lines are more likely to leak or fail completely, like the Colonial Pipeline Spill.
Not only do we need more crude oil pipelines, the ones we have are quite old. But they can be supersized without finding and permitting new routes, which saves a lot of money and time and doesn’t increase their environmental footprint. Of course, if there are no pipelines to supersize, as in much of New England and New York, that’s a whole other problem. Photo is of a section of the Alaska Pipeline.
Increasing the diameter of a pipe increases the flow within it much more than you might think. In fact, the volume of flow per minute goes as the fourth power of the ratio of the new diameter to the old. This means that if you double the diameter, you increase the flow by 16 times (2^4 = 2 x 2 x 2 x 2 = 16).
Enbridge is a good example. The company has replaced a 26-inch-diameter shale gas line running from Pennsylvania into New England with a 42-inch line. That doesn’t seem like much; it’s only 1.6 times as big. But raised to the power of four, this new pipeline can carry six times as much gas as the old one.
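The fourth-power rule the article cites is easy to check numerically. A minimal sketch, assuming the laminar (Hagen-Poiseuille) scaling the article uses; fully turbulent gas flow scales somewhat less steeply, so treat this as an upper-bound estimate rather than an exact pipeline capacity:

```python
# Fourth-power flow-scaling rule cited in the article: volumetric flow
# scales as (d_new / d_old)^4 under the laminar-flow assumption.

def flow_ratio(d_new_in, d_old_in):
    """Relative volumetric flow when a pipe of d_old is replaced by d_new."""
    return (d_new_in / d_old_in) ** 4

print(f"Doubling diameter: {flow_ratio(2, 1):.0f}x")        # 16x
print(f"Enbridge 26in -> 42in: {flow_ratio(42, 26):.1f}x")  # ~6.8x, roughly the 'six times' cited
```

The exact ratio (42/26)^4 works out to about 6.8, consistent with the roughly sixfold capacity gain described above.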
‘Once the pipe is in the ground, you can do a lot of things: reverse flows, expand it, optimize it,’ said Al Monaco, President and Chief Executive of Enbridge.
These types of upgrades have essentially made the controversial portion of the Keystone XL pipeline unnecessary.
Of course, if you don’t have old pipelines to begin with, supersizing doesn’t help. Parts of the U.S. simply need more pipelines, especially where fracking has unleashed huge amounts of oil and gas in areas that didn’t have a lot of pipelines to begin with.
New England is a good example.
New England is closing its nuclear plants faster than you can say “who cares about science,” and replacing them with natural gas and renewables. But there are hardly any gas pipelines in the region, so it is forced to import liquefied natural gas from places like Russia, or to fire up dirty oil plants when demand is high, as it was this past winter. When this happens, customers in New England pay a high price for their electricity, sometimes ten times the normal price.
Yet, the people of New England do not want more gas pipelines.
The citizens of New England also want more renewables, whose electricity has to be imported into the northeast from other regions, especially hydropower from Canada. But this requires installing high-voltage transmission lines, which the public keeps voting down.
When the public does not listen to scientists and experts, things don’t go well. This problem is engulfing the energy sector. The public seems to care about climate change, but doesn’t want to do what the climate scientists advise – to increase nuclear and renewables – because it rejects them, or their infrastructure requirements, out of ignorance.
And they aren’t putting solar panels on their roofs very much either.
So next year, they will wonder why electricity costs keep going up, their carbon emissions keep going up, and their black-outs keep getting longer.
Natural gas requires the least steel and concrete of any energy source, is cheap and quick to build, has cheap fuel, and is beloved by state legislatures. But you need pipelines to deliver it, and no one wants them in their backyard. Supersizing helps with this problem.
Natural gas is the fastest growing energy source in America. Replacing coal plants with gas plants is the only reason our carbon emissions are at a 27-year low. It’s the easiest power plant to build, requires the least amount of steel and concrete (see figure above), is the easiest to permit, will have cheap fuel for decades, and is vastly preferred over coal.
However, it requires pipelines to deliver the gas, and no one seems to like them. You can ship gas as liquefied natural gas (LNG), which is what happens a lot in New England, but that’s three times more expensive and we have very limited LNG facilities.
This is a classic case of the public misunderstanding how the real world works. These issues are somewhat complex and you have to juggle a lot of conflicting issues. Our oil pipelines themselves are generally old (see figure above). Half of them are older than 40 years. Some date from before World War One.
They need to be replaced, and new ones built, or we risk the very environmental damage feared by those who don’t want them at all. Just supersizing the oldest half would be tremendous, and remove most of the need for additional pipelines.
Our pipelines, refineries and transmission lines are all at capacity. When problems occur, there’s no back-up plan. We have over 3,000 blackouts a year, the highest in the developed world. It’s why the American Society of Civil Engineers gave America a D+ on our 2017 Infrastructure Report Card.
Somewhere around 1985, we became the richest, most powerful nation in the history of the world. But we still have to work hard to remain the best. It doesn’t just maintain itself. And infrastructure is where long-term neglect first shows, often catastrophically.
So we need to upgrade our transmission lines and transformers, build new non-fossil power plants and supersize our pipelines.
We can’t afford not to.
Dr. James Conca is an expert on energy, nuclear and dirty bombs, a planetary geologist, and a professional speaker.
Footnote:
Dr. Conca is writing from an American perspective, while in Canada a major pipeline supersizing project is being resisted by one province, British Columbia, despite the pipeline gaining federal approval after a long, tortuous environmental assessment process.
The BC Premier is holding power in a coalition government beholden to a few Greens, who are adamant about “keeping it in the ground.”
“The current developments are a real test of Canada’s commitment to the rule of law and the ability of any resource company to rely on the legal approval process for projects,” said Dwight Newman, one of Canada’s top constitutional scholars and Munk senior fellow at the Macdonald-Laurier Institute.
To diminish the importance to Alberta and Canada of the Trans Mountain project, Horgan refers to it as a Texas project, adopting the pejorative jargon of eco-activists dependent on U.S. funding and organizational support to run their campaigns.
“The interest of the Texas boardrooms are not the interests of British Columbians,” Horgan said. In fact, Kinder Morgan Canada has been publicly traded since last year and is 77-per-cent owned by Canadians, with Manulife and TD the biggest shareholders. Its shippers are predominantly Canadian oilsands producers and Canada as a whole benefits from the pipeline’s opening of the Asian market.
Horgan’s actions have already triggered a trade war with Alberta, fomented irrational fears about oil spills, put the spotlight on B.C. for all the wrong reasons, and exposed his province to potentially harsh retribution from both Alberta and Ottawa. If Horgan doesn’t see that, he’s not looking.
There’s been a lot of crazy talk regarding energy coming from the Golden State, but there are also serious scientists in California, especially at Cal Tech, where Steven Koonin studied, taught and served as Provost. This recent announcement caught my eye: Scientists breed bacteria that make tiny high-energy carbon rings. Text below with my bolds
Caltech scientists have created a strain of bacteria that can make small but energy-packed carbon rings that are useful starting materials for creating other chemicals and materials. These rings, which are otherwise particularly difficult to prepare, now can be “brewed” in much the same way as beer.
The bacteria were created by researchers in the lab of Frances Arnold, Caltech’s Linus Pauling Professor of Chemical Engineering, Bioengineering and Biochemistry, using directed evolution, a technique Arnold developed in the 1990s. The technique allows scientists to quickly and easily breed bacteria with the traits that they desire. It has previously been used by Arnold’s lab to evolve bacteria that create carbon-silicon and carbon-boron bonds, neither of which is found among organisms in the natural world. Using this same technique, they set out to build the tiny carbon rings rarely seen in nature.
Familiar energetic organic compounds found in nature.
“Bacteria can now churn out these versatile, energy-rich organic structures,” Arnold says. “With new lab-evolved enzymes, the microbes make precisely configured strained rings that chemists struggle to make.”
In a paper published this month in the journal Science, the researchers describe how they have now coaxed Escherichia coli bacteria into creating bicyclobutanes, a group of chemicals that contain four carbon atoms arranged so they form two triangles that share a side. To visualize its shape, imagine a square piece of paper that’s lightly creased along a diagonal.
Source: Wikipedia
Bicyclobutanes are difficult to make because the bonds between the carbon atoms are bent at angles that put them under a great deal of strain. Bending these bonds away from their natural shape takes a lot of energy and can result in unwanted byproducts if the conditions for their synthesis aren’t just right. But it’s the strain that makes bicyclobutanes so useful. The bent bonds act like tightly wound springs: they pack a lot of energy that can be used to drive chemical reactions, making bicyclobutanes useful precursors to a variety of chemical products, such as pharmaceuticals, agrochemicals, and materials. When strained rings, like bicyclobutanes, are incorporated into larger molecules, they can imbue those molecules with interesting properties—for example, the ability to conduct electricity but only when an external force is applied—making them potentially useful for creating smart materials that are responsive to their environments.
Unlike other carbon rings, such as cyclohexanes and cyclopentanes, bicyclobutanes are rarely found in nature. This could be due to their inherent instability or the lack of suitable biological machinery for their assembly. But now, Arnold and her team have shown that bacteria can be genetically reprogrammed to produce bicyclobutanes from simple commercial starting materials. As the E. coli cells go about their bacterial business, they churn out bicyclobutanes. The setup is kind of like adding sugar to yeast and letting it ferment into alcohol.
“To our surprise, the enzymes can be engineered to efficiently make such crazy carbon rings under ambient conditions,” says graduate student Kai Chen, lead author on the paper. “This is the first time anyone has introduced a non-native pathway for bacteria to forge these high-energy structures.”
Chen and his colleagues, postdocs Xiongyi Huang, Jennifer Kan, and graduate student Ruijie Zhang, did this by giving the bacteria a copy of a gene that encodes an enzyme called cytochrome P450. The enzyme had previously been modified through directed evolution by the Arnold lab and others to create molecules containing small rings of three carbon atoms—essentially half of a bicyclobutane group.
“The beauty is that a well-defined active-site environment was crafted in the enzyme to greatly facilitate formation of these high-energy molecules,” Huang says.
The precision with which the bacterial enzymes do their work also allows the researchers to efficiently make the exact strained rings they want, with a precise configuration and in a single chiral form. Chirality is a property of molecules in which they can be “right-handed” or “left-handed,” with each form being the mirror image of the other. It matters because living things are selective about which “handedness” of a molecule they use or produce. For instance, all living things exclusively use the right-handed form of the sugar ribose (the backbone of RNA), and many chiral pharmaceutical chemicals are only effective in one handedness; in the other, they can be toxic.
Chiral forms of a molecule are difficult to separate from one another, but by changing the genetic code of the bacteria, the researchers can ensure the enzymes favor one chiral product over another. Mutations in the genes tune the enzymes to forge a broad range of bicyclobutanes with high precision.
Kan says advancements like theirs are pushing chemistry in a greener direction.
“In the future, instead of building chemical plants for making the products we need to improve lives, wouldn’t it be great if we could just program bacteria to make what we want?” Kan says.
The paper, titled “Enzymatic Construction of Highly Strained Carbocycles,” appears in the April 5 issue of Science.
The interprovincial Canadian war over a bitumen pipeline is taking on Shakespearean drama as reported in the Globe and Mail: The symbolism of Kinder Morgan’s Trans Mountain pipeline. The article is by Arno Kopecky, an environmental journalist and author based in Vancouver. Excerpts below with my bolds and images introducing the principal players.
Spare a thought for Rachel Notley. While you’re at it, spare another for John Horgan and Justin Trudeau. Three star-crossed allies, progressives all, steering ships through a Kinder Morgan tempest no pundit can describe without saying “collision course.” Shakespearean, ain’t it?
There’s tragedy, comedy and irony galore. Ms. Notley’s take on A Midsummer Night’s Dream with her midwinter ban on British Columbia wines – which was lifted on Thursday – lent itself so well to “Reign of Terroir” jokes that it can only end up raising the provincial wine industry’s profile. As for Alberta’s oil industry, this is more like Much Ado About Nothing. Whether the oil sands grow or shrink has much less to do with any one pipeline (even one that leads to almighty tidewater) than the global price of oil. What about all those Kinder Morgan jobs? Comedy. Anyone serious about creating oil-sector jobs for Canadians would be pushing to refine bitumen at home instead of exporting it raw. That’s why Unifor, the biggest union in the oil sands, intervened against the project in the NEB hearings.
But the notion that Kinder Morgan’s Trans Mountain pipeline expansion would add carbon to the atmosphere is comedy, too. If Kinder Morgan isn’t built, trains will keep moving the bitumen they’re already moving, at least until a higher force than pipeline capacity reduces Fort McMurray’s output. Everyone just makes less money that way.
When a country already has more than 840,000 kilometres of pipeline running through it, the fight over roughly 1,000 new kilometres is symbolic for both sides. But symbols matter. Now Trans Mountain has come to symbolize everything from the oil sands to climate change and reconciliation, and everyone’s job is at stake.
Premier Rachel Notley of oil rich, but landlocked and economically struggling Alberta.
None more than Ms. Notley’s, our likeliest candidate for tragedy. Alberta’s most progressive premier in more than 30 years, the woman who imposed a provincial carbon tax and raised royalties on oil sands operators and lifted Alberta’s minimum wage from the lowest to the highest in the country, Rachel Notley will not be replaced by someone nicer. Alberta’s profoundly oil-positive United Conservative Party, freshly merged and braying at her heels, threaten every last NDP policy with a Trumpian corrective. They’ll probably win the next election, too. Ms. Notley’s only hope is in proving to Albertans she can fight as dirty as any conservative would to protect the Symbol.
British Columbia Premier John Horgan, who recently formed a government propped up by a few Green party MPs.
Enter John Horgan, stage left. Poor guy. He’s running a province whose biggest, greenest city overwhelmingly voted against a once-in-a-generation opportunity to massively expand public transit, but has already proved itself willing to get arrested en masse in anti-Kinder Morgan protests. Mr. Horgan’s first major decision as Premier, the tortured approval of the Site C dam, earned him the condemnation of every environmentalist and First Nation leader in the province, if not the country. Now that he’s following through on his campaign promise to “use every tool in our tool box” against Kinder Morgan, fans and critics are trading placards. Never mind that Site C will keep far more carbon in the ground than any thwarted pipeline.
The thing is, pipeline battles on the coast aren’t about pipelines or even climate change. They’re about oil tankers. Want symbols? Wild salmon and orca populations are rapidly approaching extinction in southern B.C. Yes, oil tankers do already ply these waters. No, we don’t love hearing that the only way to pay for sorely lacking coastal protection is to heighten the risk of an oil spill by tripling the number of tankers. But that’s the deal.
Canadian Prime Minister Justin Trudeau currently touring India.
Enter Justin Trudeau, our doomed and dashing Hamlet, haunted by the ghost of his father, asking not to be or not to be, but can the ends justify the means? The greater the ends, it seems, the crueler the means. For all his capacity to renege on inconvenient promises, Mr. Trudeau clearly does regard the fight against climate change as a Very Great End. He knows we’re losing the glaciers whose meltwater irrigates half of Canada’s agriculture; he’s aware of our metastasizing cycle of flood and forest fire; he’s already dealt with one wave of climate refugees, from Syria (yes – that war was largely triggered by a calamitous drought that beggared a million farmers); he knows this is just the beginning. (Note: The reporter and Trudeau believe things cited in this paragraph contrary to evidence, but ignorance and hubris are essential to any tragedy.)
Against all that, he weighs the spill risk of one new pipeline, twinned to a pre-existing condition, with a corresponding increase in tanker traffic through relatively safe waters in which oil tankers can so far boast a 100-per-cent safety rate. Without Kinder Morgan, Mr. Trudeau mutters, pacing the stage from left to right, you lose Alberta; without Alberta, you lose your national climate-change strategy, your coastal protections, your whole progressive agenda. You lose everything.
Enter, in our closing act, B.C.’s coastal First Nations. Of course, they’ve been here all along – Mr. Trudeau made them some promises, too. But so far, his definition of consultation looks a lot like the old one: a process to determine not if a project should proceed on Indigenous territory, but when. The courts may yet cancel Trans Mountain because of it, as they did Northern Gateway. That’s probably Mr. Trudeau’s best hope for a happy ending. He’s created his own Birnam Wood, an army of First Nations and their allies ready to lead the march to Dunsinane Hill, aka Burnaby Mountain and the terminus of the pipeline, for the biggest act of civil disobedience our generation’s seen.
Grasshopper begs for food from the ant, but is turned away.
Aesop’s Fable of The Ant and the Grasshopper
In a field one summer’s day a Grasshopper was hopping about, chirping and singing to its heart’s content. An Ant passed by, bearing along with great toil an ear of corn he was taking to the nest.
“Why not come and chat with me,” said the Grasshopper, “instead of toiling and moiling in that way?”
“I am helping to lay up food for the winter,” said the Ant, “and recommend you to do the same.”
“Why bother about winter?” said the Grasshopper; “We have got plenty of food at present.” But the Ant went on its way and continued its toil.
When the winter came the Grasshopper had no food and found itself dying of hunger – while it saw the ants distributing every day corn and grain from the stores they had collected in the summer. Then the Grasshopper knew: It is best to prepare for days of need.
Children are told this story and given the moral truth that warm summer days (good times) don’t last and wise people “make hay while the sun shines.” In the longer version of the story, the starving grasshopper begs for food from the ants and is turned away.
Climate Grasshoppers
Today we are seeing climate alarmists in the role of grasshoppers. They presume future warming is certain and want to stop anyone from storing energy reserves. A post yesterday (NIMBY Wars) reported on west coast grasshoppers in Oregon and British Columbia acting against fossil fuel transport and storage. Today is a report on New England’s energy outlook facing the return of normal winters.
U.S. infrastructure promises to be a top priority for the Trump administration in 2018. In his State of American Energy keynote address, API President and CEO Jack Gerard highlighted how resistance to infrastructure development has left New Englanders with some of the highest electricity costs in the nation, particularly during extreme winters.
In December and early January, New England’s wholesale electricity prices averaged nearly three times those of equally-frigid Chicago. Over the past four years, New England’s wholesale electricity prices averaged 24 percent higher than those in Chicago and were nearly three times more volatile.
While price volatility provides signals that are important to the efficiency of wholesale energy markets, the disproportionate rise of the region’s winter prices stems from New York’s beggar-thy-neighbor policies, which blocked planned, much needed and federally approved new pipeline capacity. Politics and policies that accentuate high and volatile natural gas and electricity prices, not only in New York but also in New England, harm consumers and undermine regional investment, jobs and competitiveness. It is high time that New York and New England took a collective and collaborative approach to solving the problems with permitting and with the incentives utilities have to contract longer-term for natural gas.
Project abandoned in 2017
In the first week of January, contrasting headlines proclaimed natural gas got bomb-cycloned in the new year, with daily New York (Transco Zone 6) prices spiking as high as $140 per million BTU (MMBtu). Yet the U.S. Energy Information Administration (EIA) reduced its 2018 price forecast to $2.88/MMBtu at Henry Hub, Louisiana. The low Henry Hub price represents a major natural gas production region and liquid trading hub. By contrast, the high New York City price reflects a major natural gas consuming region that could have ample low-cost natural gas nearby, except that it is stymied by a lack of effective political and regulatory mechanisms to enable responsible resource and infrastructure development.
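As a back-of-envelope check on the two prices quoted above (illustrative arithmetic only, not from the source data):

```python
# Rough ratio of the New York spot-price spike to EIA's 2018 Henry Hub
# forecast, using the figures quoted in the text above.
ny_spike = 140.0   # $/MMBtu, Transco Zone 6 daily peak, early January
henry_hub = 2.88   # $/MMBtu, EIA 2018 forecast at Henry Hub, Louisiana

ratio = ny_spike / henry_hub
print(f"The New York spike was roughly {ratio:.0f}x the Henry Hub forecast")
```

That gap of nearly fifty-fold between the benchmark hub and a pipeline-constrained city is the point of the comparison: the gas itself is cheap; delivering it is not.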
The rise in natural gas prices has reverberated through electricity markets. For example, the chart below compares wholesale electricity prices for December through early January between Boston and equally-frigid Chicago. Chicago is amply supplied by natural gas pipelines that flow from the U.S. Gulf Coast, Appalachia, and Rockies regions plus Canada. Based on data from Bloomberg, Chicago’s wholesale electricity prices between Dec. 1 and Jan. 8 averaged $31.31/MWh with a standard deviation of more than $35/MWh. By comparison, Boston’s prices averaged $85.36/MWh with a standard deviation of $67.64/MWh over the same period. New England utilities thus paid more than twice as much for electricity and experienced about twice the price volatility.
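The Boston-versus-Chicago comparison above can be reduced to two ratios, a minimal sketch using only the figures quoted from the Bloomberg data (the Chicago standard deviation is reported only as "more than $35/MWh", so $35 is taken as a conservative floor):

```python
# Wholesale electricity statistics quoted above, Dec. 1 - Jan. 8 ($/MWh).
chicago = {"mean": 31.31, "std": 35.0}    # std is a conservative floor
boston = {"mean": 85.36, "std": 67.64}

price_ratio = boston["mean"] / chicago["mean"]
volatility_ratio = boston["std"] / chicago["std"]

print(f"Boston's average price was {price_ratio:.1f}x Chicago's")
print(f"Boston's price volatility was {volatility_ratio:.1f}x Chicago's")
```

The average-price ratio works out closer to 2.7x than 2x, so "more than twice as much" in the text is, if anything, an understatement.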
Project abandoned in April 2016.
The region’s relatively high and volatile energy prices are rooted in the inadequacy of natural gas infrastructure to meet peak seasonal demand and, to a lesser extent, the long-standing shift in market incentives toward utilities’ reliance mainly on spot markets and very short-term contracts for gas. Specifically, natural gas-fired generators set the region’s electricity price about 75 percent of the time. Gas-fired power generation in the Northeast census region rose by 53 percent between 2007 and 2017, according to the EIA short-term energy outlook. Natural gas-fired generation contributed 38.6 percent of the region’s electricity in 2017, compared with 24.3 percent in 2007, and gained market share due to its abundance, low cost and efficient and clean-burning environmental properties.
However, natural gas pipeline constraints have hampered utilization at peak times. For example, ISO New England recently highlighted that four gigawatts of natural gas-fired generation capacity – 24% of the region’s gas-fired net winter capacity – was at risk of not being able to get fuel when needed. Much of the constraint is due to New York state’s blocking the Constitution and Northern Access Pipeline projects, among others.
To compensate for New York’s un-neighborly policies, ISO New England performed triage for the past five years with its Winter Reliability Program, which has been paying as much as $32 million per year for reserves of oil and liquefied natural gas (LNG) as back-up fuels. Beginning this year, ISO New England will change to a Pay for Performance plan that alters how a generation resource’s capacity payments are calculated. These costly plans might not be needed if economic rationality and the overall region’s welfare were top of the New York state of mind.
New York and New England must take a collective and collaborative approach to solving the problems with permitting and with utilities’ incentives to contract longer-term for natural gas. Given policy alternatives that provide positive incentives, enable investment and infrastructure development, lower prices, and lessen price volatility, the dominant strategy for the region should be to pursue outcomes that are less risky, better facilitate market operations, and ultimately provide a win for the region’s consumers and economy.
There is a destructive “keep it in the ground” movement arising from environmental groups to block oil and natural gas development and the pipelines required to transport them. The center of this is New York and the New England states, where numerous policymakers seek to artificially constrain energy supply. Yet, despite producing no natural gas themselves, these states have seen their reliance on gas-fired electricity surge. ISO New England now gets about 50% of its power from gas, versus 10-15% a decade ago. New York sits in the same boat: gas now generates ~45% of the state’s electricity, doubling its market share since 2005.
It’s no wonder then that blocking energy development and pipelines has left residential power rates in New England and New York at least 50% higher than the national average. Industrial rates are more than double, which encourages companies to leave the area. This is unfortunate and illogical, since U.S. natural gas prices have been at historic lows.
Policies intended to reduce oil and gas consumption are dubious: even if they work in the short term, over time they lower oil and gas prices and thereby encourage more usage. Oil and gas are deeply ingrained in the U.S. and global economies, supplying over 60% of all energy. In the real world, this means that more economic growth ultimately means more oil and gas demand; that is why demand continues to grow year after year. There is no significant substitute for oil whatsoever. And natural gas power plants don’t get retired as more wind and solar power come online; more of them get built, because gas provides the flexible backup that intermittent sources require.
This is also why more oil and gas pipelines are so crucial: they are not only the most economical way to transport energy but also the safest. The Trump administration has called for a bill providing at least $1.5 trillion for the new infrastructure investment our country so desperately needs, and it’s safe to assume that much of that would include provisions for oil and gas infrastructure. But it’s increasingly obvious that the approval process for pipelines, which now takes an average of four years for permits to be issued, needs to be expedited.
Summary
Climatists are fixated on future warming and on blaming fossil fuels, the very energy society needs to face cold or stormy weather. Fault lies not only with New York state but also with obstructionists in states like Massachusetts and Connecticut. Climate change is not the problem; it is climate policies that threaten our collective security.