Too Many People, or Too Few?

A placard outside the UN headquarters in New York City, November 2011

Some years ago I read the book Boom, Bust and Echo. It described how planners for public institutions like schools and hospitals often fail to anticipate demographic shifts. In North America, the Baby Boom after WWII overcrowded schools, and governments struggled to build and staff more facilities. Just as they were catching up came the sexual revolution and the drop in fertility rates, producing a population Bust in children entering the education system. Now the issue was closing schools and retiring teachers due to overcapacity, never easy to do given sentimental attachments. Then, as the downsizing took hold, came the Echo: baby boomers began bearing children, and even at a lower birth rate that still meant an increased cohort of students arriving at a diminished system.

The story is similar to what is happening today with world population. Zachary Karabell writes in Foreign Affairs The Population Bust: Demographic Decline and the End of Capitalism as We Know It. Excerpts in italics with my bolds.

For most of human history, the world’s population grew so slowly that for most people alive, it would have felt static. Between the year 1 and 1700, the human population went from about 200 million to about 600 million; by 1800, it had barely hit one billion. Then, the population exploded, first in the United Kingdom and the United States, next in much of the rest of Europe, and eventually in Asia. By the late 1920s, it had hit two billion. It reached three billion around 1960 and then four billion around 1975. It has nearly doubled since then. There are now some 7.6 billion people living on the planet.

Just as much of the world has come to see rapid population growth as normal and expected, the trends are shifting again, this time into reverse. Most parts of the world are witnessing sharp and sudden contractions in either birthrates or absolute population. The only thing preventing the population in many countries from shrinking more quickly is that death rates are also falling, because people everywhere are living longer. These oscillations are not easy for any society to manage. “Rapid population acceleration and deceleration send shockwaves around the world wherever they occur and have shaped history in ways that are rarely appreciated,” the demographer Paul Morland writes in The Human Tide, his new history of demographics. Morland does not quite believe that “demography is destiny,” as the old adage mistakenly attributed to the French philosopher Auguste Comte would have it. Nor do Darrell Bricker and John Ibbitson, the authors of Empty Planet, a new book on the rapidly shifting demographics of the twenty-first century. But demographics are clearly part of destiny. If their role first in the rise of the West and now in the rise of the rest has been underappreciated, the potential consequences of plateauing and then shrinking populations in the decades ahead are almost wholly ignored.

The mismatch between expectations of a rapidly growing global population (and all the attendant effects on climate, capitalism, and geopolitics) and the reality of both slowing growth rates and absolute contraction is so great that it will pose a considerable threat in the decades ahead. Governments worldwide have evolved to meet the challenge of managing more people, not fewer and not older. Capitalism as a system is particularly vulnerable to a world of less population expansion; a significant portion of the economic growth that has driven capitalism over the past several centuries may have been simply a derivative of more people and younger people consuming more stuff. If the world ahead has fewer people, will there be any real economic growth? We are not only unprepared to answer that question; we are not even starting to ask it.

BOMB OR BUST?
At the heart of The Human Tide and Empty Planet, as well as demography in general, is the odd yet compelling work of the eighteenth-century British scholar Thomas Malthus. Malthus’ 1798 Essay on the Principle of Population argued that growing numbers of people were a looming threat to social and political stability. He was convinced that humans were destined to produce more people than the world could feed, dooming most of society to suffer from food scarcity while the very rich made sure their needs were met. In Malthus’ dire view, that would lead to starvation, privation, and war, which would eventually lead to population contraction, and then the depressing cycle would begin again.

Yet just as Malthus reached his conclusions, the world changed. Increased crop yields, improvements in sanitation, and accelerated urbanization led not to an endless cycle of impoverishment and contraction but to an explosion of global population in the nineteenth century. Morland provides a rigorous and detailed account of how, in the nineteenth century, global population reached its breakout from millennia of prior human history, during which the population had been stagnant, contracting, or inching forward. He starts with the observation that the population begins to grow rapidly when infant mortality declines. Eventually, fertility falls in response to lower infant mortality—but there is a considerable lag, which explains why societies in the modern world can experience such sharp and extreme surges in population. In other words, while infant mortality is high, women tend to give birth to many children, expecting at least some of them to die before reaching maturity. When infant mortality begins to drop, it takes several generations before fertility does, too. So a woman who gives birth to six children suddenly has six children who survive to adulthood instead of, say, three. Her daughters might also have six children each before the next generation of women adjusts, deciding to have smaller families.

The burgeoning of global population in the past two centuries followed almost precisely the patterns of industrialization, modernization, and, crucially, urbanization. It started in the United Kingdom at the end of the eighteenth century (hence the concerns of Malthus), before spreading to the United States and then France and Germany. The trend next hit Japan, India, and China and made its way to Latin America. It finally arrived in sub-Saharan Africa, which has seen its population surge thanks to improvements in medicine and sanitation but has not yet enjoyed the full fruits of industrialization and a rapidly growing middle class.

With the population explosion came a new wave of Malthusian fears, epitomized by the 1968 book The Population Bomb, by Paul Ehrlich, a biologist at Stanford University. Ehrlich argued that plummeting death rates had created an untenable situation of too many people who could not be fed or housed. “The battle to feed all of humanity is over,” he wrote. “In the 1970’s the world will undergo famines—hundreds of millions of people are going to starve to death in spite of any crash programs embarked on now.”

Ehrlich’s prophecy, of course, proved wrong, for reasons that Bricker and Ibbitson elegantly chart in Empty Planet. The green revolution, a series of innovations in agriculture that began in the early twentieth century, accelerated such that crop yields expanded to meet humankind’s needs. Moreover, governments around the world managed to remediate the worst effects of pollution and environmental degradation, at least in terms of daily living standards in multiple megacities, such as Beijing, Cairo, Mexico City, and New Delhi. These cities face acute challenges related to depleted water tables and industrial pollution, but there has been no crisis akin to what was anticipated.

Doesn’t anyone want my Green New Deal?

Yet visions of dystopic population bombs remain deeply entrenched, including at the center of global population calculations: in the forecasts routinely issued by the United Nations. Today, the UN predicts that global population will reach nearly ten billion by 2050. Judging from the evidence presented in Morland’s and Bricker and Ibbitson’s books, it seems likely that this estimate is too high, perhaps substantially. It’s not that anyone is purposely inflating the numbers. Governmental and international statistical agencies do not turn on a dime; they use formulas and assumptions that took years to formalize and will take years to alter. Until very recently, the population assumptions built into most models accurately reflected what was happening. But the sudden ebb of both birthrates and absolute population growth has happened too quickly for the models to adjust in real time. As Bricker and Ibbitson explain,

“The UN is employing a faulty model based on assumptions that worked in the past but that may not apply in the future.”

Population expectations aren’t merely of academic interest; they are a key element in how most societies and analysts think about the future of war and conflict. More acutely, they drive fears about climate change and environmental stability—especially as an emerging middle class numbering in the billions demands electricity, food, and all the other accoutrements of modern life and therefore produces more emissions and places greater strain on farms with nutrient-depleted soil and evaporating aquifers. Combined with warming-induced droughts, storms, and shifting weather patterns, these trends would appear to line up for some truly bad times ahead.

Except, argue Bricker and Ibbitson, those numbers and all the doomsday scenarios associated with them are likely wrong. As they write,

“We do not face the challenge of a population bomb but a population bust—a relentless, generation-after-generation culling of the human herd.”

Already, the signs of the coming bust are clear, at least according to the data that Bricker and Ibbitson marshal. Almost every country in Europe now has a fertility rate below the 2.1 births per woman that is needed to maintain a static population. The UN notes that in some European countries, the birthrate has increased in the past decade. But that has merely pushed the overall European birthrate up from 1.5 to 1.6, which means that the population of Europe will still grow older in the coming decades and contract as new births fail to compensate for deaths. That trend is well under way in Japan, whose population has already crested, and in Russia, where the same trends, plus high mortality rates for men, have led to a decline in the population.

What is striking is that the population bust is going global almost as quickly as the population boom did in the twentieth century. Fertility rates in China and India, which together account for nearly 40 percent of the world’s people, are now at or below replacement levels. So, too, are fertility rates in other populous countries, such as Brazil, Malaysia, Mexico, and Thailand. Sub-Saharan Africa remains an outlier in terms of demographics, as do some countries in the Middle East and South Asia, such as Pakistan, but in those places, as well, it is only a matter of time before they catch up, given that more women are becoming educated, more children are surviving their early years, and more people are moving to cities.

Both books note that the demographic collapse could be a bright spot for climate change. Given that carbon emissions are a direct result of more people needing and demanding more stuff—from food and water to cars and entertainment—then it would follow that fewer people would need and demand less. What’s more, larger proportions of the planet will be aging, and the experiences of Japan and the United States are showing that people consume less as they age. A smaller, older population spells some relief from the immense environmental strain of so many people living on one finite globe.

The Reinvention of Chess

That is the plus side of the demographic deflation. Whether the concomitant greening of the world will happen quickly enough to offset the worst-case climate scenarios is an open question—although current trends suggest that if humanity can get through the next 20 to 30 years without irreversibly damaging the ecosystem, the second half of the twenty-first century might be considerably brighter than most now assume.

The downside is that a sudden population contraction will place substantial strain on the global economic system.

Capitalism is, essentially, a system that maximizes more—more output, more goods, and more services. That makes sense, given that it evolved coincidentally with a population surge. The success of capitalism in providing more to more people is undeniable, as are its evident defects in providing every individual with enough. If global population stops expanding and then contracts, capitalism—a system implicitly predicated on ever-burgeoning numbers of people—will likely not be able to thrive in its current form. An aging population will consume more of certain goods, such as health care, but on the whole aging and then decreasing populations will consume less. So much of consumption occurs early in life, as people have children and buy homes, cars, and white goods. That is true not just in the more affluent parts of the world but also in any country that is seeing a middle-class surge.

But what happens when these trends halt or reverse? Think about the future cost of capital and assumptions of inflation. No capitalist economic system operates on the presumption that there will be zero or negative growth. No one deploys investment capital or loans expecting less tomorrow than today. But in a world of graying and shrinking populations, that is the most likely scenario, as Japan’s aging, graying, and shrinking absolute population now demonstrates. A world of zero to negative population growth is likely to be a world of zero to negative economic growth, because fewer and older people consume less. There is nothing inherently problematic about that, except for the fact that it will completely upend existing financial and economic systems. The future world may be one of enough food and abundant material goods relative to the population; it may also be one in which capitalism at best frays and at worst breaks down completely.

The global financial system is already exceedingly fragile, as evidenced by the 2008 financial crisis. A world with negative economic growth, industrial capacity in excess of what is needed, and trillions of dollars expecting returns when none is forthcoming could spell a series of financial crises. It could even spell the death of capitalism as we know it. As growth grinds to a halt, people may well start demanding a new and different economic system. Add in the effects of automation and artificial intelligence, which are already making millions of jobs redundant, and the result is likely a future in which capitalism is increasingly passé.

If population contraction were acknowledged as the most likely future, one could imagine policies that might preserve and even invigorate the basic contours of capitalism by setting much lower expectations of future returns and focusing society on reducing costs (which technology is already doing) rather than maximizing output.

But those policies would likely be met in the short term by furious opposition from business interests, policymakers, and governments, all of whom would claim that such attitudes are defeatist and could spell an end not just to growth but to prosperity and high standards of living, too. In the absence of such policies, the danger of the coming shift will be compounded by a complete failure to plan for it.

Different countries will reach the breaking point at different times. Right now, the demographic deflation is happening in rich societies that are able to bear the costs of slower or negative growth using the accumulated store of wealth that has been built up over generations. Some societies, such as the United States and Canada, are able to temporarily offset declining population with immigration, although soon, there won’t be enough immigrants left. As for the billions of people in the developing world, the hope is that they become rich before they become old. The alternative is not likely to be pretty: without sufficient per capita affluence, it will be extremely difficult for developing countries to support aging populations.

So the demographic future could end up being a glass half full, by ameliorating the worst effects of climate change and resource depletion, or a glass half empty, by ending capitalism as we know it. Either way, the reversal of population trends is a paradigm shift of the first order and one that is almost completely unrecognized. We are vaguely prepared for a world of more people; we are utterly unprepared for a world of fewer. That is our future, and we are heading there fast.

See also Control Population, Control the Climate. Not.

On Stable Electric Power: What You Need to Know


nzrobin commented on my previous post Big Wind Blacklisted that he had more to add. So this post provides excerpts from a 7 part series Anthony wrote at kiwithinker on Electric Power System Stability. Excerpts are in italics with my bolds to encourage you to go read the series of posts at kiwithinker.

1. Electrical Grid Stability is achieved by applying engineering concepts of power generation and grids.

Some types of generation provide grid stability; other types undermine it. Grid stability is an essential requirement for power supply reliability and security. However, there is insufficient understanding of what grid stability is and of the risk that exists if stability is undermined to the point of collapse. Increasing grid instability will lead to power outages. The stakes are very high.

2. Electric current is generated ‘on demand’. There is no stored electric current in the grid.

The three fundamental parts of a power system are:

its generators, which make the power,
its loads, which use the power, and
its grid, which connects them together.

The electric current delivered when you turn on a switch is generated from the instant you operate the switch. There is no store of electric current in the grid. Only certain generators can provide this instant ‘service’.

So if there is no storage in the grid, the amount of electric power being put into the grid has to very closely match that taken out. If not, voltage and frequency will move outside of safe margins, and if the imbalance is not corrected very quickly, the excursions will result in damage or outages, or both.

3. A stable power system is one that continuously responds to power/frequency disturbances and completes the required adjustments within an acceptable timeframe to adequately compensate for them.

Voltage is an important performance indicator and it should of course be kept within acceptable tolerances. However, voltage excursions tend to be reasonably local events. So while voltage excursions can happen from place to place, causing damage and disruption, voltage alone is not the main ‘system wide’ stability indicator.

The key performance indicator of an acceptably stable power system is its frequency being within a close margin from its target value, typically within 0.5 Hz from either 50 Hz or 60 Hz, and importantly, the rise and fall rate of frequency deviations need to be managed to achieve that narrow window.

An increasing frequency indicates more power is entering the system than is being taken out. Likewise, a reducing frequency indicates more is being taken out than is entering. For a power supply system to be stable it is necessary to control the frequency. Control systems continuously observe the frequency, and the rate of change of the frequency. The systems control generator outputs up or down to restore the frequency to the target window.

Of course energy imbalances of varying size are occurring all the time. Every moment of every day the load is continuously changing, generally following a daily load curve. These changes tend to be gradual and lead to a small rate of change of frequency. Now and then, however, faults occur. Large power imbalances mean a proportionately faster frequency change occurs, and consequently the response has to be bigger and faster, typically within two or three seconds if stability is to be maintained. If not, in a couple of blinks of an eye the power is off across the whole grid.

If the system can cope with the range of disturbances thrown at it, it is described as ‘stable’. If it cannot cope with the disturbances it is described as ‘unstable’.

4. There are two main types of alternating current machines used for the generation of electricity: synchronous and asynchronous. The difference between them begins with the way the magnetic field of the rotor interacts with the stator. Both types of machine can be used as either a generator or motor.

There are two key differences affecting their contribution to stability.

The kinetic energy of the synchronous machine’s rotor is closely coupled to the power system and therefore available for immediate conversion to power. The rotor kinetic energy of the asynchronous machine is decoupled from the system by virtue of its slip and is therefore not easily available to the system.

Synchronous generators are controllable by governors which monitor system frequency and adjust prime mover input to bring correction to frequency movements. Asynchronous generators are typically used in applications where the energy source is not controllable, eg: wind turbines. These generators cannot respond to frequency movements representing a system energy imbalance. They are instead a cause of energy imbalance.

Short-term stability

The spinning kinetic energy in the rotors of the synchronous machines is measured in megawatt seconds. Synchronous machines provide stability under power system imbalances because the kinetic energy of their rotors (and prime movers) is locked in synchronism with the grid through the magnetic field between the rotor and the stator. The provision of this energy is essential to short duration stability of the power system.
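
To give a feel for the megawatt-seconds involved, here is a minimal sketch with assumed machine parameters (the inertia, speed and rating below are illustrative numbers, not figures from the kiwithinker series):

```python
# Sketch: stored rotational energy of a synchronous unit, in MW-seconds.
# Kinetic energy E = 1/2 * J * w^2; the familiar inertia constant H is
# that energy divided by the machine's MVA rating.
# All numbers are illustrative assumptions.
import math

J = 30000.0                      # rotor + prime mover inertia, kg*m^2 (assumed)
rpm = 3000.0                     # 2-pole machine on a 50 Hz system
w = 2.0 * math.pi * rpm / 60.0   # shaft speed in rad/s

energy_mws = 0.5 * J * w**2 / 1e6     # joules converted to megawatt-seconds
s_rated_mva = 500.0                   # assumed machine rating
h_seconds = energy_mws / s_rated_mva  # seconds of rated output stored as spin

print(f"Stored energy      ~ {energy_mws:.0f} MWs")
print(f"Inertia constant H ~ {h_seconds:.1f} s")
```

With these assumed numbers the rotor stores roughly 1,500 MWs, around three seconds of the machine’s rated output, which is the buffer described above as essential to short duration stability.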

Longer-term stability

Longer term stability is managed by governor controls. These devices monitor system frequency (recall that the rate of system frequency change is proportional to energy imbalance) and automatically adjust machine power output to compensate for the imbalance and restore stability.

5. For a given level of power imbalance the rate of rise and fall of system frequency is directly dependent on synchronously connected angular momentum.

The rotational analogue of Newton’s second law of motion (Force = Mass * Acceleration) describes the power flow between the rotating inertia (rotational kinetic energy) of a synchronous generator and the power system. It applies for the first few seconds after the onset of a disturbance, i.e. before the governor and prime mover have had the opportunity to adjust the input power to the generator.

Pm – Pe = M * dw/dt

Pm is the mechanical power being applied to the rotor by the prime mover. We treat this as constant for the few seconds we are considering.

Pe is the electrical power being taken from the machine. This is variable.

M is the angular momentum of the rotor and the directly connected prime mover. We can also consider M a constant, although strictly speaking it isn’t constant because it depends on w. However as w is held within a small window, M does not vary more than a percent or so.

dw/dt is the rate of change of rotor speed, which relates directly to the rate of increasing or reducing frequency.

The machine is in equilibrium when Pm = Pe. This results in dw/dt being 0, which represents the rotor spinning at a constant speed. The frequency is constant.

When electrical load has been lost, Pe is less than Pm and the machine will accelerate, resulting in increasing frequency. Alternatively, when electrical load is added, Pe is greater than Pm and the machine will slow down, resulting in reducing frequency.

Here’s the key point, for a given level of power imbalance the rate of rise and fall of system frequency is directly dependent on synchronously connected angular momentum, M.

It should now be clear how central a role synchronously connected angular momentum plays in power system stability. It is the factor that determines how much time generator governors and automatic load shedding systems have to respond to the power flow variation and bring correction.
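
As a minimal numerical sketch of this point (in Python, with assumed values for M and the power imbalance rather than figures from the series), one can integrate Pm – Pe = M * dw/dt for the few seconds before any governor acts:

```python
# Sketch: integrate df/dt = (Pm - Pe) / M for the first seconds after a
# disturbance, before governor or load-shedding action. All numbers are
# illustrative assumptions.

def frequency_trajectory(pm_mw, pe_mw, m_mws_per_hz,
                         f0_hz=50.0, dt_s=0.5, duration_s=5.0):
    """Simple Euler integration of the swing relation."""
    f = f0_hz
    points = [(0.0, f)]
    for step in range(1, int(duration_s / dt_s) + 1):
        dfdt = (pm_mw - pe_mw) / m_mws_per_hz   # Hz per second
        f += dfdt * dt_s
        points.append((step * dt_s, f))
    return points

if __name__ == "__main__":
    # Assumed system: mechanical input 3,500 MW, load steps up to 3,700 MW.
    for m in (2000.0, 1000.0):                  # MWs/Hz of angular momentum
        final_t, final_f = frequency_trajectory(3500.0, 3700.0, m)[-1]
        print(f"M = {m:6.0f} MWs/Hz -> frequency after {final_t:.0f} s: {final_f:.2f} Hz")
```

Halving M doubles the rate of fall, which is the sense in which synchronously connected angular momentum buys the governors and load shedding schemes time to act.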

 

6. Generation Follows Demand. The machine governor acts in the system as a feedback controller. The governor’s purpose is to sense the shaft rotational speed and the rate of speed increase/decrease, and to adjust machine input via a gate control.

The governor’s job is to continuously monitor the rotational speed w of the shaft and the rate of change of shaft speed dw/dt, and to control the gate(s) to the prime mover. In the example of a hydro turbine, the control applied is to adjust the flow of water into the turbine, increasing or reducing the mechanical power Pm to compensate for the increase or reduction in electrical load, i.e. to approach equilibrium.

It should be pointed out that while the control systems aim for equilibrium, true equilibrium is never actually achieved. Disturbances are always happening and they have to be compensated for continuously, every second of every minute of every hour, 24 hours a day, 365 days a year, year after year.

The discussion has been for a single synchronous generator, whereas of course the grid has hundreds of generators. In order for each governor controlled generator to respond fairly and proportionately to a network power imbalance, governor control is implemented with what is called a ‘droop characteristic’. Without a droop characteristic, governor controlled generators would fight each other, each trying to control the frequency to its own setting. A droop characteristic provides a controlled increase in generator output, in inverse proportion to a small drop in frequency.

In New Zealand the normal operational frequency band is 49.8 to 50.2 Hz. An under frequency event is an event where the frequency drops to 49.25 Hz. It is the generators controlled by governors with a droop characteristic that pick up the load increase and thereby maintain stability. If it happens that the event is large and the governor response is insufficient to arrest the falling frequency, under frequency load shedding relays turn off load.
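
As a rough sketch of how a droop characteristic shares that pickup among units (the 4 percent droop figure, the function name and the machine ratings below are assumptions for illustration, not actual New Zealand settings):

```python
# Sketch: proportional governor response under a droop characteristic.
# With 4% droop, a frequency deviation of 4% of nominal (2 Hz on a 50 Hz
# system) would move a unit across its full rated output.
# Numbers are illustrative assumptions.

def droop_pickup_mw(delta_f_hz, p_rated_mw, droop=0.04, f_nominal_hz=50.0):
    """Extra output (MW) a governor calls for, given a frequency deviation."""
    return -(delta_f_hz / (droop * f_nominal_hz)) * p_rated_mw

if __name__ == "__main__":
    delta_f = 49.7 - 50.0                    # frequency has sagged by 0.3 Hz
    for rating in (100.0, 250.0, 400.0):     # three governor-controlled units
        print(f"{rating:5.0f} MW unit picks up {droop_pickup_mw(delta_f, rating):5.1f} MW")
```

Because each unit responds in proportion to its own rating at the same frequency deviation, the units share the correction rather than fighting over the setpoint, which is the point of the droop characteristic.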

Here is a record of an under frequency event earlier this month, where a power station tripped.

The generator tripped at point A, which started the frequency drop. The rate of drop dw/dt is determined by the size of the power imbalance divided by the synchronous angular momentum, (Pm – Pe)/M. In only 6 seconds the frequency drop was arrested at point B by other governor controlled generators and under frequency load shedding, and in about 6 further seconds additional power was generated, once again under the control of governors, and the frequency was restored to normal at point C. The whole event lasted merely 12 seconds.

So why would we care about a mere 12 second dip in frequency of less than 1 Hz? The reason is that without governor action and under frequency load shedding, a mere 12 second dip would instead have been a complete power blackout of the North Island of New Zealand.

Local officials standing outside a substation in Masterton, NZ.

7. An under frequency event on the North Island of New Zealand demonstrates how critical electrical system stability is.

The graph below, which is based on 5 minute load data from NZ’s System Operator, confirms that load shedding occurred. The North Island load can be seen to drop 300 MW, from 3700 MW at 9.50 to 3400 MW at 9.55. The load restoration phase can also be observed from this graph. From 10.15 through 10.40 the shed load is restored in several steps.

The high resolution data that we’ll be looking at more closely was recorded by a meter with power quality and transient disturbance recording capability. It is situated in Masterton, Wairarapa, about 300 km south of the power station that tripped. The meter is triggered to capture frequency excursions below 49.2 Hz. The graph below shows the captured excursion on June 15th. The graph covers a total period of only one minute. It shows the frequency and Masterton substation’s load. I have highlighted and numbered several parts of the frequency curve to help with the discussion.

The first element we’ll look at is element 1 to 2. The grid has just lost 310 MW generation and the frequency is falling. No governors nor load shedding will have responded yet. The frequency falls 0.192 Hz in 0.651 seconds giving a fall rate df/dt of -0.295 Hz/s. From this df/dt result and knowing the lost generation is 310 MW we can derive the system angular momentum M as 1,052 MWs/Hz from -310 = M * -0.295.

It is interesting (and chilling) to calculate how long it would take for blackout to occur if no corrective action is taken to restore system frequency and power balance. 47 Hz is the point where cascade tripping is expected. Most generators cannot operate safely below 47 Hz, and under frequency protection relays disconnect generators to protect them from damage. This sets 47 Hz as the point at which cascade outage and complete grid blackout is likely. A falling frequency of -0.295 Hz/s would only take 10.2 seconds to drop from 50 to 47 Hz. That’s not very long and obviously automated systems are required to arrest the decline. The two common automatic systems that have been in place for decades are governor controlled generators and various levels of load shedding.

The fall arrest between 4 and 5 has been due to automatic load shedding. New Zealand has a number of customers contracted to disconnect load at 49.2 Hz. From these figures we can estimate a net shed load of 214 MW (114 MW + 100 MW).

From 7 to 8 the frequency is increasing with df/dt of 0.111 Hz/s and the system has a surplus of 117 MW of generation. At point 8 the system reaches 50 Hz again, but then overshoots a little, and governor action works to reduce generation and control the overshoot between 8 and 9.

This analysis shows how system inertia, under frequency load shedding and governor action work together to maintain system stability.
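
For readers who want to retrace the arithmetic, this short sketch simply re-runs the figures quoted above: 310 MW lost, a 0.192 Hz fall over 0.651 seconds, the 47 Hz cascade threshold, and the later 0.111 Hz/s recovery.

```python
# Re-run the back-of-envelope figures from the Masterton event record.
lost_generation_mw = 310.0     # generation tripped at point A
d_freq_hz = -0.192             # frequency fall between points 1 and 2
d_time_s = 0.651               # seconds taken for that fall

dfdt = d_freq_hz / d_time_s                        # ~ -0.295 Hz/s
m_mws_per_hz = -lost_generation_mw / dfdt          # ~ 1,050 MWs/Hz (article rounds to 1,052)

# Time to coast from 50 Hz down to the 47 Hz cascade-tripping threshold
# if nothing at all responded.
seconds_to_47hz = (50.0 - 47.0) / abs(dfdt)        # ~ 10.2 s

# The recovery between points 7 and 8 at +0.111 Hz/s implies a
# generation surplus of roughly M * df/dt.
surplus_mw = m_mws_per_hz * 0.111                  # ~ 117 MW

print(f"df/dt         = {dfdt:.3f} Hz/s")
print(f"M             = {m_mws_per_hz:.0f} MWs/Hz")
print(f"Time to 47 Hz = {seconds_to_47hz:.1f} s")
print(f"Surplus       = {surplus_mw:.0f} MW")
```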

Summary: The key points

  • The system needs to be able to maintain stability second by second, every minute, every hour, every day, year after year. Yet when a major disturbance happens, the time available to respond is only a few seconds.
  • This highlights the essential role of system inertia in providing these precious few seconds. System inertia defines the relationship between power imbalance and frequency fall rate. The less inertia, the faster the collapse and the less time we have to respond. Nearly all system inertia is provided by synchronous generators.
  • Control of the input power to the generators by governor action is essential to control frequency and power balance, bringing correction to maintain stability. This requires control of the prime mover, which typically only hydro and thermal stations provide.
  • When the fall rate is too fast for governor response, automatic load shedding can provide a lump of very helpful correction, which the governors later tidy up by fine tuning the response.

Big Wind Blacklisted

What is wrong with wind farms? Let us count the ways.

Richard McCarty writes at Daily Torch Dear Congress, stop subsidizing wind like it’s 1999 and let the tax credit expire. Excerpts in italics with my bolds.

Congress created the production tax credit for wind energy in 1992. Under this credit, wind turbine owners receive a tax credit for each kilowatt hour of electricity their turbines generate, whether the electricity is needed or not. The production tax credit was supposed to have expired in 1999; but, instead, Congress has repeatedly extended it. After nearly three decades of propping up the wind industry, it is past time to let the tax credit expire in 2020.

All Congress needs to do is nothing.

Addressing the issue of wind production tax credits, Americans for Limited Government President Rick Manning stated, “Wind energy development is no longer a nascent industry, having grown from 0.7 percent of the grid in 2007 to 6.6 percent in 2018 at 275 billion kWh. The rationale behind the wind production tax credit has always been that it is necessary to attract investors.”

Manning added, “wind energy development has matured to the point where government subsidization of billionaires like Warren Buffett cannot be justified, neither from an energy production standpoint nor a fiscal one. Americans for Limited Government strongly urges Congress to end the Wind Production Tax Credit. The best part is, they only need to do nothing as it expires at the end of the year.”

There are plenty of reasons for ending the tax credit. Here are some of them:

  • Wind energy is unreliable. Wind turbines require winds of six to nine miles per hour to produce electricity; when wind speeds reach approximately 55 miles per hour, turbines shut down to prevent damage to the equipment. Wind turbines also shut down in extremely cold weather.
  • Due to this unreliability, relatively large amounts of backup power capacity must be kept available.
  • Wind energy often requires the construction of costly, new high-voltage transmission lines. This is because some of the best places to generate wind energy are in remote locations far from population centers or offshore.
  • Generating electricity from wind requires much more land than does coal, natural gas, nuclear, or even solar power. According to a 2017 study, generating one megawatt of electricity from coal, natural gas, or nuclear power requires about 12 acres; producing one megawatt of electricity from solar energy requires 43.5 acres; and harnessing wind energy to generate one megawatt of electricity requires 70.6 acres.
  • Wind turbines have a much shorter life span than other energy sources. According to the Department of Energy’s National Renewable Energy Laboratory, the useful life of a wind turbine is 20 years while coal, natural gas, nuclear, and hydroelectric power plants can remain in service for more than 50 years.
  • Wind power’s inefficiencies lead to higher rates for customers.
  • Higher electricity rates can have a chilling effect on the local economy. Increasing electricity rates for businesses makes them less competitive and can result in job losses or reduced investments in businesses.
  • Increasing rates on poor consumers can have an even more negative impact, sometimes forcing them to go without heat in the winter or air conditioning in the summer.
  • Wind turbines are a threat to aviators. Wind turbines are a particular concern for crop dusters, who must fly close to the ground to spray crops. Earlier this summer, a crop dusting plane clipped a wind turbine tower and crashed.
  • Wind turbines are deadly for birds and bats, which help control the pest population. Even if bats are not struck by the rotors, some evidence suggests that they may be injured or killed by the sudden drop in air pressure around wind turbines.

Large wind turbines endanger lives, the economy, and the environment. Even after decades of heavy subsidies, the wind industry has failed to solve these problems. For these and other reasons, Congress should finally allow the wind production tax credit to expire.

Richard McCarty is the Director of Research at Americans for Limited Government Foundation.

Update August 16, 2019

nzrobin commented with more technical detail about managing grid reliability. A new post provides a synopsis of his series on the subject: On Stable Electric Power: What You Need to Know.

EU Update: Pipelines and Pipe Dreams

Daniel Markind writes at Forbes The Nord Stream 2 Pipeline And The Dangers Of Moving Too Rashly Toward Renewable Energy. Excerpts in italics with my bolds.

Few Americans likely noticed last week that Denmark refused to grant a permit for finishing construction of the Russian natural gas pipeline Nord Stream 2, but its international significance is enormous. Denmark’s refusal is the latest chapter in a story of how good intentions in fighting climate change go bad. It is a cautionary tale of how a country – in this case, Germany – while seeking to make itself and its energy use cleaner, more efficient and more self-sufficient, can produce the opposite of all three. As climate change becomes more of an issue in America heading into the 2020 election season, Nord Stream 2 provides a case study of the potential peril we face when our desire to switch as rapidly as possible to cleaner energy overwhelms current scientific, technological, political and economic realities.

The back story of Nord Stream 2 involves the desire of Germany to be the world leader in clean energy. In 2010, Germany embarked on a program called “Energiewende” – meaning literally, energy transition. This was designed to transform the German energy economy from being based on fossil fuels to being based on so-called “renewables”. In practical effect, the German government refused to approve any energy project that did not involve renewable energy. Germany hoped that Energiewende would reduce drastically Germany’s CO2 emissions and also end the country’s reliance on fossil fuels. This would strike a blow both for German energy independence and for the fight against climate change.

It didn’t work. At first the country’s CO2 emissions fell, but Germany never was able to generate enough reliable renewable energy to sustain itself. Instead, partially because it had not properly planned for its energy needs during the transition period to full renewable energy, Germany had to fall back on coal produced in the formerly Communist eastern part of the country. Ironically, the renewed reliance on this coal, called “lignite”, only made Germany’s short-term pollution problems worse, as lignite is a peculiarly dirty form of coal. By 2015, despite closing nuclear power plants and preventing new fossil fuel energy investment, Germany’s CO2 emissions started again to increase. They eventually dropped in 2018, but few are confident that decrease will continue.

Worse still, prices for German energy kept soaring, becoming among the highest in Europe. Simultaneously, Germany’s energy needs became more dependent on natural gas from Russia. Mainly for political reasons, Russia hardly is a reliable energy source. It certainly is not an environmentally conscious one. Instead of making Germany more self-reliant and a world clean energy leader, Energiewende actually drove Germany further into the arms of Russia. In addition, it otherwise thwarted Germany’s goal of a rapid renewable energy transition.

Had it been available, a more attractive and environmentally beneficial choice for Germany would have been imports of abundant, readily available, and above all relatively clean natural gas from the Marcellus Shale region of Pennsylvania, Ohio and West Virginia – at least on an interim basis until renewable energy transition could catch up to the political and economic realities. While there is more than enough gas in Appalachia and Northeastern Pennsylvania to export overseas to places like Germany and not deplete supplies for domestic usage, American energy politics have prevented the needed pipeline and export infrastructure from being built. Simply put, without approved pipelines, the gas has no way to get from the point of production to ports where it can be shipped overseas. The Philadelphia area, which could be a center for the energy industry and for breaking Russia’s gas energy monopoly on Europe, remains woefully oblivious even of its possibilities.

The result is that instead of having natural gas transported to Germany from a NATO ally that drills and transports using stringent environmental safeguards, Germany now relies on Russia, a country that drills in a sensitive Arctic ecosystem with few environmental limitations. The money that could have gone to American companies, landowners and taxes goes instead to Gazprom, the Russian gas giant.

This still is not the end of the story. Germany receives its natural gas via pipelines that traverse Ukraine, Poland, and the Baltic States. Indeed the revenue to Ukraine for allowing transshipments of gas from Russia to Germany via existing overland pipelines within Ukraine’s borders constitutes over 2% of the total Ukrainian GDP. That mostly will end when Nord Stream 2 becomes operational. Nord Stream 2 will bypass the current overland route. That would largely cut out Ukraine, Poland and the Baltic States – all important United States and Western Europe allies.

Last July at the annual NATO summit, President Trump publicly excoriated German Chancellor Angela Merkel over Nord Stream 2. She rebuffed him, insisting on making her country more susceptible to Russian control while also upsetting other NATO allies. With the Nord Stream 2 pipeline currently 70% built and with the Ukrainian-Russian transshipment contract ending in 2020, it looked like all systems go.

Then Denmark stepped in. As one of the four countries that must approve the Nord Stream 2 pipeline route where it passes through their territorial waters in the Baltic Sea, the Danes refused to grant the final permits. They demanded the pipeline be moved farther away from the country. At the least, based on published projections that may even understate the impacts, Denmark’s decision means an additional cost of €740M and an eight month delay, going beyond the end date for the current Ukrainian transit contract. This now will need to be extended, giving some consolation to Ukraine.

Still, Nord Stream 2 likely will be completed eventually, and by the same Europeans who routinely preach the loudest about climate change.

It appears to be a loser in every way a pipeline can be.

Nord Stream 2 ties Germany closer to Russia, puts more money in the pockets of Gazprom, increases incentives for Russian President Vladimir Putin to ratchet up his environmentally unsound natural gas drilling in and transporting from the Arctic, gives Russia more ability to blackmail the West using its natural gas weapon, cuts out Western-leaning countries in Eastern Europe from needed revenue, and keeps money and investment out of the United States where it could go via exports from the Marcellus Shale deposits.

As always, reasonable people can argue about the wisdom of building new fossil fuel infrastructure when we hope to switch to renewables. However, given the current state of scientific knowledge and of world affairs, failure to do so also has real world adverse environmental, economic and political consequences.

To anyone serious about renewables and reducing our world-wide carbon footprint, the story of Energiewende and Nord Stream 2 should be studied carefully. Our desire to do something good for the planet cannot overwhelm our common sense and world realities. We must be very clear-eyed about how soon and how efficiently we can, in fact, switch from a carbon based energy infrastructure to one based entirely on renewable resources. The Danes just dealt Nord Stream 2 a temporary setback, but the only real winners from the Nord Stream 2 saga long term will be people in Moscow whose concern for the environment certainly is not equal to those who enacted Energiewende or who fight in the United States to stop oil and gas pipeline construction. This surely is not the result anyone in the West would have desired, nor is it good for the future of the planet.

Daniel Markind is a shareholder at Flaster Greenberg PC who practices in Energy, Real Estate, Corporate, and Aviation Law. He can be reached at daniel.markind@flastergreenberg.com.

Naming Heat Waves is Hype

 

Dr. Joel N. Myers, AccuWeather Founder and CEO, writes Throwing cold water on extreme heat hype. Excerpts in italics with my bolds. H/T John Ray

A story came to my attention recently that merited comment. It appeared in London’s The Telegraph, and was headlined, “Give heat waves names so people take them more seriously, say experts, as Britain braces for hottest day.”

The story’s leaping-off point was a press release from the London School of Economics (LSE), which noted, “A failure by the media to convey the severity of the health risks from heat waves, which are becoming more frequent due to climate change, could undermine efforts to save lives this week as temperatures climb to dangerous levels.” Is it time to start naming deadly heatwaves?

It added, “So how can the media be persuaded to take the risks of heat waves more seriously? Perhaps it is time … to give heat waves names [as is done] for winter storms.”

We disagree with some of the points being made.

First, and most important, we warn people all the time in plain language on our apps and on AccuWeather.com about the dangers of extreme heat, as well as all hazards. Furthermore, that is the reason we developed and patented the AccuWeather RealFeel® Temperature and our recently expanded AccuWeather RealFeel® Temperature Guide, to help people maximize their health, safety and comfort when outdoors and prepare and protect themselves from weather extremes. The AccuWeather RealFeel Temperature Guide is the only tool that properly takes into account all atmospheric conditions and translates them into actionable behavior choices for people.

Second, although average temperatures have been higher in recent years, there is no evidence so far that extreme heat waves are becoming more common because of climate change, especially when you consider how many heat waves occurred historically compared to recent history.

After June 2019 was recognized by a number of official organizations as the hottest June on record, July may have now been the hottest month ever recorded. That’s according to the Copernicus Centre for Climate Studies (C3S), which presented its data sets with the announcement that this July may have been marginally hotter than that of 2016, which was previously the hottest month on record.

New York City has not had a daily high temperature above 100 degrees since 2012, and it has had only five such days since 2002. However, in a previous 18-year span from 1984 through 2001, New York City had nine days at 100 degrees or higher. When the power went out in New York City earlier this month, the temperature didn’t even get to 100 degrees – it was 95, which is not extreme. For comparison, there were 12 days at 95 degrees or higher in 1999 alone.

Kansas City, Missouri, for example, experienced an average of 18.7 days a year at 100 degrees or higher during the 1930s, compared to just 5.5 a year over the last 10 years. And over the last 30 years, Kansas City has averaged only 4.8 days a year at 100 degrees or higher, which is only one-quarter of the frequency of days at 100 degrees or higher in the 1930s.
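
Counts like these are straightforward to reproduce from a station’s daily maximum temperature record; here is a minimal sketch, with a hypothetical file and column names standing in for whatever daily data source is actually used:

```python
# Sketch: count days at or above 100 F per year from a daily-max series,
# then average those counts by decade. The file "kansas_city_daily_tmax.csv"
# and its columns ("date", "tmax_f") are hypothetical placeholders.
import pandas as pd

daily = pd.read_csv("kansas_city_daily_tmax.csv", parse_dates=["date"])
daily["year"] = daily["date"].dt.year
daily["hot"] = daily["tmax_f"] >= 100.0

days_per_year = daily.groupby("year")["hot"].sum()            # 100 F days each year
days_per_decade = days_per_year.groupby(
    (days_per_year.index // 10) * 10).mean()                  # decade averages

print(days_per_decade)
```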

Here is a fact rarely, if ever, mentioned: 26 of the 50 states set all-time high temperature records during the 1930s that still stand (some have since been tied). An additional 11 state all-time high temperature records were set before 1930, and only two states have all-time record high temperatures that were set in the 21st century (South Dakota and South Carolina).

So 37 of the 50 states have an all-time high temperature record not exceeded for more than 75 years. Given these numbers and the decreased frequency of days of 100 degrees or higher,

it cannot be said that either the frequency or magnitude of heat waves is more common today.

Climate scientist Lennart Bengtsson said: “The warming we have had the last 100 years is so small that if we didn’t have meteorologists and climatologists to measure it we wouldn’t have noticed it at all.”

Why Al Gore Keeps Yelling “Fire!”

 

Some years ago I attended seminars regarding efforts to achieve operational changes in organizations. The notion was presented that people only change their habits, ie. leave their comfort zone, when they fear something else more than changing their behavior. The analogy was drawn comparing to workers leaping from a burning oil platform, or tenants from a burning building.

Al Gore is fronting an agenda to unplug modern societies, which would mean the end of life as we know it. Thus they claim the world is on fire, and only by abandoning our ways of living can we be saved.

The big lie is saying that the world is burning up when in fact nothing out of the ordinary is happening. The scare is produced by extrapolating dangerous, fearful outcomes from events that come and go in the normal flow of natural and seasonal climate change. They cannot admit that the things they fear have not yet occurred. We will jump only if we believe our platform, our way of life, is already crumbling.

And so we come to Al Gore recently claiming that his past predictions of catastrophe have all come true.

J. Frank Bullitt writes at Issues and Insights Gore Says His Global Warming Predictions Have Come True? Can He Prove It? Excerpts in italics with my bolds.

When asked Sunday about his 2006 prediction that we would reach the point of no return in 10 years if we didn’t cut human greenhouse gas emissions, climate alarmist in chief Al Gore implied that his forecast was exactly right.

“Some changes unfortunately have already been locked in place,” he told ABC’s Jonathan Karl.

 

“Sea level increases are going to continue no matter what we do now. But, we can prevent much larger sea level increases. Much more rapid increases in temperature. The heat wave was in Europe. Now it’s in the Arctic. We’re seeing huge melting of the ice there. So, the warnings of the scientists 10 years ago, 20 years ago, 30 years ago, unfortunately were accurate.”

Despite all this gloom, he’s found “good news” in the Democratic presidential field, in which “virtually all of the candidates are agreed that this is either the top issue or one of the top two issues.”

So what has Gore been predicting for the planet? In his horror movie “An Inconvenient Truth,” he claimed:

Sea levels could rise as much as 20 feet. He didn’t provide a timeline, which was shrewd on his part. But even if he had said 20 inches, over 20 years, he’d still have been wrong. Sea level has been rising for about 10,000 years, and, according to the National Oceanic and Atmospheric Administration, continues to rise about one-eighth of an inch per year. At that rate, 20 years adds roughly 2.5 inches, nowhere near 20 inches, let alone 20 feet.

“Storms are going to grow stronger.” There’s no evidence they are stronger nor more frequent.

Mt. Kilimanjaro was losing its snow cap due to global warming. By April 2018, the mountain glaciers were receiving their greatest snowfall in years. Two months later, Kilimanjaro was “covered by snow” for “an unusually long stint.” But it’s possible that all the snow and ice will be gone soon: Kilimanjaro is a stratovolcano, with a dormant cone that could erupt.

Point of no return. If we have truly gotten this far, why even care that “virtually all” of the Democratic candidates have agreed that global warming is a top issue? If we had passed the point of no return, there’d be no reason to maintain hope. The fact Gore’s looking for a “savior” from among the candidates means that even he doesn’t believe things have gone too far.

A year after the movie, Gore was found claiming that polar bears’ “habitat is melting” and “they are literally being forced off the planet.” It’s possible, however, that there are four times as many polar bears as there were in the 1960s. Even if not, they’ve not been forced off the planet.

Also in 2007, Gore started making “statements about the possibility of a complete lack of summer sea ice in the Arctic by as early as 2013,” fact-checker Snopes, which leans so hard left that it often falls over and has to pick itself up, said, before concluding that “Gore definitely erred in his use of preliminary projections and misrepresentations of research.”

Unwilling to fully call out one of its own, Snopes added that “Arctic sea ice is, without question, on a declining trend.” A fact check shows that to be true. A deeper fact check, though, shows that while Arctic sea ice has been falling, Antarctic sea ice has been increasing.

 

Finally — just for today because sorting out Gore’s fabrications is an ongoing exercise — we remind readers of the British judge who found that “An Inconvenient Truth” contained “nine key scientific errors” and “ruled that it can only be shown with guidance notes to prevent political indoctrination,” the Telegraph reported in 2007.

Gore has been making declarative statements about global warming for about as long as he’s been in the public eye. He has yet to prove a single claim, though. But how can he? The few examples above show that despite his insistence to the contrary, his predictions have failed.

Even if all turned out to be more accurate than a local three-day forecast, there’s no way to say with 100% certainty that the extreme conditions were caused by human activity. Our climate is a complex system, there are too many other variables, and the science itself has limits, unlike Gore’s capacity to inflate the narrative.

Footnote: 

Lest anyone think this is all about altruism, Al Gore is positioned to become even more wealthy from the war on meat.

Generation Investment Management is connected to Kleiner Perkins, where former Vice President Al Gore is a partner and advisor.

Who’s Kleiner Perkins? It turns out they are Beyond Meat’s biggest investor, according to bizjournals.com here. Beyond Meat is a Los Angeles-based producer of plant-based meat substitutes founded in 2009 by Ethan Brown. The company went public in May and just weeks later more than quadrupled in value.

Yes, Al Gore, partner and advisor to Kleiner Perkins, Beyond Meat’s big investor, stands to haul in millions, should governments move to restrict real meat consumption and force citizens to swallow the dubious substitutes and fakes.

If taken seriously, the World Resources Institute report, backed by Gore hacks, will help move the transition over to substitute meats far more quickly.

 

July SSTs NH Anomaly

The best context for understanding decadal temperature changes comes from the world’s sea surface temperatures (SST), for several reasons:

  • The ocean covers 71% of the globe and drives average temperatures;
  • SSTs have a constant water content, (unlike air temperatures), so give a better reading of heat content variations;
  • A major El Nino was the dominant climate feature in recent years.

HadSST is generally regarded as the best of the global SST data sets, and so the temperature story here comes from that source, the latest version being HadSST3.  More on what distinguishes HadSST3 from other SST products at the end.

The Current Context

The chart below shows SST monthly anomalies as reported in HadSST3 starting in 2015 through July 2019.

A global cooling pattern is seen clearly in the Tropics since its peak in 2016, joined by NH and SH cycling downward since 2016.  In 2019 all regions had been converging to reach nearly the same value in April.

Now something exceptional is happening: the NH has risen 0.4C in the last two months, matching the 2015 summer peak. Meanwhile the SH remains relatively cooler, and the Tropics are not changing much. Despite the sharp jump in NH, the global anomaly rose only slightly.

Note that higher temps in 2015 and 2016 were first of all due to a sharp rise in Tropical SST, beginning in March 2015, peaking in January 2016, and steadily declining back below its beginning level. Secondly, the Northern Hemisphere added three bumps on the shoulders of Tropical warming, with peaks in August of each year.  A fourth NH bump was lower and peaked in September 2018.  As noted above, July 2019 is matching the first of these upward bumps.

And as before, note that the global release of heat was not dramatic, due to the Southern Hemisphere offsetting the Northern one.  The major difference between now and 2015-2016 is the absence of Tropical warming driving the SSTs.

Note: The NH spike is unexpected since UAH ocean air temps dropped sharply in July 2019. The discrepancy between the two datasets is surprising since previously they were quite similar.

The annual SSTs for the last five years are as follows:

Annual SSTs   Global   NH      SH      Tropics
2014          0.477    0.617   0.335   0.451
2015          0.592    0.737   0.425   0.717
2016          0.613    0.746   0.486   0.708
2017          0.505    0.650   0.385   0.424
2018          0.480    0.620   0.362   0.369

2018 annual average SSTs across the regions are close to 2014: slightly higher in the SH and much lower in the Tropics.  The SST rise from the global ocean was remarkable, peaking in 2016 at 0.32C higher than 2011.
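
For anyone checking the annual figures, each entry in the table is simply the mean of that year's twelve monthly anomalies. A minimal sketch, assuming the same hypothetical CSV layout as in the plotting example above:

```python
# Sketch: annual means from the monthly regional anomalies.
import pandas as pd

df = pd.read_csv("hadsst_monthly_anomalies.csv", parse_dates=["date"])
annual = (df.assign(year=df["date"].dt.year)
            .groupby("year")[["Global", "NH", "SH", "Tropics"]]
            .mean()            # calendar-year average of the 12 monthly values
            .round(3))
print(annual.loc[2014:2018])
```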

A longer view of SSTs

The graph below  is noisy, but the density is needed to see the seasonal patterns in the oceanic fluctuations.  Previous posts focused on the rise and fall of the last El Nino starting in 2015.  This post adds a longer view, encompassing the significant 1998 El Nino and since.  The color schemes are retained for Global, Tropics, NH and SH anomalies.  Despite the longer time frame, I have kept the monthly data (rather than yearly averages) because of interesting shifts between January and July.


1995 is a reasonable starting point, prior to the first El Nino.  The sharp Tropical rise peaking in 1998 dominates the record, starting in Jan. ’97 to pull up SSTs uniformly before returning to the same level by Jan. ’99.  For the next 2 years the Tropics stayed down, and the world’s oceans held steady around 0.2C above the 1961 to 1990 average.

Then comes a steady rise over two years to a lesser peak in Jan. 2003, again uniformly pulling all oceans up around 0.4C.  Something changes at this point, with more hemispheric divergence than before. Over the 4 years until Jan. 2007, the Tropics go through ups and downs, the NH a series of ups, and the SH mostly downs.  As a result the Global average fluctuates around that same 0.4C, which also turns out to be the average for the entire record since 1995.

2007 stands out with a sharp drop in temperatures, so that Jan. ’08 matches the low of Jan. ’99, though starting from a lower high. All the oceans decline as well, until temps build to a peak in 2010.

Now again a different pattern appears.  The Tropics cool sharply to Jan. ’11, then rise steadily for 4 years to Jan. ’15, at which point the most recent major El Nino takes off.  But this time, in contrast to ’97-’99, the Northern Hemisphere produces peaks every summer, pulling up the Global average.  In fact, these NH peaks appear every July starting in 2003, growing stronger to produce 3 massive highs in 2014, 15 and 16.  NH July 2017 was only slightly lower, and a fifth NH peak, lower still, came in Sept. 2018.  Note also that starting in 2014 the SH plays a moderating role, offsetting the NH warming pulses. (Note: these are high anomalies on top of the highest absolute temps in the NH.)

What to make of all this? The patterns suggest that in addition to El Ninos in the Pacific driving the Tropic SSTs, something else is going on in the NH.  The obvious culprit is the North Atlantic, since I have seen this sort of pulsing before.  After reading some papers by David Dilley, I confirmed his observation of Atlantic pulses into the Arctic every 8 to 10 years.

But the peaks coming nearly every summer in HadSST require a different picture.  Let’s look at August, the hottest month in the North Atlantic from the Kaplan dataset.
AMO August 2018

The AMO Index is from Kaplan SST v2, the unaltered and not detrended dataset. By definition, the data are monthly average SSTs interpolated to a 5×5 grid over the North Atlantic, basically 0 to 70N. The graph shows warming from after 1992 up to 1998, with a series of matching years since. Because the N. Atlantic has partnered with the Pacific ENSO recently, let’s take a closer look at some AMO years in the last 2 decades.
This graph shows monthly AMO temps for some important years. The Peak years were 1998, 2010 and 2016, with the latter emphasized as the most recent. The other years show lesser warming, with 2007 emphasized as the coolest in the last 20 years. Note the red 2018 line is at the bottom of all these tracks. The short black line shows that 2019 began slightly cooler, then tracked 2018, but has now risen to match previous summer pulses.
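
To make the index definition concrete: an unaltered, non-detrended AMO-style series is essentially an area-weighted average of SST anomalies over the North Atlantic box. The sketch below illustrates the cosine-latitude weighting on a generic gridded field; the grid resolution, longitude bounds and variable names are illustrative assumptions, not the Kaplan file format.

```python
# Minimal sketch of an AMO-style index: area-weighted North Atlantic mean,
# roughly 0-70N and 80W-0, with no detrending applied.
import numpy as np

def amo_index(anom, lats, lons, lat_range=(0, 70), lon_range=(-80, 0)):
    """Area-weighted mean anomaly over a North Atlantic box for one month."""
    lat_sel = (lats >= lat_range[0]) & (lats <= lat_range[1])
    lon_sel = (lons >= lon_range[0]) & (lons <= lon_range[1])
    box = anom[np.ix_(lat_sel, lon_sel)]
    # weight rows by cos(latitude) so high-latitude cells count for less area
    w = np.cos(np.deg2rad(lats[lat_sel]))[:, None] * np.ones(lon_sel.sum())
    valid = ~np.isnan(box)
    return np.nansum(box * w) / np.sum(w[valid])

# toy demo on a 5x5-degree grid of fake anomalies
lats = np.arange(-87.5, 90, 5.0)
lons = np.arange(-177.5, 180, 5.0)
field = np.random.normal(0.3, 0.2, size=(lats.size, lons.size))
print(round(amo_index(field, lats, lons), 3))
```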

Summary

The oceans are driving the warming this century.  SSTs took a step up with the 1998 El Nino and have stayed there with help from the North Atlantic, and more recently the Pacific northern “Blob.”  The ocean surfaces are releasing a lot of energy, warming the air, but eventually this will have a cooling effect.  The decline after 1937 was rapid by comparison, so one wonders: How long can the oceans keep this up? If the pattern of recent years continues, NH SST anomalies may rise slightly in coming months, but once again ENSO, which has weakened, will probably determine the outcome.

Footnote: Why Rely on HadSST3

HadSST3 is distinguished from other SST products because HadCRU (Hadley Climatic Research Unit) does not engage in SST interpolation, i.e. infilling estimated anomalies into grid cells lacking sufficient sampling in a given month. From reading the documentation and from queries to Met Office, this is their procedure.

HadSST3 imports data from gridcells containing ocean, excluding land cells. From past records, they have calculated daily and monthly average readings for each grid cell for the period 1961 to 1990. Those temperatures form the baseline from which anomalies are calculated.

In a given month, each gridcell with sufficient sampling is averaged for the month and then the baseline value for that cell and that month is subtracted, resulting in the monthly anomaly for that cell. All cells with monthly anomalies are averaged to produce global, hemispheric and tropical anomalies for the month, based on the cells in those locations. For example, Tropics averages include ocean grid cells lying between latitudes 20N and 20S.

Gridcells lacking sufficient sampling that month are left out of the averaging, and the uncertainty from such missing data is estimated. IMO that is more reasonable than inventing data to infill. And it seems that the Global Drifter Array displayed in the top image is providing more uniform coverage of the oceans than in the past.
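
Here is a schematic of that no-infilling logic in Python. It is an illustration of the procedure described above, not the Met Office code; array shapes and names are my assumptions.

```python
# Schematic of the aggregation described above: each sampled cell gets an anomaly
# against its own 1961-1990 monthly baseline, and only sampled cells enter the
# latitude-weighted regional average (no infilling of missing cells).
import numpy as np

def regional_anomaly(obs, baseline, month, lats, lat_min=-90.0, lat_max=90.0):
    """obs: (lat, lon) monthly mean SSTs, NaN where sampling is insufficient.
    baseline: (12, lat, lon) 1961-1990 climatology.  month: 1-12."""
    anom = obs - baseline[month - 1]                 # per-cell anomaly
    rows = (lats >= lat_min) & (lats <= lat_max)
    anom = anom[rows]
    w = np.cos(np.deg2rad(lats[rows]))[:, None] * np.ones(anom.shape[1])
    valid = ~np.isnan(anom)                          # sampled cells only
    return np.nansum(anom * w) / np.sum(w[valid])

# toy demo: 5-degree grid with 30% of cells unsampled
lats = np.arange(-87.5, 90, 5.0)
lons = np.arange(-177.5, 180, 5.0)
baseline = np.zeros((12, lats.size, lons.size))
obs = np.random.normal(0.4, 0.3, size=(lats.size, lons.size))
obs[np.random.rand(*obs.shape) < 0.3] = np.nan
print("Global :", round(regional_anomaly(obs, baseline, 7, lats), 3))
print("NH     :", round(regional_anomaly(obs, baseline, 7, lats, 0, 90), 3))
print("Tropics:", round(regional_anomaly(obs, baseline, 7, lats, -20, 20), 3))
```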


USS Pearl Harbor deploys Global Drifter Buoys in Pacific Ocean

Supremes May Rein In Agency Lawmaking

This post consists of a legal discussion regarding undesirable outcomes from some Supreme Court rulings that gave excessive deference to Executive Branch agency regulators. Relying on the so-called “Chevron deference” can result in regulations going beyond what Congress intended with its laws.

Professor Mike Rappaport writes at Law and Liberty Replacing Chevron with a Sounder Interpretive Regime. Excerpts in italics with my bolds.

The Issue

The need for a new interpretive arrangement to replace Chevron is demonstrated by a climate change example cited at the end.

Importantly, this new arrangement would significantly limit agencies from using their legal discretion to modify agency statutes to combat new problems never envisioned by the enacting Congress. For example, when the Clean Air Act was passed, no one had in mind it would be addressed to anything like climate change. Yet, the EPA has used Chevron deference to change the meaning of the statute so that it can regulate greenhouse gases without Congress having to decide whether and in what way that makes sense.

Such discretion gives the EPA enormous power to pursue its own agenda without having to secure the approval of the legislative or judicial branches.

Background and Proposals

One of the most important questions within administrative law is whether the Supreme Court will eliminate Chevron deference. But if Chevron deference is eliminated, as I believe it should be, a key question is what should replace it. In my view, there is a ready alternative which makes sense as a matter of law and policy. Courts should not give agencies Chevron deference, but should provide additional weight to agency interpretations that are adopted close to the enactment of a statute or that have been followed for a significant period of time.

Chevron deference is the doctrine that provides deference to an administrative agency when it interprets a statute that it administers. In short, the agency’s interpretation will only be reversed if a court deems the interpretation unreasonable rather than simply wrong. Such deference means that the agency can select among the (often numerous) “reasonable” interpretations of a statute to pursue its agenda. Moreover, the agency is permitted to change from one reasonable interpretation to another over time based on its policy views. In conjunction with the other authorities given to agencies, such as the delegation of legislative power, Chevron deference constitutes a key part of agency power.

There is, however, a significant chance that the Supreme Court may eliminate Chevron deference. Two of the leaders of this movement are Justices Thomas and Gorsuch. But Chief Justice Roberts as well as Justices Alito and Kavanaugh have also indicated that they might be amenable to overturning Chevron. For example, in the Kisor case from this past term, which cut back on but declined to overturn the related doctrine of Auer deference, these three justices all joined opinions that explicitly stated that they thought Chevron deference was different from Auer deference, suggesting that Chevron might still be subject to overruling.

But if Chevron deference is eliminated, what should replace it? The best substitute for Chevron deference would be the system of interpretation employed in the several generations prior to the enactment of the Administrative Procedure Act. Under that system, as explained by Aditya Bamzai in his path-breaking article, judges would interpret the statute based on traditional canons of interpretation, including two—contemporaneous exposition and customary practice—that provide weight to certain agency interpretations.

Under the canon of contemporaneous exposition, an official governmental act would be entitled to weight as an interpretation of a statute (or of the Constitution) if it were taken close to the period of the enactment of the provision. This would apply to government acts by the judiciary and the legislature as well as those by administrative agencies. Thus, agency interpretations of statutes would be entitled to some additional weight if taken at the time of the statute’s enactment.

This canon has several attractive aspects. First, it has a clear connection to originalism. Contemporaneous interpretations are given added weight because they were adopted at the time of the law’s enactment and therefore are thought to be more likely to offer the correct interpretation—that is, one attuned to the original meaning. Second, this canon also promotes the rule of law by both providing notice to the public of the meaning of the statute and limiting the ability of the agency to change its interpretation of the law.

The second canon is that of customary practice or usage. Under this framework, an interpretation of a government actor in its official capacity would be entitled to weight if it were consistently followed over a period of time. Thus, the agency interpretation would receive additional weight if it became a regular practice, even if it were not adopted at the time of statutory enactment.

The canon of customary practice has a number of desirable features. While it does not have a connection to originalism, it does, like contemporaneous exposition, promote the rule of law. Once a customary interpretation has taken hold, the public is better able to rely on the existing interpretation and the government is more likely to follow that interpretation.

Second, the customary interpretation may also be an attractive interpretation. That the interpretation has existed over a period of time suggests that it has not created serious problems of implementation that have led courts or the agency to depart from it. While the customary interpretation may not be the most desirable one as a matter of policy, it is unlikely to be very undesirable.

This traditional interpretive approach also responds to one of the principal criticisms of eliminating Chevron deference: that it will give significant power to a judiciary that lacks expertise and can abuse its authority. I don’t agree with this criticism, since I believe that judges are expert at interpreting statutes and are subject to less bias than agencies that exercise not merely executive power, but also judicial and legislative authority.

But even if one believed that the courts were problematic, this arrangement would leave the judiciary with much less power than a regime that provides no weight to agency interpretations. The courts would often be limited by agency interpretations that accorded with the canons—interpretations adopted when the statute was enacted or that were customarily followed. Since those interpretations would be given weight, the courts would often follow them. But while these interpretations would limit the courts, they would not risk the worst dangers of Chevron deference. This interpretive approach would not allow an agency essentially free rein to change its interpretation over time in order to pursue new programs or objectives. Once the interpretation is in place, the agency would not be able to secure judicial deference if it changed the interpretation.

Importantly, this new arrangement would significantly limit agencies from using their legal discretion to modify agency statutes to combat new problems never envisioned by the enacting Congress. For example, when the Clean Air Act was passed, no one had in mind it would be addressed to anything like climate change. Yet, the EPA has used Chevron deference to change the meaning of the statute so that it can regulate greenhouse gases without Congress having to decide whether and in what way that makes sense. Such discretion gives the EPA enormous power to pursue its own agenda without having to secure the approval of the legislative or judicial branches.

In short, if Chevron deference is eliminated, there is a traditional and attractive interpretive approach that can replace it. Hopefully, the Supreme Court will take the step it refused to take in Kisor and eliminate an unwarranted form of deference.

H2O Reduces CO2 Climate Sensitivity

Francis Massen writes at his blog meteoLCD on The Kauppinen papers, summarizing and linking to studies by Dr. Jyrki Kauppinen (Turku University in Finland) regarding the climate sensitivity problem. Excerpts in italics with my bolds.

Dr. Jyrki Kauppinen (et al.) has published several papers during the last decade on the problem of finding the climate sensitivity (list with links at the end). All these papers are, at least in large part, heavy on mathematics, even if portions are not too difficult to grasp. Let me try to summarize in layman’s terms (if possible):

The authors note that the IPCC models trying to deliver an estimate for ECS or TCR usually take the relative humidity of the atmosphere as constant, and practically restrict themselves to one major cause of global temperature change: the change in radiative forcing Q. Many factors can change Q, but overall the IPCC estimates that human-caused emissions of greenhouse gases and land-use changes (like deforestation) are the principal causes of a changing Q. If the climate sensitivity is called R, the IPCC assumes that ΔT = R·ΔQ. This assumption leads to a positive water vapour feedback factor and so to high values of R.

Kauppinen et al. disagree: they write that one has to include in the expression for ΔT the changes in the atmospheric water mass (which may show up as changes in relative humidity and/or low cloud cover). Putting this into an equation leads to the conclusion that the water vapour feedback is negative and, as a consequence, that the climate sensitivity is much lower.
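
Schematically, and in my own notation rather than the authors’ exact equations, the contrast between the two starting points can be written as:

```latex
% IPCC-style assumption: relative humidity held fixed, the forcing change alone drives the warming
\Delta T = R\,\Delta Q

% Kauppinen et al. (schematic): add an explicit water term, e.g. changes in
% relative humidity and/or low cloud cover, denoted here by \Delta\phi
\Delta T = R\,\Delta Q + \frac{\partial T}{\partial \phi}\,\Delta\phi ,
\qquad \frac{\partial T}{\partial \phi} < 0
```

In the IPCC-style form the water term is folded into R as a positive feedback; in the Kauppinen form it appears explicitly and acts as a negative feedback, which is what drives their much lower value of R.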

Let us stress that the authors do not write that increasing CO2 concentrations have no influence on global temperature. They do, but the influence is many times smaller than that of the hydrological cycle.

Here is what Kauppinen et al. find when they take real observational values (no fudge parameters!) and compare their calculated result to one of the official global temperature series:

Figure 4. [2] Observed global mean temperature anomaly (red), calculated anomaly (blue), which is the sum of the natural and carbon dioxide contributions. The green line is the CO2 contribution only. The natural component is derived using the observed changes in the relative humidity. The time resolution is one year.

The visual correlation is quite good: the changes in low cloud cover explain the warming of the last 40 years almost completely!

In their 2017 paper they conclude that the CO2 sensitivity is 0.24°C (about ten times lower than the IPCC consensus value). In the latest 2019 paper they refine their estimate, again find R = 0.24, and give the following figure:

Figure 2. [2] Global temperature anomaly (red) and the global low cloud cover changes (blue) according to the observations. The anomalies are between summer 1983 and summer 2008. The time resolution of the data is one month, but the seasonal signal is removed. Zero corresponds to about 15°C for the temperature and 26% for the low cloud cover.

Clearly the results are quite satisfactory, and they also show that this simple model cannot reproduce the spikes caused by volcanic or El Nino activity, as these natural disturbances are not included in the balance.

The authors conclude that the IPCC models cannot give a “correct” value for the climate sensitivity, as they practically ignore (at least up to AR5) the influence of low cloud cover. Their finding is politically explosive in the sense that there would be no need for precipitous decarbonization (even if, in the longer run, a reduction in carbon intensity in many activities might be advisable).
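
To put the 0.24°C figure in perspective, here is a rough back-of-the-envelope comparison. It assumes the conventional logarithmic scaling of warming with CO2 concentration, and the 280 and 410 ppm values are round numbers of my own, not taken from the papers.

```python
# Illustrative arithmetic only: implied CO2-attributed warming under the two
# sensitivities, assuming logarithmic scaling with concentration.
import math

doublings = math.log2(410 / 280)   # about 0.55 doublings of CO2 so far

for label, sens in [("Kauppinen et al., 0.24 C per doubling", 0.24),
                    ("IPCC-style central, 3.0 C per doubling", 3.0)]:
    print(f"{label}: {sens * doublings:.2f} C attributed to CO2")
```

Under that arithmetic, the CO2 rise to date would account for roughly 0.13°C of warming at the Kauppinen sensitivity versus about 1.6°C at an IPCC-style 3°C per doubling, which is what “about ten times lower” amounts to in practice.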

Francis Massen’s opinion

As written in part 1, Kauppinen et al. are not the first to arrive at a much lower climate sensitivity than the IPCC and its derived policies assume. Many papers, even if based on different assumptions and methods, come to a similar conclusion, i.e. the IPCC models give values that are (much) too high. Kauppinen et al. also show that the hydrological cycle cannot be ignored, and that the influence of low cloud cover (possibly modulated by solar activity) deserves attention.

What makes their papers so interesting is that they rely on essentially only two observational factors and are not forced to introduce various fudge parameters.

The whole problem is a complicated one, and rushing into ill-considered and painful policies should be avoided until we have a much clearer picture.

Footnote: The four Kauppinen papers.

2011: Major portions in climate change: physical approach. (International Review of Physics) link

2014: Influence of relative humidity and clouds on the global mean surface temperature (Energy & Environment). Link to abstract.   Link to jstor read-only version (download is paywalled).

2018: Major feedback factors and effects of the cloud cover and the relative humidity on the climate. Link.

2019: No experimental evidence for the significant anthropogenic climate change. Link.

The last two papers are on arXiv and are not peer reviewed, which in my opinion is not a reason to dismiss them.

Francis Massen (francis.massen@education.lu) is a physicist by education who manages and operates the meteo/climate station http://meteo.lcd.lu of the Lycée Classique de Diekirch in Luxembourg, Europe.

See also my recent post More 2019 Evidence of Nature’s Sunscreen.

Postscript:

Dr. Dai Davies summarized this perspective this way:

The most fundamental of the many fatal mathematical flaws in the IPCC-related modelling of atmospheric energy dynamics is to start with the impact of CO2 and assume water vapour as a dependent ‘forcing’.  This has the tail trying to wag the dog. The impact of CO2 should be treated as a perturbation of the water cycle. When this is done, its effect is negligible.

See Davies article synopsis at Earth Climate Layers