Fatal Flaw in Earth Energy Balance Diagrams

Prof. Warren Stannard of the University of Western Australia provides the mathematical analysis to correct the above mistaken energy balance cartoon, published in 1997.  His paper in Natural Science (2018) is The Greenhouse Effect: An Evaluation of Arrhenius’ Thesis and a New Energy Equilibrium Model.  Excerpts in italics with my bolds and exhibits.

Abstract

In 1896, Svante Arrhenius proposed a model predicting that increased concentration of carbon dioxide and water vapour in the atmosphere would result in a warming of the planet. In his model, the warming effects of atmospheric carbon dioxide and water vapour in preventing heat flow from the Earth’s surface (now known as the “Greenhouse Effect”) are counteracted by a cooling effect where the same gases are responsible for the radiation of heat to space from the atmosphere. His analysis found that there was a net warming effect and his model has remained the foundation of the Enhanced Greenhouse Effect—Global Warming hypothesis.

This paper attempts to quantify the parameters in his equations but on evaluation his model cannot produce thermodynamic equilibrium. A modified model is proposed which reveals that increased atmospheric emissivity enhances the ability of the atmosphere to radiate heat to space overcoming the cooling effect resulting in a net cooling of the planet. In consideration of this result, there is a need for greenhouse effect—global warming models to be revised.

1. Introduction

In 1896 Arrhenius proposed that changes in the levels of “carbonic acid” (carbon dioxide) in the atmosphere could substantially alter the surface temperature of the Earth. This has come to be known as the greenhouse effect. Arrhenius’ paper, “On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground”, was published in Philosophical Magazine.  Arrhenius concludes:

“If the quantity of carbonic acid in the air should sink to one-half its present percentage, the temperature would fall by about 4˚; a diminution to one-quarter would reduce the temperature by 8˚. On the other hand, any doubling of the percentage of carbon dioxide in the air would raise the temperature of the earth’s surface by 4˚; and if the carbon dioxide were increased fourfold, the temperature would rise by 8˚ ” [ 2 ].

It is interesting to note that Arrhenius considered this greenhouse effect a positive thing if we were to avoid the ice ages of the past. Nevertheless, Arrhenius’ theory has become the foundation of the enhanced greenhouse effect―global warming hypothesis in the 21st century. His model remains the basis for most modern energy equilibrium models.

2. Arrhenius’ Energy Equilibrium Model

Arrhenius proposed a two-part energy equilibrium model in which the atmosphere radiates the same amount of heat to space as it receives and, likewise, the ground transfers the same amount of heat to the atmosphere and to space as it receives. The model contains the following assumptions:

Heat conducted from the center of the Earth is neglected.

Heat flow by convection between the surface and the atmosphere and throughout the atmosphere remains constant.

Cloud cover remains constant. This is questionable but allows the model to be quantified.

Part 1: Equilibrium of the Air

The balance of heat flow to and from the air (or atmosphere) has four components as shown in Figure 1. The arrow labelled S1 indicates the solar energy absorbed by the atmosphere. R indicates the infra-red radiation from the surface of the Earth to the atmosphere, M is the quantity of heat “conveyed” to the atmosphere by convection and Q1 represents heat loss from the atmosphere to space by radiation. All quantities are measured in terms of energy per unit area per unit time (W/m2).

Figure 1. Model of the energy balance of the atmosphere. The heat received by the atmosphere (R + M + S1) equals the heat lost to space (Q1). In this single layer atmospheric model, the absorbing and emitting layers are one and the same.

 

Part 2: Thermal Equilibrium of the Ground

In the second part of his model, Arrhenius describes the heat flow equilibrium at the “ground” or surface of the Earth. There are four contributions to the surface heat flow as shown in Figure 2. S2 is the solar energy absorbed by the surface, R is the infra-red radiation emitted from the surface and transferred to the atmosphere, N is the heat conveyed to the atmosphere by convection and Q2 is the heat radiated to space from the surface. Note: Here Arrhenius uses the term N for the convective heat flow. It is equivalent to the term M used in the air equilibrium model.

Figure 2. The energy balance at the surface of the Earth. The energy received by the ground is equal to the energy lost.

3. Finding the Temperature of the Earth

Arrhenius combined these equations and, by eliminating the temperature of the atmosphere which according to Arrhenius “has no considerable interest”, he arrived at the following relationship:

ΔTg is the expected change in the temperature of the Earth for a change in atmospheric emissivity from ε1 to ε2. Arrhenius determined that the current transparency of the atmosphere was 0.31 and therefore the emissivity/absorptivity ε1 = 0.69. The current mean temperature for the surface of the Earth can be assumed to be To = 288 K.

Figure 3. Arrhenius’ model is used to determine the mean surface temperature of the Earth as a function of atmospheric emissivity ε. For initial conditions, ε = 0.69 and the surface temperature is 288 K. An increase in atmospheric emissivity produces an increase in the surface temperature of the Earth.

Arrhenius estimated that a doubling of carbon dioxide concentration in the atmosphere would produce a change in emissivity from 0.69 to 0.78 raising the temperature of the surface by approximately 6 K. This value would be considered high by modern climate researchers; however, Arrhenius’ model has become the foundation of the greenhouse-global warming theory today. Arrhenius made no attempt to quantify the specific heat flow values in his model. At the time of his paper there was little quantitative data available relating to heat flow for the Earth.

4. Evaluation of Arrhenius’ Model under Present Conditions

More recently, Kiehl and Trenberth (K & T) [ 3 ] and others have quantified the heat flow values used in Arrhenius’ model. K & T’s data are summarised in Figure 4.

The reflected solar radiation, which plays no part in the energy balance described in this model, is ignored. R is the net radiative transfer from the ground to the atmosphere derived from K & T’s diagram. The majority of the heat radiated to space originates from the atmosphere (Q1 > Q2). And the majority of the heat lost from the ground is by means of convection to the atmosphere (M > R + Q2).

Figure 4. Model of the mean energy budget of the earth as determined by Kiehl and Trenberth.
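
To make these numbers easier to follow, here is a minimal Python sketch (my own illustration, not code from the paper) that checks both parts of Arrhenius’ balance against the flux values commonly read off the K & T (1997) diagram; the individual values are approximate.

```python
# Heat flows (W/m2) read from the Kiehl & Trenberth (1997) diagram -- approximate values
S1 = 67    # solar radiation absorbed by the atmosphere
S2 = 168   # solar radiation absorbed by the surface
M  = 102   # convection from surface to atmosphere (24 thermals + 78 evapotranspiration)
R  = 26    # net infra-red transfer, surface to atmosphere (390 emitted - 324 back - 40 window)
Q2 = 40    # surface radiation escaping directly to space (atmospheric window)
Q1 = 195   # radiation to space emitted by the atmosphere and clouds

# Part 1: equilibrium of the air -- heat received equals heat radiated to space
print("Air:    R + M + S1 =", R + M + S1, " vs Q1 =", Q1)
# Part 2: equilibrium of the ground -- solar absorbed equals heat lost (N = M here)
print("Ground: S2 =", S2, " vs R + M + Q2 =", R + M + Q2)
# Observations noted in the text
print("Q1 > Q2:", Q1 > Q2, "  M > R + Q2:", M > R + Q2)
```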

Q2 = (1 − ε) σ ν Tg^4     (5)

Substituting ε = 0.567, ν = 1.0 and Tg = 288 K we get Q2 = 149.2 W/m2.

Using Arrhenius’ value of 0.69 for the atmospheric emissivity, Q2 = 120.9 W/m2.
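
As a quick check of the arithmetic (again my own sketch, not the paper’s code), Equation (5) with ν = 1.0 and Tg = 288 K reproduces the 120.9 W/m2 figure for Arrhenius’ emissivity of 0.69:

```python
# Equation (5): radiation escaping directly from the surface to space
sigma = 5.67e-8   # Stefan-Boltzmann constant, W/(m2 K^4)

def Q2(eps, Tg=288.0, nu=1.0):
    """Surface radiation to space for atmospheric emissivity eps, per Equation (5)."""
    return (1.0 - eps) * sigma * nu * Tg**4

print(round(Q2(0.69), 1), "W/m2")   # ~120.9 W/m2 with Arrhenius' emissivity of 0.69
# Compare with the ~40 W/m2 escaping through the atmospheric window in the K & T budget.
```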

Both values are significantly greater than the 40 W/m2 determined by K & T.
The equation will not balance; something is clearly wrong.

Figure 5 illustrates the problem.

Equation (5) is based on the Stefan-Boltzmann law, an empirical relationship describing the radiation emitted by a hot surface through a vacuum toward surroundings at absolute zero. This is clearly not the case for radiation passing through the Earth’s atmosphere, and as a result the amount of heat lost by radiation has been grossly overestimated.

No amount of adjusting parameters will allow this relationship to produce
sensible quantities and the required net heat flow of 40 W/m2.
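
K & T’s own numbers make the point concrete. A rough illustration of my own (not from the paper): the gross Stefan-Boltzmann emission from a 288 K surface is about 390 W/m2, but against the 324 W/m2 of back radiation the net longwave loss from the surface is only about 66 W/m2, of which just 40 W/m2 escapes directly to space.

```python
sigma = 5.67e-8          # Stefan-Boltzmann constant, W/(m2 K^4)
Tg = 288.0               # mean surface temperature, K

gross = sigma * Tg**4    # emission toward a 0 K sink, as the bare law assumes (~390 W/m2)
back = 324.0             # back radiation from the atmosphere (K & T 1997)
net = gross - back       # net longwave loss from the surface (~66 W/m2)

print(round(gross, 1), "W/m2 gross,", round(net, 1), "W/m2 net")
# Only 40 W/m2 of the net loss actually reaches space directly (the atmospheric window).
```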

This error affects the equilibrium heat flow values in Arrhenius’ model and the model is not able to produce a reasonable approximation of present-day conditions as shown in Table 1. In particular, the convective heat flow takes on very different values in the two parts of the model. The values M and N in the table should be equivalent.

5. A New Energy Equilibrium Model

A modified model is proposed which will determine the change in surface temperature of the Earth caused by a change in the emissivity of the atmosphere (as would occur when greenhouse gas concentrations change). The model incorporates the following ideas:

1) The total heat radiated from the Earth (Q1 + Q2) will remain constant and equal to the total solar radiation absorbed by the Earth (S1 + S2).

2) Convective heat flow M remains constant. Convective heat flow between two regions depends on their temperature difference, as expressed by Newton’s Law of Cooling (see the sketch just after this list). The temperature difference between the atmosphere and the ground is maintained at 8.9 K (see Equation 7(a)). M = 102 W/m2 (K & T).

3) A surface temperature of 288 K and an atmospheric emissivity of 0.567 (Equation (7b)) is assumed for initial or present conditions.
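
For item 2, a back-of-the-envelope sketch of my own under the stated assumptions: if the convective flow follows Newton’s Law of Cooling, M = h·(Tg − Ta), then M = 102 W/m2 and a temperature difference of 8.9 K imply an effective transfer coefficient of roughly 11.5 W/(m2 K); holding the temperature difference fixed keeps M fixed.

```python
# Newton's Law of Cooling for the convective flow: M = h * (Tg - Ta)
M = 102.0      # convective heat flow, W/m2 (K & T)
dT = 8.9       # assumed ground-atmosphere temperature difference, K (Equation 7(a))

h = M / dT     # implied effective heat-transfer coefficient
print(round(h, 1), "W/(m2 K)")   # ~11.5; holding dT at 8.9 K keeps M at 102 W/m2
```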

Equation (9) represents the new model relating the emissivity of the atmosphere ε to the surface temperature Tg. Results from this model are shown in Table 2. The table shows the individual heat flow quantities and the temperature of the surface of the Earth that is required to maintain equilibrium:

The table shows that as the atmospheric emissivity ε increases, less heat flows from the Earth’s surface directly to space: Q2 decreases. This is what would be expected. As well, more heat is radiated to space from the atmosphere: Q1 increases. This is also expected. The total energy radiated to space remains Q1 + Q2 = 235 W/m2. A plot of the resultant surface temperature Tg versus the atmospheric emissivity ε is shown below in Figure 6.
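
Equation (9) and Table 2 are not reproduced in the excerpt. Purely to illustrate the direction of the effect, here is a sketch of my own based on the assumptions listed above, taking the atmospheric emission as Q1 = ε σ ν Ta^4 with the total fixed at 235 W/m2 (this is my reading of the constraints, not the paper’s code); the temperatures are held at their initial values for simplicity.

```python
sigma = 5.67e-8                  # Stefan-Boltzmann constant, W/(m2 K^4)
nu = 1.0                         # cloud factor, taken as 1.0 as in the excerpt
total = 235.0                    # fixed total radiation to space, W/m2 (assumption 1: S1 + S2)
Ta = 288.0 - 8.9                 # atmospheric radiating temperature at the initial state, K

for eps in (0.567, 0.60, 0.65):
    Q1 = eps * sigma * nu * Ta**4    # radiation to space from the atmosphere
    Q2 = total - Q1                  # remainder escaping directly from the surface
    print(f"eps = {eps:5.3f}   Q1 = {Q1:5.1f}   Q2 = {Q2:4.1f}  W/m2")

# At eps = 0.567 this gives Q1 ~ 195 and Q2 ~ 40 W/m2, matching the K & T values;
# as eps rises, Q1 grows and Q2 shrinks while Q1 + Q2 stays at 235 W/m2.
# (Temperatures are held at their initial values here purely for illustration; in the
# paper's model the surface and atmosphere temperatures adjust via Equation (9).)
```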

Figure 6. Plot of the Earth’s mean surface temperature as a function of the atmospheric emissivity. This model predicts that the temperature of the Earth will decrease as the emissivity of the atmosphere increases.

6. Conclusion

Arrhenius identified the fact that the emissivity/absorptivity of the atmosphere increased with increasing greenhouse gas concentrations and this would affect the temperature of the Earth. He understood that infra-red active gases in the atmosphere contribute both to the absorption of radiation from the Earth’s surface and to the emission of radiation to space from the atmosphere. These were competing processes; one trapped heat, warming the Earth; the other released heat, cooling the Earth. He derived a relationship between the surface temperature and the emissivity of the atmosphere and deduced that an increase in emissivity led to an increase in the surface temperature of the Earth.

However, his model is unable to produce sensible results for the heat flow quantities as determined by K & T and others. In particular, his model and all similar recent models, grossly exaggerate the quantity of radiative heat flow from the Earth’s surface to space. A new energy equilibrium model has been proposed which is consistent with the measured heat flow quantities and maintains thermal equilibrium. This model predicts the changes in the heat flow quantities in response to changes in atmospheric emissivity and reveals that Arrhenius’ prediction is reversed. Increasing atmospheric emissivity due to increased greenhouse gas concentrations will have a net cooling effect.

It is therefore proposed by the author that any attempt to curtail emissions of CO2
will have no effect in curbing global warming.

Climatism is a Logic Fail

Two fallacies ensure meaningless public discussion about climate “crisis” or “emergency.” H/T to Terry Oldberg for comments and writings prompting me to post on this topic.

One corruption is the frequency with which climate claims rest on fallacies of Equivocation. For instance, “climate change” can mean all observed events in nature, but as defined by the IPCC it refers to changes caused entirely by human activities.  Similarly, forecasts from climate models are proclaimed to be “predictions” of future disasters, but renamed “projections” in disclaimers against legal liability.  And so on.

A second error in the argument is the Fallacy of Misplaced Concreteness, AKA Reification. This involves mistaking an abstraction for something tangible and real in time and space. We often see this in both spoken and written communications. It can take several forms:

♦ Confusing a word with the thing to which it refers

♦ Confusing an image with the reality it represents

♦ Confusing an idea with something observed to be happening

Examples of Equivocation and Reification from the World of Climate Alarm

“Seeing the wildfires, floods and storms, Mother Nature is not happy with us failing to recognize the challenges facing us.” – Nancy Pelosi

“Mother Nature” is a philosophical construct and has no feelings about people.

“This was the moment when the rise of the oceans began to slow and our planet began to heal …”
– Barack Obama

The ocean and the planet do not respond to someone winning a political party nomination. Nor does a planet experience human sickness and healing.

“If something has never happened before, we are generally safe in assuming it is not going to happen in the future, but the exceptions can kill you, and climate change is one of those exceptions.” – Al Gore

The future is not knowable, and can only be a matter of speculation and opinion.

“The planet is warming because of the growing level of greenhouse gas emissions from human activity. If this trend continues, truly catastrophic consequences are likely to ensue.” – Malcolm Turnbull

Temperature is an intensive property of an object, so the temperature of “the planet” cannot be measured. The likelihood of catastrophic consequences is unknowable. Humans are blamed as guilty by association.

“Anybody who doesn’t see the impact of climate change is really, and I would say, myopic. They don’t see the reality. It’s so evident that we are destroying Mother Earth.” – Juan Manuel Santos

“Climate change” is an abstraction anyone can fill with subjective content. Efforts to safeguard the environment are real, successful and ignored in the rush to alarm.

“Climate change, if unchecked, is an urgent threat to health, food supplies, biodiversity, and livelihoods across the globe.” – John F. Kerry

To the abstraction “Climate Change” is added abstract “threats” and abstract means of “checking Climate Change.”

“Climate change is the most severe problem that we are facing today, more serious even than the threat of terrorism.” – David King

Instances of people killed and injured by terrorists are reported daily and are a matter of record, while problems from Climate Change are hypothetical.

 

Corollary: Reality is also that which doesn’t happen, no matter how much we expect it to.

Climate Models Are Built on Fallacies

 

A previous post Chameleon Climate Models described the general issue of whether a model belongs on the bookshelf (theoretically useful) or whether it passes real world filters of relevance, thus qualifying as useful for policy considerations.

Following an interesting discussion on her blog, Dr. Judith Curry has written an important essay on the usefulness and limitations of climate models.

The paper was developed to respond to a request from a group of lawyers wondering how to regard claims based upon climate model outputs. The document is entitled Climate Models and is a great informative read for anyone. Some excerpts that struck me in italics with my bolds and added images.

Climate model development has followed a pathway mostly driven by scientific curiosity and computational limitations. GCMs were originally designed as a tool to help understand how the climate system works. GCMs are used by researchers to represent aspects of climate that are extremely difficult to observe, experiment with theories in a new way by enabling hitherto infeasible calculations, understand a complex system of equations that would otherwise be impenetrable, and explore the climate system to identify unexpected outcomes. As such, GCMs are an important element of climate research.

Climate models are useful tools for conducting scientific research to understand the climate system. However, the above points support the conclusion that current GCM climate models are not fit for the purpose of attributing the causes of 20th century warming or for predicting global or regional climate change on timescales of decades to centuries, with any high level of confidence. By extension, GCMs are not fit for the purpose of justifying political policies to fundamentally alter world social, economic and energy systems.

It is this application of climate model results that fuels the vociferousness
of the debate surrounding climate models.

Evolution of state-of-the-art Climate Models from the mid 70s to the mid 00s. From IPCC (2007)

The actual equations used in the GCM computer codes are only approximations of
the physical processes that occur in the climate system.

While some of these approximations are highly accurate, others are unavoidably crude. This is because the real processes they represent are either poorly understood or too complex to include in the model given the constraints of the computer system. Of the processes that are most important for climate change, parameterizations related to clouds and precipitation remain the most challenging, and are the greatest source of disagreement among different GCMs.

There are literally thousands of different choices made in the construction of a climate model (e.g. resolution, complexity of the submodels, parameterizations). Each different set of choices produces a different model having different sensitivities. Further, different modeling groups have different focal interests, e.g. long paleoclimate simulations, details of ocean circulations, nuances of the interactions between aerosol particles and clouds, the carbon cycle. These different interests focus their limited computational resources on a particular aspect of simulating the climate system, at the expense of others.


Overview of the structure of a state-of-the-art climate model. See Climate Models Explained by R.G. Brown

Human-caused warming depends not only on how much CO2 is added to the atmosphere, but also on how ‘sensitive’ the climate is to the increased CO2. Climate sensitivity is defined as the global surface warming that occurs when the concentration of carbon dioxide in the atmosphere doubles. If climate sensitivity is high, then we can expect substantial warming in the coming century as emissions continue to increase. If climate sensitivity is low, then future warming will be substantially lower.
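
As a rough numerical illustration of how much the assumed sensitivity matters (my own sketch using the standard logarithmic relationship between CO2 concentration and equilibrium warming, not anything from Curry’s essay):

```python
import math

def warming(c, c0=280.0, ecs=3.0):
    """Equilibrium warming (K) for a CO2 rise from c0 to c ppm, given a sensitivity ecs per doubling."""
    return ecs * math.log(c / c0) / math.log(2.0)

# Roughly today's 420 ppm and a full doubling to 560 ppm, under a low and a high assumed sensitivity
for ecs in (1.5, 4.5):      # roughly the range long quoted in IPCC reports
    print(f"ECS = {ecs} K:  420 ppm -> {warming(420, ecs=ecs):.1f} K,  560 ppm -> {warming(560, ecs=ecs):.1f} K")
```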

In GCMs, the equilibrium climate sensitivity is an ‘emergent property’
that is not directly calibrated or tuned.

While there has been some narrowing of the range of modeled climate sensitivities over time, models still can be made to yield a wide range of sensitivities by altering model parameterizations. Model versions can be rejected or not, subject to the modelers’ own preconceptions, expectations and biases of the outcome of equilibrium climate sensitivity calculation.

Further, the discrepancy between observational and climate model-based estimates of climate sensitivity is substantial and of significant importance to policymakers. Equilibrium climate sensitivity, and the level of uncertainty in its value, is a key input into the economic models that drive cost-benefit analyses and estimates of the social cost of carbon.

Variations in climate can be caused by external forcing, such as solar variations, volcanic eruptions or changes in atmospheric composition such as an increase in CO2. Climate can also change owing to internal processes within the climate system (internal variability). The best known example of internal climate variability is El Nino/La Nina. Modes of decadal to centennial to millennial internal variability arise from the slow circulations in the oceans. As such, the ocean serves as a ‘fly wheel’ on the climate system, storing and releasing heat on long timescales and acting to stabilize the climate. As a result of the time lags and storage of heat in the ocean, the climate system is never in equilibrium.

The combination of uncertainty in the transient climate response (sensitivity) and the uncertainties in the magnitude and phasing of the major modes in natural internal variability preclude an unambiguous separation of externally forced climate variations from natural internal climate variability. If the climate sensitivity is on the low end of the range of estimates, and natural internal variability is on the strong side of the distribution of climate models, different conclusions are drawn about the relative importance of human causes to the 20th century warming.

Figure 5.1. Comparative dynamics of the World Fuel Consumption (WFC) and Global Surface Air Temperature Anomaly (ΔT), 1861-2000. The thin dashed line represents annual ΔT, the bold line—its 13-year smoothing, and the line constructed from rectangles—WFC (in millions of tons of nominal fuel) (Klyashtorin and Lyubushin, 2003). Source: Frolov et al. 2009

Anthropogenic (human-caused) climate change is a theory in which the basic mechanism is well understood, but whose potential magnitude is highly uncertain.

What does the preceding analysis imply for IPCC’s ‘extremely likely’ attribution of anthropogenically caused warming since 1950? Climate models infer that all of the warming since 1950 can be attributed to humans. However, there have been large magnitude variations in global/hemispheric climate on timescales of 30 years, which are the same duration as the late 20th century warming. The IPCC does not have convincing explanations for previous 30 year periods in the 20th century, notably the warming 1910-1945 and the grand hiatus 1945-1975. Further, there is a secular warming trend at least since 1800 (and possibly as long as 400 years) that cannot be explained by CO2, and is only partly explained by volcanic eruptions.

CO2 relation to Temperature is Inconsistent.

Summary

There is growing evidence that climate models are running too hot and that climate sensitivity to CO2 is on the lower end of the range provided by the IPCC. Nevertheless, these lower values of climate sensitivity are not accounted for in IPCC climate model projections of temperature at the end of the 21st century or in estimates of the impact on temperatures of reducing CO2 emissions.

The climate modeling community has been focused on the response of the climate to increased human caused emissions, and the policy community accepts (either explicitly or implicitly) the results of the 21st century GCM simulations as actual predictions. Hence we don’t have a good understanding of the relative climate impacts of the above (natural factors) or their potential impacts on the evolution of the 21st century climate.

Footnote:

There are a series of posts here which apply reality filters to assess climate models.  The first was Temperatures According to Climate Models where both hindcasting and forecasting were seen to be flawed.

Others in the Series are:

Sea Level Rise: Just the Facts

Data vs. Models #1: Arctic Warming

Data vs. Models #2: Droughts and Floods

Data vs. Models #3: Disasters

Data vs. Models #4: Climates Changing

Climate Medicine

Climates Don’t Start Wars, People Do


Beware getting sucked into any model, climate or otherwise.

 

Nobel Prize for World’s Worst Climate Model

Patrick J. Michaels reports at Real Clear Policy Nobel Prize Awarded for the Worst Climate Model. Excerpts in italics with my bolds and added images.

Given the persistent headlines about climate change over the years, it’s surprising how long it took the Nobel Committee to award the Physics prize to a climate modeler, which finally occurred earlier this month.

Indeed, Syukuro Manabe has been a pioneer in the development of so-called general circulation climate models (GCMs) and more comprehensive Earth System Models (ESMs). According to the Committee, Manabe was awarded the prize “For the physical modelling of the earth’s climate, quantifying variability, and reliably predicting global warming.”

What Manabe did was to modify early global weather forecasting models, adapting them to long-term increases in human emissions of carbon dioxide that alter the atmosphere’s internal energy balance, resulting in a general warming of surface temperatures, along with a much larger warming of temperatures above the surface over the earth’s vast tropics.

Unlike some climate modelers, such as NASA’s James Hansen (who lit the bonfire of the greenhouse vanities in 1988), Manabe is hardly a publicity hound. And while politics clearly influences it (see Al Gore’s 2007 Prize), the Nobel Committee also respects primacy, as Manabe’s model was the first comprehensive GCM. He produced it at the National Oceanic and Atmospheric Administration’s Geophysical Fluid Dynamics Laboratory (GFDL) in Princeton NJ. The seminal papers were published in 1975 and 1980.

And, after many modifications and renditions, it is also the most incorrect of all the world’s GCMs at altitude over the vast tropics of the planet.

Getting the tropical temperatures right is critical. The vast majority of life-giving moisture that falls over the world’s productive midlatitude agrosystems originates as evaporation from the tropical oceans.

The major determinant of how much moisture is wafted into our region is the vertical distribution of tropical temperature. When the contrast is great, with cold temperatures aloft compared to the normally hot surface, that surface air is buoyant and ascends, ultimately transferring moisture to the temperate zones. When the contrast is less, the opposite occurs, and less moisture enters the atmosphere.

Every GCM or ESM predicts that several miles above the tropical surface should be a “hot spot,” where there is much more warming caused by carbon dioxide emissions than at the surface. If this is improperly forecast, then subsequent forecasts of rainfall over the world’s major agricultural regions will be unreliable.

That in turn will affect forecasts of surface temperature. Everyone knows a wet surface heats up (and cools down) slower than a dry one (see: deserts), so getting the moisture input right is critical.

Following Manabe, vast numbers of modelling centers popped up, mushrooms fertilized by public — and only public — money.

Every six years or so, the U.S. Department of Energy collects all of these models, aggregating them into what they call Coupled Model Intercomparison Projects (CMIPs). These serve as the bases for the various “scientific assessments” of climate change produced by the U.N.’s Intergovernmental Panel on Climate Change (IPCC) or the U.S. “National Assessments” of climate.

Figure 8: Warming in the tropical troposphere according to the CMIP6 models. Trends 1979–2014 (except the rightmost model, which is to 2007), for 20°N–20°S, 300–200 hPa. John Christy (2019)

In 2017, University of Alabama’s John Christy, along with Richard McNider, published a paper that, among other things, examined the 25 applicable families of CMIP-5 models, comparing their performance to what’s been observed in the three-dimensional global tropics. Take a close look at Figure 3 from the paper, in the Asia-Pacific Journal of Atmospheric Sciences, and you’ll see that the model GFDL-CM3 is so bad that it is literally off the scale of the graph. [See Climate Models: Good, Bad and Ugly]

At its worst, the GFDL model is predicting approximately five times as much warming as has been observed since the upper-atmospheric data became comprehensive in 1979. This is the most evolved version of the model that won Manabe the Nobel.

In the CMIP-5 model suite, there is one, and only one, that works. It is the model INM-CM4 from the Russian Institute for Numerical Modelling, and the lead author is Evgeny Volodin. It seems that Volodin would be much more deserving of the Nobel for, in the words of the committee “reliably predicting global warming.”

Might this have something to do with the fact that INM-CM4 and its successor models have less predicted warming than all of the other models?

Patrick J. Michaels is a senior fellow working on energy and environment issues at the Competitive Enterprise Institute and author of “Scientocracy: The Tangled Web of Public Science and Public Policy.”

How Do We Know Humans Cause Climate Change?

Peter J. Wallison and Benjamin Zycher examine the evidence in their Law & Liberty article What We Really Know About Climate Change.  Excerpts in italics with my bolds and added images.

The assumption that humans are the single most significant cause of climate change is unsupported by the available science.

The Sixth Assessment Report (AR6) of the Intergovernmental Panel on Climate Change (IPCC) continues a long history of alarmist predictions with the deeply dubious statement that human-caused climate change has now become “irreversible.” President Biden and many others have called climate change an “existential threat” to humanity; and Biden claimed in his inaugural address to have heard from the Earth itself “a cry for survival.”

Hurricane Ida also has brought new claims about the dangers of climate change, but those assertions are inconsistent with the satellite record on tropical cyclones, which shows no trend since the early 1970s.

Yet the headline on the front page of the New York Times of August 12, 2021 was: “Greek Island Burns in a Sign of Crises to Come.” The accompanying article, continuing the multi-year effort of that newspaper to spread fears about climate change unsupported by evidence, argued that this was “another inevitable episode of Europe’s extreme weather [caused] by the man-made climate change that scientists have now concluded is irreversible.”

Almost every word in that sentence is either false or seriously misleading, continuing a multi-decade campaign of apocalyptic warnings about the effects of greenhouse gas emissions. Data on centuries and millennia of climate phenomena, constructed by scientists over many years around the world, show that the severe weather that the Times attributes to “man-made climate change” is consistent with the normal weather patterns and variability displayed in both the formal records and such proxy data as ice cores. In fact, there is little evidence that “extreme weather” events have become more frequent since 1850, the approximate end of the little ice age.

Increasing atmospheric concentrations of greenhouse gases have yielded detectable effects, but “scientists” in general have not, as the Times falsely stated, concluded that “extreme weather” is now “irreversible.” The statement itself comes from the “Summary for Policymakers” published as part of the most recent IPCC study of climate change; it is deeply problematic given the analyses and data provided in the scientific chapter (“The Physical Science Basis”) of the report to which the Times referred. Scientists disagree sharply about the significance of climate change and the analytic tools used to evaluate it, let alone whether it is “irreversible.”

There has been no upward trend in the number of “hot” days between 1895 and 2017; 11 of the 12 years with the highest number of such days occurred before 1960. Since 2005, NOAA has maintained the U.S. Climate Reference Network, comprising 114 meticulously maintained temperature stations spaced more or less uniformly across the lower 48 states, along with 21 stations in Alaska and two stations in Hawaii. They are placed to avoid heat-island effects and other such distortions as much as possible. The reported data show no increase in average temperatures over the available 2005-2020 period. In addition, a recent reconstruction of global temperatures over the past 1 million years—created using data from ice-sheet formations—shows that there is nothing unusual about the current warm period.

These alarmist predictions almost always are based upon climate models that have proven poor at reconstructing the past and ongoing temperature record. For example, the Times article implies that wildfires will increase in the future as the earth grows hotter. But there has been no trend in the number of U.S. wildfires in the 35 years since 1985, and global acreage burned has declined over past decades.

Unable to demonstrate that observed climate trends are due to human-caused (anthropogenic) climate change—or even that these events are particularly unusual or concerning—climate catastrophists will often turn to dire predictions about prospective climate phenomena. The problem with such predictions is that they are almost always generated by climate models driven by highly complex sets of assumptions about which there is significant dispute. It goes without saying that the predictions of models that cannot reconstruct what has happened in the past should not be given heavy weight in terms of predictions about the future, but that is exactly what many analysts are doing.

Extreme weather occurrences are likewise used as evidence of an ongoing climate crisis, but again, a study of the available data undercuts that assessment. U.S. tornado activity shows either no increase or a downward trend since 1954. Data on tropical storms, hurricanes, and accumulated cyclone energy (a wind-speed index measuring the overall strength of a given hurricane season) reveal little change since satellite measurements of the phenomena began in the early 1970s. The Palmer Drought Severity Index shows no trend since 1895.

Rising sea levels are another frequently cited example of the impending climate crisis. And yet sea levels have been rising since at least the mid-19th century, a phenomenon unlikely to have been caused only by human activity. The earth has been warming due to both natural and anthropogenic causes, resulting in some melting of sea ice, and a thermal expansion of sea water; the degree to which rising sea level has been caused by man is unknown. And the current rate of sea-level rise as measured by the satellites is 3.3 millimeters per year, or about 13 inches over the course of a century. Will that yield a crisis?
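
The conversion is easy to check (a trivial sketch of my own):

```python
rate_mm_per_yr = 3.3                                # satellite-measured rate of sea-level rise
rise_per_century_in = rate_mm_per_yr * 100 / 25.4   # 100 years, 25.4 mm per inch
print(round(rise_per_century_in, 1), "inches per century")   # ~13 inches
```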

The data reported by the National Oceanic and Atmospheric Administration show that temperatures have risen and fallen since 1850, with an overall upward movement of about 1 degree C for 1850 through 2020. The 1910-1945 warming—which was very roughly the same magnitude as that observed from the mid-1970s through about 2000—is of particular interest in that it cannot be explained by higher greenhouse-gas concentrations, which increased from 300 parts per million to 310 parts per million over that period. This reinforces the commonsense observation that temperatures result from some combination of natural and anthropogenic influences, but alarmist reports seldom if ever suggest that there is any cause of warming other than the latter.
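
A back-of-the-envelope sketch of my own, using the standard simplified CO2 forcing approximation (not part of the article), shows why a 300 to 310 ppm change is hard-pressed to explain that warming:

```python
import math

# Simplified CO2 forcing approximation: dF ~ 5.35 * ln(C/C0) W/m2
dF_1910_1945 = 5.35 * math.log(310.0 / 300.0)
dF_doubling  = 5.35 * math.log(2.0)

print(round(dF_1910_1945, 2), "W/m2")   # ~0.18 W/m2 for 300 -> 310 ppm
print(round(dF_doubling, 2), "W/m2")    # ~3.71 W/m2 for a full doubling, for comparison
```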

Changes in the extents of Arctic and Antarctic sea ice also raise questions about the importance of moderate warming. Since 1979, Arctic sea ice has declined relative to the 30-year average (again, the degree to which this is the result of anthropogenic factors is not known). Meanwhile, Antarctic sea ice has been growing relative to the 30-year average, and the global sea-ice total has remained roughly constant since 1979.

It is important to recognize that the assumption of many politicians, environmental groups, media sources like the New York Times, and no small number of scientist-activists—that humans are the single most significant cause of climate change—is unsupported by the available science. Such an absence of evidence should produce humility among this group, but it seems to foster more alarmism. At the very least, it should make Americans think twice before embracing radical solutions to a supposed problem that may not be important.

Spatial pattern of trends in Gross Primary Production (1982- 2015). Source: Sun et al. 2018.

Much of the mainstream press has touted, loudly, the alarmist conclusions of the latest IPCC report—amusingly, the IPCC AR6 provides a “Headlines Statements” link to assist alarmist politicians and media—but that reporting has obscured its problems, very real and very large. The report concedes that the mainstream climate models on the whole overstate warming for any given set of parameters and assumptions, but it then relies on those models for predictions about the future.

Figure 8: Warming in the tropical troposphere according to the CMIP6 models. Trends 1979–2014 (except the rightmost model, which is to 2007), for 20°N–20°S, 300–200 hPa. John Christy (2019)

The report concedes as well that the most extreme among its alternative scenarios— “RCP8.5”—has a likelihood that “is considered low,” but then uses RCP8.5 more than any other scenario. The report pretends to measure separately the magnitudes of natural and anthropogenic warming, but in reality does not do so (see Figure SPM.2); instead, it assumes away natural influences, which are asserted to have an average effect of zero. The IPCC models in summary have only two important variables: greenhouse gases, which have a warming effect, and sulfate aerosols, which have a cooling effect. That is why the IPCC models cannot explain the warming observed from 1910-1945. IPCC assumes, but does not attempt to model, a zero effect of sunlight variation, volcanic eruptions, and other such natural phenomena.

The fact is that we don’t understand all the elements in the complex climate system—the effects of clouds alone are understood poorly—and it is beyond irresponsible to adopt policies on the basis of flawed model projections that would slow economic growth in the US and elsewhere. That is a senseless and dangerous policy, which will only hurt people around the world who are striving to create better lives for themselves and their families.

 

 

IPCC Scenarios Ensure Unreal Climate Forecasts

 

Figure 5. CO2 emissions (a) and concentrations (b), anthropogenic radiative forcing (c), and global mean temperature change (d) for the three long-term extensions. As in Fig. 3, concentration, forcing, and temperature outcomes are calculated with a simple climate model (MAGICC version 6.8.01 BETA; Meinshausen et al., 2011a, b). Outcomes for the CMIP5 versions of the long-term extensions of RCP2.6 and RCP8.5 (Meinshausen et al., 2011c), as calculated with the same model, are shown for comparison.

Roger Pielke Jr. has a new paper at Science Direct Distorting the view of our climate future: The misuse and abuse of climate pathways and scenarios.  Excerpt in italics with my bolds.

Abstract

Climate science research and assessments under the umbrella of the Intergovernmental Panel on Climate Change (IPCC) have misused scenarios for more than a decade. Symptoms of misuse have included the treatment of an unrealistic, extreme scenario as the world’s most likely future in the absence of climate policy and the illogical comparison of climate projections across inconsistent global development trajectories.

Reasons why such misuse arose include (a) competing demands for scenarios from users in diverse academic disciplines that ultimately conflated exploratory and policy relevant pathways, (b) the evolving role of the IPCC – which extended its mandate in a way that creates an inter-relationship between literature assessment and literature coordination, (c) unforeseen consequences of employing a temporary approach to scenario development, (d) maintaining research practices that normalize careless use of scenarios, and (e) the inherent complexity and technicality of scenarios in model-based research and in support of policy.

Consequently, much of the climate research community is presently off-track from scientific coherence and policy-relevance.

Attempts to address scenario misuse within the community have thus far not worked. The result has been the widespread production of myopic or misleading perspectives on future climate change and climate policy. Until reform is implemented, we can expect the production of such perspectives to continue, threatening the overall credibility of the IPCC and associated climate research. However, because many aspects of climate change discourse are contingent on scenarios, there is considerable momentum that will make such a course correction difficult and contested – even as efforts to improve scenarios have informed research that will be included in the IPCC 6th Assessment.

Discussion of How Imaginary Scenarios Spoil Attempts to Envision Climate Futures

The article above is paywalled, but a previous post reprinted below goes into the background of the role of scenarios in climate modelling, and demonstrates the effects by referring to results from the most realistic model, INMCM5.

Roger Pielke Jr. explains that climate models projections are unreliable because they are based on scenarios no longer bounded by reality.  His article is The Unstoppable Momentum of Outdated Science.  Excerpts in italics with my bolds.

Much of climate research is focused on implausible scenarios of the future, but implementing a course correction will be difficult.

In 2020, climate research finds itself in a similar situation to that of breast cancer research in 2007. Evidence indicates the scenarios of the future to 2100 that are at the focus of much of climate research have already diverged from the real world and thus offer a poor basis for projecting policy-relevant variables like economic growth and carbon dioxide emissions. A course-correction is needed.

In a new paper of ours just out in Environmental Research Letters we perform the most rigorous evaluation to date of how key variables in climate scenarios compare with data from the real world (specifically, we look at population, economic growth, energy intensity of economic growth and carbon intensity of energy consumption). We also look at how these variables might evolve in the near-term to 2040.

We find that the most commonly-used scenarios in climate research have already diverged significantly from the real world, and that divergence is going to only get larger in coming decades. You can see this visualized in the graph above, which shows carbon dioxide emissions from fossil fuels from 2005, when many scenarios begin, to 2045. The graph shows emissions trajectories projected by the most commonly used climate scenarios (called SSP5-8.5 and RCP8.5, with labels on the right vertical axis), along with other scenario trajectories. Actual emissions to date (dark purple curve) and those of near-term energy outlooks (labeled as EIA, BP and ExxonMobil) all can be found at the very low end of the scenario range, and far below the most commonly used scenarios.

Our paper goes into the technical details, but in short, an important reason for the lower-than-projected carbon dioxide emissions is that economic growth has been slower than expected across the scenarios, and rather than seeing coal use expand dramatically around the world, it has actually declined in many regions.

It is even conceivable, if not likely, that in 2019 the world has passed “peak carbon dioxide emissions.” Crucially, the projections in the figure above are pre-Covid19, which means that actual emissions 2020 to 2045 will be even less than was projected in 2019.

While it is excellent news that the broader community is beginning to realize that scenarios are increasingly outdated, voluminous amounts of research have been and continue to be produced based on the outdated scenarios. For instance, O’Neill and colleagues find that “many studies” use scenarios that are “unlikely.” In fact, in their literature review such “unlikely” scenarios comprise more than 20% of all scenario applications from 2014 to 2019. They also call for “re-examining the assumptions underlying” the high-end emissions scenarios that are favored in physical climate research, impact studies and economic and policy analyses.

Make no mistake. The momentum of outdated science is powerful. Recognizing that a considerable amount of climate science is outdated is, in the words of the late Steve Rayner, “uncomfortable knowledge” — that knowledge which challenges widely-held preconceptions. According to Rayner, in such a context we should expect to see reactions to uncomfortable knowledge that include:

  • denial (that scenarios are off track),
  • dismissal (the scenarios are off track, but it doesn’t matter),
  • diversion (the scenarios are off track, but saying so advances the agenda of those opposed to action) and,
  • displacement (the scenarios are off track but there are perhaps compensating errors elsewhere within scenario assumptions).

Such responses reinforce the momentum of outdated science and make it more difficult to implement a much needed course correction.

Responding to climate change is critically important. So too is upholding the integrity of the science which helps to inform those responses. Identification of a growing divergence between scenarios and the real-world should be seen as an opportunity — to improve both science and policy related to climate — but also to develop new ways for science to be more nimble in getting back on track when research is found to be outdated.

[A previous post is reprinted below since it demonstrates how the scenarios drive forecasting by CMIP6 models, including the example of the best-performing model: INMCM5]

Background from Previous Post: Best Climate Model: Mild Warming Forecasted

Links are provided at the end to previous posts describing climate models 4 and 5 from the Institute of Numerical Mathematics in Moscow, Russia.  Now we have forecasts for the 21st Century published for INM-CM5 at Izvestiya, Atmospheric and Oceanic Physics, volume 56, pages 218–228 (July 7, 2020). The article is Simulation of Possible Future Climate Changes in the 21st Century in the INM-CM5 Climate Model by E. M. Volodin & A. S. Gritsun.  Excerpts are in italics with my bolds, along with a contextual comment.

Abstract

Climate changes in 2015–2100 have been simulated with the use of the INM-CM5 climate model following four scenarios: SSP1-2.6, SSP2-4.5, and SSP5-8.5 (single model runs) and SSP3-7.0 (an ensemble of five model runs). Changes in the global mean temperature and spatial distribution of temperature and precipitation are analyzed. The global warming predicted by the INM-CM5 model in the scenarios considered is smaller than that in other CMIP6 models. It is shown that the temperature in the hottest summer month can rise more quickly than the seasonal mean temperature in Russia. An analysis of a change in Arctic sea ice shows no complete Arctic summer ice melting in the 21st century under any model scenario. Changes in the meridional stream function in atmosphere and ocean are studied.

Overview

The climate is understood as the totality of statistical characteristics of the instantaneous states of the atmosphere, ocean, and other climate system components averaged over a long time period.

Therefore, we restrict ourselves to an analysis of some of the most important climate parameters, such as average temperature and precipitation. A more detailed analysis of individual aspects of climate change, such as changes in extreme weather and climate situations, will be the subject of another work. This study is not aimed at a full comparison with the results of other climate models, where calculations follow the same scenarios, since the results of other models have not yet been published in peer reviewed journals by the time of this writing.

The INM-CM5 climate model [1, 2] is used for the numerical experiments. It differs from the previous version, INMCM4, which was also used for experiments on reproducing climate change in the 21st century [3], in the following:

  • an aerosol block has been added to the model, which allows inputting anthropogenic emissions of aerosols and their precursors;
  • the concentrations and optical properties of aerosols are calculated, but not specified, like in the previous version;
  • the parametrizations of cloud formation and condensation are changed in the atmospheric block;
  • the upper boundary in the atmospheric block is raised from 30 to 60 km;
  • the horizontal resolution in the ocean block is doubled along each coordinate; and,
  • the software related to adaptation to massively parallel computers is improved, which allows the effective use of a larger number of compute cores.

The model resolution in the atmospheric and aerosol blocks is 2° × 1.5° in longitude and latitude and 73 levels and, in the ocean, 0.5° × 0.25° and 40 levels. The calculations were performed at supercomputers of the Joint Supercomputer Center, Russian Academy of Sciences, and Moscow State University, with the use of 360 to 720 cores. The model calculated 6–10 years per 24 h in the above configuration.
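
To get a feel for the computational scale, here is a rough count of my own, assuming regular latitude-longitude grids at the stated resolutions:

```python
# Approximate cell counts implied by the stated resolutions (regular lat-lon grids assumed)
atm_cells   = (360 / 2.0) * (180 / 1.5) * 73      # 2 deg x 1.5 deg, 73 levels
ocean_cells = (360 / 0.5) * (180 / 0.25) * 40     # 0.5 deg x 0.25 deg, 40 levels

print(f"atmosphere: ~{atm_cells / 1e6:.1f} million cells")   # ~1.6 million
print(f"ocean:      ~{ocean_cells / 1e6:.1f} million cells") # ~20.7 million
```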

Four scenarios were used to model the future climate: SSP1-2.6, SSP2-4.5, SSP3-7.0, and SSP5-8.5. The scenarios are described in [4]. The figure after the abbreviation SSP (Shared Socioeconomic Pathway) is the number of the societal development pathway (see the values in [4]). The number after the dash means the radiative forcing (W m–2) in 2100 compared to the preindustrial level. Thus, the SSP1-2.6 scenario is the most moderate and assumes rapid actions which sharply limit and then almost completely stop anthropogenic emissions. Within this scenario, greenhouse gas concentrations are maximal in the middle of the 21st century and then slightly decrease by the end of the century. The SSP5-8.5 scenario is the warmest and implies the fastest climate change. The scenarios are recommended for use in the project on comparing CMIP6 (Coupled Model Intercomparison Project, Phase 6, [5]) climate models.  Each scenario includes the time series of:

  • carbon dioxide, methane, nitrous oxide, and ozone concentrations;
  • emissions of anthropogenic aerosols and their precursors;
  • the concentration of volcanic sulfate aerosol; and
  • the solar constant. 

One model experiment was carried out for each of the above scenarios. It began at the beginning of 2015 and ended at the end of 2100. The initial state was taken from the so-called historical experiment with the same model, where climate changes were simulated for 1850–2014, and all impacts on the climate system were set according to observations. The results of the ensemble of historical experiments with the model under consideration are given in [6, 7]. For the SSP3-7.0 scenario, five model runs were performed, differing in the initial data taken from different historical experiments. The ensemble of numerical experiments is required to increase the statistical confidence of conclusions about climate changes.

[My Contextual Comment inserted Prior to Consideration of Results]

Firstly, the INM-CM5 historical experiment can be read in detail by following a linked post (see Resources at the end), but this graphic summarizes the model hindcasting of past temperatures (GMT) compared to HadCrutv4.

Figure 1. The 5-year mean GMST (K) anomaly with respect to 1850–1899 for HadCRUTv4 (thick solid black); model mean (thick solid red). Dashed thin lines represent data from individual model runs: 1 – purple, 2 – dark blue, 3 – blue, 4 – green, 5 – yellow, 6 – orange, 7 – magenta. In this and the next figures numbers on the time axis indicate the first year of the 5-year mean.

Secondly, the scenarios are important to understand since they stipulate data inputs the model must accept as conditions for producing forecasts according to a particular scenario (set of assumptions).  The document with complete details referenced as [4] is The Scenario Model Intercomparison Project (ScenarioMIP) for CMIP6.

All the details are written there but one diagram suggests the implications for the results described below.

Figure 5. CO2 emissions (a) and concentrations (b), anthropogenic radiative forcing (c), and global mean temperature change (d) for the three long-term extensions. As in Fig. 3, concentration, forcing, and temperature outcomes are calculated with a simple climate model (MAGICC version 6.8.01 BETA; Meinshausen et al., 2011a, b). Outcomes for the CMIP5 versions of the long-term extensions of RCP2.6 and RCP8.5 (Meinshausen et al., 2011c), as calculated with the same model, are shown for comparison.

As shown, SSP1-2.6 is virtually the same scenario as the former RCP2.6, while SSP5-8.5 is virtually the same as RCP8.5, the wildly improbable scenario (impossible according to some analysts).  Note that fossil fuel CO2 emissions are assumed to quadruple in the next 80 years, with atmospheric CO2 rising from 400 to 1000 ppm (+150%).  Bear these suppositions in mind when considering the INMCM5 forecasts below.
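
Both numbers are easy to check (my own sketch, using the standard simplified CO2 forcing approximation; note the scenario’s nominal 8.5 W/m2 is measured from the preindustrial level and includes non-CO2 gases, so it is not directly comparable to the CO2-only figure below):

```python
import math

c0, c = 400.0, 1000.0                      # ppm, as assumed above for SSP5-8.5
pct_increase = (c - c0) / c0 * 100
dF_co2 = 5.35 * math.log(c / c0)           # simplified CO2-only forcing relative to 400 ppm

print(f"CO2 increase: +{pct_increase:.0f}%")               # +150%
print(f"CO2-only forcing vs today: ~{dF_co2:.1f} W/m2")    # ~4.9 W/m2
```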

Results [Continuing From Volodin and Gritsun]

Fig. 1. Changes in the global average surface temperature (K) with respect to the pre-industrial level in experiments according to the SSP1-2.6 (triangles), SSP2-4.5 (squares), SSP3-7.0 (crosses), and SSP5-8.5 (circles) scenarios.

Let us describe some simulation results of climate change in the 21st century. Figure 1 shows the change in the globally averaged surface air temperature with respect to the data of the corresponding historical experiment for 1850–1899. In the warmest SSP5-8.5 scenario (circles), the temperature rises by more than 4° by the end of the 21st century. In the SSP3-7.0 scenario (crosses), different members of the ensemble show warming by 3.4°–3.6°. In the SSP2-4.5 scenario (squares), the temperature increases by about 2.4°. According to the SSP1-2.6 scenario (triangles), the maximal warming by ~1.7° occurs in the middle of the 21st century, and the temperature exceeds the preindustrial temperature by 1.4° by the end of the century.

[My comment: Note that the vertical scale starts with +1.0C as was seen in the historical experiment. Thus an anomaly of 1.4C by 2100 is an increase of only 0.4C, while the SSP2-4.5 result adds 1.4C to the present]. 

The results for other CMIP6 models have not yet been published in peer-reviewed journals. However, according to the preliminary analysis (see, e.g., https://cmip6workshop19.sciencesconf.org/data/Session1_PosterSlides.pdf, p. 29), the INM-CM5 model shows the lowest temperature increase among the CMIP6 models considered for all the scenarios due to the minimal equilibrium sensitivity to the CO2 concentration doubling, which is ~2.1° for the current model version, like for the previous version, despite new condensation and cloud formation blocks. [For more on CMIP6 comparisons see post Climate Models: Good, Bad and Ugly]

Fig. 2. Differences between the annual average surface air temperatures (K) in 2071–2100 and 1981–2010 for the (a) SSP5-8.5 and (b) SSP1-2.6 scenarios.

The changes in the surface air temperature are similar for all scenarios; therefore, we analyze the difference between temperatures in 2071–2100 and 1981–2010 under the SSP5-8.5 and SSP1-2.6 scenarios (Fig. 2). The warming is maximal in the Arctic; it reaches 10° and 3°, respectively. Other features mainly correspond to CMIP5 data [8], including the INMCM4 model, which participates in the comparison. The warming on the continents of the Northern Hemisphere is about 2 times higher than the mean, and the warming in the Southern Hemisphere is noticeably less than in the Northern Hemisphere. The land surface is getting warmer than the ocean surface in all the scenarios except SSP1-2.6, because the greenhouse effect is expected to weaken in the second half of the 21st century in this scenario, and the higher heat capacity of the ocean prevents it from cooling as quickly as the land.

The changes in precipitation in December–February and June–August for the SSP3-7.0 scenario averaged over five members of the ensemble are shown in Fig. 4. All members of the ensemble show an increase in precipitation in the winter in a significant part of middle and high latitudes. In summer, the border between the increase and decrease in precipitation in Eurasia passes mainly around or to the north of 60°. In southern and central Europe, all members of the ensemble show a decrease in precipitation. Precipitation also increases in the region of the summer Asian monsoon, over the equatorial Pacific, due to a decrease in the upwelling and an increase in ocean surface temperature (OST). The distribution of changes in precipitation mainly corresponds to that given in [6, Fig. 12.22] for all CMIP5 models.

The change in the Arctic sea ice area in September, when the ocean ice cover is minimal over the year, is of interest. Figure 5 shows the sea ice area in September 2015–2019 to be 4–6 million km2 in all experiments, which corresponds to the estimate from observations in [11]. The Arctic sea ice does not completely melt in any of the experiments and under any scenario. However, according to [8, Figs. 12.28 and 12.31], many models participating in CMIP6, where the Arctic ice area is similar to that observed at the beginning of the 21st century, show the complete absence of ice by the end of the 21st century, especially under the RCP8.5 scenario, which is similar to SSP5-8.5.

The reason for these differences is the lower equilibrium sensitivity of the INM-CM5 model.

Note that the scatter of data between experiments under different scenarios in the first half of the 21st century is approximately the same as between different members of the ensemble under the SSP3-7.0 scenario and becomes larger only after 2070. The sea ice area values are ordered in accordance with the radiative forcing of the scenarios only after 2090. This indicates the large contribution of natural climate variability to the Arctic ice area. In the SSP1-2.6 experiment, the Arctic ice area at the end of the 21st century approximately corresponds to its area at the beginning of the experiment.
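[For readers who want to see what the September ice-area metric actually is, here is a minimal sketch of how such a diagnostic is commonly computed from a gridded sea-ice concentration field. The array names, the toy numbers and the 15% cut-off for "extent" are my illustrative assumptions, not details taken from the paper.]

```python
import numpy as np

def sea_ice_area(concentration, cell_area_km2):
    """Sea-ice area (million km2): concentration-weighted sum of cell areas."""
    return np.nansum(concentration * cell_area_km2) / 1e6

def sea_ice_extent(concentration, cell_area_km2, threshold=0.15):
    """Sea-ice extent (million km2): total area of cells above a concentration threshold."""
    return np.nansum(np.where(concentration >= threshold, cell_area_km2, 0.0)) / 1e6

# Toy field: 80% ice cover over 10,000 cells of 600 km2 each (6 million km2 of ocean)
conc = np.full((100, 100), 0.8)
areas = np.full((100, 100), 600.0)
print(sea_ice_area(conc, areas), "million km2 area")      # 4.8, inside the 4-6 range above
print(sea_ice_extent(conc, areas), "million km2 extent")  # 6.0
```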

Climate changes can also be traced in the ocean circulation. Figure 6 shows the change in the 5-year averaged intensity of the Atlantic meridional circulation, defined as the maximum of the meridional streamfunction at 32° N. All experiments show a decrease in the intensity of the meridional circulation in the 21st century, with natural fluctuations superimposed on this decrease. The decrease is about 4.5–5 Sv for the SSP5-8.5 scenario, which is close to the values obtained in the CMIP5 models [8, Fig. 12.35] under the RCP8.5 scenario. Under milder scenarios, the weakening of the meridional circulation is less pronounced. The reason for this weakening of the meridional circulation in the Atlantic, as far as we know, is not yet fully understood.
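[The AMOC "intensity" defined in that paragraph (the maximum of the meridional streamfunction at 32° N) is easy to illustrate. The sketch below assumes you already have the overturning streamfunction on a depth-latitude grid in Sverdrups; the array names, shapes and toy values are mine, not the paper's.]

```python
import numpy as np

def amoc_index(psi_sv, lats, target_lat=32.0):
    """Maximum over depth of the meridional overturning streamfunction (Sv)
    at the grid latitude closest to target_lat.

    psi_sv : 2-D array (n_depth, n_lat) of streamfunction values in Sverdrups
    lats   : 1-D array of latitudes (degrees N) matching the second axis
    """
    j = int(np.argmin(np.abs(lats - target_lat)))
    return float(np.max(psi_sv[:, j]))

# Toy streamfunction peaking at ~17 Sv near 1000 m depth and 35 N
lats = np.linspace(-30.0, 70.0, 101)
depths = np.linspace(0.0, 5000.0, 51)
L, D = np.meshgrid(lats, depths)
psi = 17.0 * np.exp(-((L - 35.0) / 25.0) ** 2) * np.exp(-((D - 1000.0) / 1200.0) ** 2)

print(round(amoc_index(psi, lats), 1), "Sv")  # ~16.8; a 4.5-5 Sv decline would show up directly here
```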

Conclusion

Numerical experiments have been carried out to reproduce climate changes in the 21st century according to four scenarios of the CMIP6 program [4, 5], including an ensemble of five experiments under the SSP3-7.0 scenario. The changes in the global mean surface temperature are analyzed. It is shown that the global warming predicted by the INM-CM5 model is the lowest among the currently published CMIP6 model data. The geographical distribution of changes in the temperature and precipitation is considered. According to the model, the temperature in the warmest summer month will increase faster than the summer average temperature in Russia.

None of the experiments show the complete melting of the Arctic ice cover by the end of the 21st century. Some changes in the ocean dynamics, including the flow velocity and the meridional stream function, are analyzed. The changes in the Hadley and Ferrel circulation in the atmosphere are considered.

Resources:

Climate Models: Good, Bad and Ugly

2018 Update: Best Climate Model INMCM5

Temperatures According to Climate Models

Climate Models Flagged for Running Out of Bounds

Roger Pielke Jr. explains that climate model projections are unreliable because they are based on scenarios no longer bounded by reality.  His article is The Unstoppable Momentum of Outdated Science.  Excerpts in italics with my bolds.

Much of climate research is focused on implausible scenarios of the future, but implementing a course correction will be difficult.

In 2020, climate research finds itself in a similar situation to that of breast cancer research in 2007. Evidence indicates the scenarios of the future to 2100 that are at the focus of much of climate research have already diverged from the real world and thus offer a poor basis for projecting policy-relevant variables like economic growth and carbon dioxide emissions. A course-correction is needed.

In a new paper of ours just out in Environmental Research Letters we perform the most rigorous evaluation to date of how key variables in climate scenarios compare with data from the real world (specifically, we look at population, economic growth, energy intensity of economic growth and carbon intensity of energy consumption). We also look at how these variables might evolve in the near-term to 2040.

We find that the most commonly-used scenarios in climate research have already diverged significantly from the real world, and that divergence is going to only get larger in coming decades. You can see this visualized in the graph above, which shows carbon dioxide emissions from fossil fuels from 2005, when many scenarios begin, to 2045. The graph shows emissions trajectories projected by the most commonly used climate scenarios (called SSP5-8.5 and RCP8.5, with labels on the right vertical axis), along with other scenario trajectories. Actual emissions to date (dark purple curve) and those of near-term energy outlooks (labeled as EIA, BP and ExxonMobil) all can be found at the very low end of the scenario range, and far below the most commonly used scenarios.

Our paper goes into the technical details, but in short, an important reason for the lower-than-projected carbon dioxide emissions is that economic growth has been slower than expected across the scenarios, and rather than seeing coal use expand dramatically around the world, it has actually declined in many regions.

It is even conceivable, if not likely, that in 2019 the world has passed “peak carbon dioxide emissions.” Crucially, the projections in the figure above are pre-Covid19, which means that actual emissions 2020 to 2045 will be even less than was projected in 2019.

While it is excellent news that the broader community is beginning to realize that scenarios are increasingly outdated, voluminous amounts of research have been and continue to be produced based on the outdated scenarios. For instance, O’Neill and colleagues find that “many studies” use scenarios that are “unlikely.” In fact, in their literature review such “unlikely” scenarios comprise more than 20% of all scenario applications from 2014 to 2019. They also call for “re-examining the assumptions underlying” the high-end emissions scenarios that are favored in physical climate research, impact studies and economic and policy analyses.

Make no mistake. The momentum of outdated science is powerful. Recognizing a considerable amount of climate science to be outdated is, in the words of the late Steve Rayner, “uncomfortable knowledge” — knowledge which challenges widely held preconceptions. According to Rayner, in such a context we should expect to see reactions to uncomfortable knowledge that include:

  • denial (that scenarios are off track),
  • dismissal (the scenarios are off track, but it doesn’t matter),
  • diversion (the scenarios are off track, but saying so advances the agenda of those opposed to action) and,
  • displacement (the scenarios are off track but there are perhaps compensating errors elsewhere within scenario assumptions).

Such responses reinforce the momentum of outdated science and make it more difficult to implement a much needed course correction.

Responding to climate change is critically important. So too is upholding the integrity of the science which helps to inform those responses. Identification of a growing divergence between scenarios and the real-world should be seen as an opportunity — to improve both science and policy related to climate — but also to develop new ways for science to be more nimble in getting back on track when research is found to be outdated.

[A previous post is reprinted below since it demonstrates how the scenarios drive forecasting by CMIP6 models, including the example of the best-performing model: INMCM5]

Background from Previous Post: Best Climate Model: Mild Warming Forecasted

Links are provided at the end to previous posts describing climate models 4 and 5 from the Institute of Numerical Mathematics in Moscow, Russia.  Now we have forecasts for the 21st Century published for INM-CM5 at Izvestiya, Atmospheric and Oceanic Physics, volume 56, pages 218–228 (July 7, 2020). The article is Simulation of Possible Future Climate Changes in the 21st Century in the INM-CM5 Climate Model by E. M. Volodin & A. S. Gritsun.  Excerpts are in italics with my bolds, along with a contextual comment.

Abstract

Climate changes in 2015–2100 have been simulated with the use of the INM-CM5 climate model following four scenarios: SSP1-2.6, SSP2-4.5, and SSP5-8.5 (single model runs) and SSP3-7.0 (an ensemble of five model runs). Changes in the global mean temperature and spatial distribution of temperature and precipitation are analyzed. The global warming predicted by the INM-CM5 model in the scenarios considered is smaller than that in other CMIP6 models. It is shown that the temperature in the hottest summer month can rise more quickly than the seasonal mean temperature in Russia. An analysis of a change in Arctic sea ice shows no complete Arctic summer ice melting in the 21st century under any model scenario. Changes in the meridional stream function in atmosphere and ocean are studied.

Overview

The climate is understood as the totality of statistical characteristics of the instantaneous states of the atmosphere, ocean, and other climate system components averaged over a long time period.

Therefore, we restrict ourselves to an analysis of some of the most important climate parameters, such as average temperature and precipitation. A more detailed analysis of individual aspects of climate change, such as changes in extreme weather and climate situations, will be the subject of another work. This study is not aimed at a full comparison with the results of other climate models whose calculations follow the same scenarios, since the results of other models had not yet been published in peer-reviewed journals at the time of this writing.

The INM-CM5 climate model [1, 2] is used for the numerical experiments. It differs from the previous version, INMCM4, which was also used for experiments on reproducing climate change in the 21st century [3], in the following:

  • an aerosol block has been added to the model, which allows inputting anthropogenic emissions of aerosols and their precursors;
  • the concentrations and optical properties of aerosols are calculated rather than specified, as they were in the previous version;
  • the parametrizations of cloud formation and condensation are changed in the atmospheric block;
  • the upper boundary in the atmospheric block is raised from 30 to 60 km;
  • the horizontal resolution in the ocean block is doubled along each coordinate; and,
  • the software related to adaptation to massively parallel computers is improved, which allows the effective use of a larger number of compute cores.

The model resolution in the atmospheric and aerosol blocks is 2° × 1.5° in longitude and latitude and 73 levels and, in the ocean, 0.5° × 0.25° and 40 levels. The calculations were performed at supercomputers of the Joint Supercomputer Center, Russian Academy of Sciences, and Moscow State University, with the use of 360 to 720 cores. The model calculated 6–10 years per 24 h in the above configuration.
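[As a quick sanity check on those resolution figures, the grid sizes they imply can be worked out directly. The short sketch below uses only the spacings and level counts quoted in the excerpt; global coverage is assumed.]

```python
# Grid-point counts implied by the stated resolutions (global coverage assumed)
atm_points = (360 / 2.0) * (180 / 1.5) * 73      # 2 x 1.5 degrees, 73 levels
ocn_points = (360 / 0.5) * (180 / 0.25) * 40     # 0.5 x 0.25 degrees, 40 levels

print(f"atmosphere/aerosol blocks: ~{atm_points / 1e6:.1f} million points")  # ~1.6 million
print(f"ocean block:               ~{ocn_points / 1e6:.1f} million points")  # ~20.7 million
```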

Four scenarios were used to model the future climate: SSP1-2.6, SSP2-4.5, SSP3-7.0, and SSP5-8.5. The scenarios are described in [4]. The figure after the abbreviation SSP (Shared Socioeconomic Pathway) is the number of the socioeconomic development pathway (see the values in [4]). The number after the dash is the radiative forcing (W m–2) in 2100 relative to the preindustrial level. Thus, the SSP1-2.6 scenario is the most moderate and assumes rapid actions which sharply limit and then almost completely stop anthropogenic emissions. Within this scenario, greenhouse gas concentrations are maximal in the middle of the 21st century and then slightly decrease by the end of the century. The SSP5-8.5 scenario is the warmest and implies the fastest climate change. The scenarios are recommended for use in the CMIP6 (Coupled Model Intercomparison Project, Phase 6, [5]) model comparison project.  Each scenario includes the time series of:

  • carbon dioxide, methane, nitrous oxide, and ozone concentrations;
  • emissions of anthropogenic aerosols and their precursors;
  • the concentration of volcanic sulfate aerosol; and
  • the solar constant. 

One model experiment was carried out for each of the above scenarios. It began at the beginning of 2015 and ended at the end of 2100. The initial state was taken from the so-called historical experiment with the same model, where climate changes were simulated for 1850–2014 and all impacts on the climate system were set according to observations. The results of the ensemble of historical experiments with the model under consideration are given in [6, 7]. For the SSP3-7.0 scenario, five model runs were performed, differing in the initial data taken from different historical experiments. The ensemble of numerical experiments is required to increase the statistical confidence of conclusions about climate changes.

[My Contextual Comment inserted Prior to Consideration of Results]

Firstly, the INM-CM5 historical experiment can be read in detail by following a linked post (see Resources at the end), but this graphic summarizes the model hindcasting of past temperatures (GMT) compared to HadCRUTv4.

Figure 1. The 5-year mean GMST (K) anomaly with respect to 1850–1899 for HadCRUTv4 (thick solid black); model mean (thick solid red). Dashed thin lines represent data from individual model runs: 1 – purple, 2 – dark blue, 3 – blue, 4 – green, 5 – yellow, 6 – orange, 7 – magenta. In this and the next figures numbers on the time axis indicate the first year of the 5-year mean.

Secondly, the scenarios are important to understand since they stipulate data inputs the model must accept as conditions for producing forecasts according to a particular scenario (set of assumptions).  The document with complete details referenced as [4] is The Scenario Model Intercomparison Project (ScenarioMIP) for CMIP6.

All the details are written there but one diagram suggests the implications for the results described below.

Figure 5. CO2 emissions (a) and concentrations (b), anthropogenic radiative forcing (c), and global mean temperature change (d) for the three long-term extensions. As in Fig. 3, concentration, forcing, and temperature outcomes are calculated with a simple climate model (MAGICC version 6.8.01 BETA; Meinshausen et al., 2011a, b). Outcomes for the CMIP5 versions of the long-term extensions of RCP2.6 and RCP8.5 (Meinshausen et al., 2011c), as calculated with the same model, are shown for comparison.

As shown, SSP1-2.6 is virtually the same scenario as the former RCP2.6, while SSP5-8.5 is virtually the same as RCP8.5, the wildly improbable scenario (impossible according to some analysts).  Note that fossil fuel (FF) CO2 emissions are assumed to quadruple in the next 80 years, with atmospheric CO2 rising from 400 to 1000 ppm (+150%).  Bear these suppositions in mind when considering the INMCM5 forecasts below.
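The scenario naming convention and the emissions assumption just described can be captured in a few lines. A minimal sketch; the forcing values come from the scenario labels themselves and the 400 to 1000 ppm figures are the ones quoted above:

```python
# SSP labels encode the pathway number and the 2100 radiative forcing (W/m2)
scenarios = {"SSP1-2.6": 2.6, "SSP2-4.5": 4.5, "SSP3-7.0": 7.0, "SSP5-8.5": 8.5}
for name, forcing_2100 in scenarios.items():
    pathway = name.split("-")[0]
    print(f"{name}: pathway {pathway}, ~{forcing_2100} W/m2 of forcing in 2100")

# CO2 rise assumed under the high-end scenario, as quoted above
co2_now, co2_2100 = 400, 1000  # ppm
print(f"CO2 increase: {100 * (co2_2100 - co2_now) / co2_now:.0f}%")  # 150%
```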

Results [Continuing From Volodin and Gritsun]

Fig. 1. Changes in the global average surface temperature (K) with respect to the pre-industrial level in experiments according to the SSP1-2.6 (triangles), SSP2-4.5 (squares), SSP3-7.0 (crosses), and SSP5-8.5 (circles) scenarios.

Let us describe some simulation results of climate change in the 21st century. Figure 1 shows the change in the globally averaged surface air temperature with respect to the data of the corresponding historical experiment for 1850–1899. In the warmest SSP5-8.5 scenario (circles), the temperature rises by more than 4° by the end of the 21st century. In the SSP3-7.0 scenario (crosses), different members of the ensemble show warming by 3.4°–3.6°. In the SSP2-4.5 scenario (squares), the temperature increases by about 2.4°. According to the SSP1-2.6 scenario (triangles), the maximal warming by ~1.7° occurs in the middle of the 21st century, and the temperature exceeds the preindustrial temperature by 1.4° by the end of the century.

[My comment: Note that the vertical scale starts with +1.0C as was seen in the historical experiment. Thus an anomaly of 1.4C by 2100 is an increase of only 0.4C, while the SSP2-4.5 result adds 1.4C to the present]. 

The results for other CMIP6 models have not yet been published in peer-reviewed journals. However, according to the preliminary analysis (see, e.g., https://cmip6workshop19.sciencesconf.org/data/Session1_PosterSlides.pdf, p. 29), the INM-CM5 model shows the lowest temperature increase among the CMIP6 models considered for all the scenarios due to the minimal equilibrium sensitivity to the CO2 concentration doubling, which is ~2.1° for the current model version, as for the previous version, despite the new condensation and cloud formation blocks. [For more on CMIP6 comparisons see post Climate Models: Good, Bad and Ugly]

Fig. 2. Differences between the annual average surface air temperatures (K) in 2071–2100 and 1981–2010 for the (a) SSP5-8.5 and (b) SSP1-2.6 scenarios.

The changes in the surface air temperature are similar for all scenarios; therefore, we analyze the difference between temperatures in 2071–2100 and 1981–2010 under the SSP5-8.5 and SSP1-2.6 scenarios (Fig. 2). The warming is maximal in the Arctic; it reaches 10° and 3°, respectively. Other features mainly correspond to CMIP5 data [8], including the INMCM4 model, which participates in the comparison. The warming on the continents of the Northern Hemisphere is about 2 times higher than the mean, and the warming in the Southern Hemisphere is noticeably less than in the Northern Hemisphere. The land surface warms more than the ocean surface in all the scenarios except SSP1-2.6, because the greenhouse effect is expected to weaken in the second half of the 21st century in this scenario, and the higher heat capacity of the ocean prevents it from cooling as quickly as the land.

The changes in precipitation in December–February and June–August for the SSP3-7.0 scenario, averaged over five members of the ensemble, are shown in Fig. 4. All members of the ensemble show an increase in precipitation in winter over a significant part of the middle and high latitudes. In summer, the boundary between the increase and decrease in precipitation in Eurasia passes mainly around or to the north of 60°. In southern and central Europe, all members of the ensemble show a decrease in precipitation. Precipitation also increases in the region of the summer Asian monsoon and over the equatorial Pacific, the latter due to a decrease in upwelling and an increase in ocean surface temperature (OST). The distribution of changes in precipitation mainly corresponds to that given in [6, Fig. 12.22] for all CMIP5 models.

The change in the Arctic sea ice area in September, when the ocean ice cover is minimal over the year, is of interest. Figure 5 shows the sea ice area in September 2015–2019 to be 4–6 million km2 in all experiments, which corresponds to the estimate from observations in [11]. The Arctic sea ice does not completely melt in any of the experiments under any scenario. However, according to [8, Figs. 12.28 and 12.31], many models participating in CMIP5, where the Arctic ice area is similar to that observed at the beginning of the 21st century, show the complete absence of ice by the end of the 21st century, especially under the RCP8.5 scenario, which is similar to SSP5-8.5.

The reason for these differences is the lower equilibrium sensitivity of the INM-CM5 model.

Note that the scatter of data between experiments under different scenarios in the first half of the 21st century is approximately the same as between different members of the ensemble under the SSP3-7.0 scenario and becomes larger only after 2070. The sea ice area values are ordered in accordance with the radiative forcing of the scenarios only after 2090. This indicates the large contribution of natural climate variability to the Arctic ice area. In the SSP1-2.6 experiment, the Arctic ice area at the end of the 21st century approximately corresponds to its area at the beginning of the experiment.

Climate changes can also be traced in the ocean circulation. Figure 6 shows the change in the 5-year averaged intensity of the Atlantic meridional circulation, defined as the maximum of the meridional streamfunction at 32° N. All experiments show a decrease in the intensity of the meridional circulation in the 21st century, with natural fluctuations superimposed on this decrease. The decrease is about 4.5–5 Sv for the SSP5-8.5 scenario, which is close to the values obtained in the CMIP5 models [8, Fig. 12.35] under the RCP8.5 scenario. Under milder scenarios, the weakening of the meridional circulation is less pronounced. The reason for this weakening of the meridional circulation in the Atlantic, as far as we know, is not yet fully understood.

Conclusion

Numerical experiments have been carried out to reproduce climate changes in the 21st century according to four scenarios of the CMIP6 program [4, 5], including an ensemble of five experiments under the SSP3-7.0 scenario. The changes in the global mean surface temperature are analyzed. It is shown that the global warming predicted by the INM-CM5 model is the lowest among the currently published CMIP6 model data. The geographical distribution of changes in the temperature and precipitation is considered. According to the model, the temperature in the warmest summer month will increase faster than the summer average temperature in Russia.

None of the experiments show the complete melting of the Arctic ice cover by the end of the 21st century. Some changes in the ocean dynamics, including the flow velocity and the meridional stream function, are analyzed. The changes in the Hadley and Ferrel circulation in the atmosphere are considered.

Resources:

Climate Models: Good, Bad and Ugly

2018 Update: Best Climate Model INMCM5

Temperatures According to Climate Models


IPCC Wants a 10-year Lockdown

You’ve seen the News:

While analysts agree the historic lockdowns will significantly lower emissions, some environmentalists argue the drop is nowhere near enough.–USA Today

Emissions Declines Will Set Records This Year. But It’s Not Good News.  An “unprecedented” fall in fossil fuel use, driven by the Covid-19 crisis, is likely to lead to a nearly 8 percent drop, according to new research.–New York Times

The COVID-19 pandemic cut carbon emissions down to 2006 levels.  Daily global CO2 emissions dropped 17 percent in April — but it’s not likely to last–The Verge

In fact, the drop is not even enough to get the world back on track to meet the target of the 2015 Paris Agreement, which aims for a global temperature rise of no more than 1.5 degrees above pre-industrial levels, said WMO S-G Taalas. That would require at least a 7% annual drop in emissions, he added.–Reuters

An article at the Las Vegas Review-Journal draws out the implications: Environmentalists want 10 years of coronavirus-level emissions cuts.  Excerpts in italics with my bolds.

It’s always been difficult for the layman to comprehend what it would entail to reduce carbon emissions enough to satisfy environmentalists who fret over global warming. Then came coronavirus.

The once-in-a-century pandemic has devastated the U.S. economy. The April unemployment rate is 14.7 percent and rising. Nevada has the highest unemployment rate in the country at 28.2 percent. One-third of families with children report feeling “food insecure,” according to the Institute for Policy Research, Northwestern University. Grocery prices saw their largest one-month increase since 1974.

To keep the economy afloat, the U.S. government has spent $2.4 trillion on coronavirus programs. Another expensive relief bill seems likely. Unless the end goal is massive inflation, this level of spending can’t continue.

Amazingly, the United States has it good compared with many other places in the world. David Beasley, director of the U.N. World Food Program, has estimated that the ongoing economic slowdown could push an additional 130 million “to the brink of starvation.”

That’s the bad news. The good news for global warming alarmists is that economic shutdowns reduce carbon emissions. If some restrictions remain in place worldwide through the end of the year, researchers writing in Nature estimate emissions will drop in 2020 by 7 percent.

For some perspective, the U.N.’s climate panel calls for a 7.6 percent reduction in emissions — every year for a decade. And now we get a real-world glimpse of the cost.
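A 7.6 percent cut repeated every year compounds, and a couple of lines of arithmetic show what the UN figure implies over the decade (the 7.6 percent value is the one quoted above; the compounding is my own illustration):

```python
# Cumulative effect of a 7.6% emissions cut repeated annually for ten years
annual_cut = 0.076
remaining_fraction = (1 - annual_cut) ** 10

print(f"emissions in year 10: {remaining_fraction:.0%} of today's level")  # ~45%
print(f"cumulative reduction: {1 - remaining_fraction:.0%}")               # ~55%
```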

What happens if we return to the course we were on before this pandemic?  A previous post shows that the past continuing into the future is not disastrous, and that panic is unwarranted.

I Want You Not to Panic

I’ve been looking into claims for concern over rising CO2 and temperatures, and this post provides reasons why the alarms are exaggerated. It involves looking into the data and how it is interpreted.

First, the longer view suggests where to focus for understanding. Consider a long-term temperature record such as HadCRUT4. Taking it at face value, setting aside concerns about revisions and adjustments, we can see what has been the pattern in the last 120 years following the Little Ice Age. Often the period between 1850 and 1900 is considered pre-industrial since modern energy and machinery took hold later on. The graph shows that warming was not much of a factor until temperatures rose to a peak in the 1940s, then cooled off into the 1970s, before ending the century with a rise matching the rate of the earlier warming. Overall, the accumulated warming was 0.8C.

Then regard the record concerning CO2 concentrations in the atmosphere. It’s important to know that modern measurement of CO2 really began in 1959 with the Mauna Loa observatory, coinciding with the mid-century cool period. The earlier values in the chart are reconstructed by NASA GISS from various sources and calibrated to reconcile with the modern record. It is also evident that the first 60 years saw minimal change in the values compared to the post-1959 rise, after WWII ended and manufacturing turned from military production to meeting consumer needs. So again the mid-20th century appears as a change point.

It becomes interesting to look at the last 60 years of temperature and CO2, from 1959 to 2019, particularly with so much clamour about climate emergency and crisis. This graph puts together rising CO2 and temperatures for this period. First, note that the accumulated warming is about 0.8C after fluctuations. And remember that those decades witnessed great human flourishing and prosperity by any standard of life quality. The rise of CO2 was steady and monotonic, with some acceleration into the 21st century.

Now let’s look at projections into the future, bearing in mind Mark Twain’s warning not to trust future predictions. No scientist knows all or most of the surprises that overturn continuity from today to tomorrow. Still, as weathermen well know, the best forecasts start from present conditions and add some changes going forward.

Here is a look to century end as a baseline for context. No one knows what cooling and warming periods lie ahead, but one scenario is that the next 80 years could see continued warming at the same rate as the last 60 years. That presumes that the forces at play making the weather in the lifetime of many of us seniors will continue in the future. Of course factors beyond our ken may deviate from that baseline, and humans will notice and adapt as they have always done. And in the back of our minds is the knowledge that we are 11,500 years into an interglacial period before the cold returns, the cold being the greater threat to both humanity and the biosphere.

Those who believe CO2 causes warming advocate for reducing use of fossil fuels for fear of overheating, apparently discounting the need for energy should winters grow harsher. The graph shows one projection similar to that of temperature, showing the next 80 years accumulating at the same rate as the last 60. A second projection in green takes the somewhat higher rate of the last 10 years and projects it to century end. The latter trend would achieve a doubling of CO2.
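To show how such a projection is built, here is a minimal sketch of a straight-line extrapolation of atmospheric CO2 to 2100 under two growth rates. The starting concentration and the growth rates are round numbers of my own choosing for illustration, not values read off the graph.

```python
# Linear extrapolation of atmospheric CO2 to 2100 under two assumed growth rates
co2_2019 = 411.0                      # ppm, approximate 2019 annual mean (illustrative)
rate_last_60 = (411.0 - 316.0) / 60   # ~1.6 ppm/yr, average rate since 1959 (illustrative)
rate_last_10 = 2.4                    # ppm/yr, roughly the last decade (illustrative)

years_ahead = 2100 - 2019
for label, rate in [("last-60-year rate", rate_last_60), ("last-10-year rate", rate_last_10)]:
    print(f"{label}: ~{co2_2019 + rate * years_ahead:.0f} ppm in 2100")
# The faster rate lands near 600 ppm, roughly a doubling of the ~300 ppm early-20th-century level.
```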

What those two scenarios mean depends on how sensitive you think Global Mean Temperature is to changing CO2 concentrations. Climate models attempt to consider all relevant and significant factors and produce future scenarios for GMT. CMIP6 is the current group of models displaying a wide range of warming presumably from rising CO2. The one model closely replicating HadCRUT4 back to 1850 projects 1.8C higher GMT for a doubling of CO2 concentrations. If that held true going from 300 ppm to 600 ppm, the trend would resemble the red dashed line continuing the observed warming from the past 60 years: 0.8C up to now and another 1C the rest of the century. Of course there are other models programmed for warming 2 or 3 times the rate observed.
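The sensitivity arithmetic in that paragraph follows the standard logarithmic relation ΔT ≈ S · log2(C/C0). A minimal sketch using the 1.8C-per-doubling figure quoted above and a 300 ppm reference level:

```python
import math

def warming(c_ppm, c0_ppm=300.0, per_doubling=1.8):
    """Warming (C) for CO2 at c_ppm relative to c0_ppm, assuming a logarithmic
    response with the given sensitivity per doubling."""
    return per_doubling * math.log2(c_ppm / c0_ppm)

print(round(warming(410), 2))  # ~0.8 C, roughly the warming observed so far
print(round(warming(600), 2))  # 1.8 C for a full doubling (300 -> 600 ppm)
```

On those assumptions the warming still to come on the way to 600 ppm is about 1C (1.8C minus the ~0.8C already realized), which is the red dashed trend described above.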

People who take to the streets with signs forecasting doom in 11 or 12 years have fallen victim to IPCC 450 and 430 scenarios.  For years activists asserted that warming from pre industrial can be contained to 2C if CO2 concentrations peak at 450 ppm.  Last year, the SR1.5 lowered the threshold to 430 ppm, thus the shortened timetable for the end of life as we know it.

For the sake of brevity, this post leaves aside many technical issues. Uncertainties about the temperature record, and about early CO2 levels, and the questions around Equilibrium Climate Sensitivity (ECS) and Transient Climate Response (TCR) are for another day. It should also be noted that GMT as an average hides a huge variety of fluxes over the globe’s surface, and thus larger warming in some places such as Canada, and cooling in other places like the Southeast US. Ross McKitrick pointed out that Canada has already gotten more than 1.5C of warming and it has been a great social, economic and environmental benefit.

So I want people not to panic about global warming/climate change. Should we do nothing? On the contrary, we must invest in robust infrastructure to ensure reliable affordable energy and to protect against destructive natural events. And advanced energy technologies must be developed for the future since today’s wind and solar farms will not suffice.

It is good that Greta’s demands were unheeded at the Davos gathering. Panic is not useful for making wise policies, and as you can see above, we have time to get it right.

Media Turn Math Dopes into Dupes

Those who have investigated global warming/climate change discovered that the numbers don’t add up. But if you don’t do the math you won’t know that, because the truth is found in the details (the devilish contradictions to sweeping claims). Those without numerical literacy (including, apparently, most journalists) are at the mercy of the loudest advocates. Social policy then becomes a matter of going along with herd popularity. Shout out to AOC!

Now we get the additional revelation regarding pandemic math and the refusal to correct over-the-top predictions. It’s the same dynamic but accelerated by the more immediate failure of models to forecast contagious reality. Sean Trende writes at Real Clear Politics The Costly Failure to Update Sky-Is-Falling Predictions. Excerpts in italics with my bolds.

On March 6, Liz Specht, Ph.D., posted a thread on Twitter that immediately went viral. As of this writing, it has received over 100,000 likes and almost 41,000 retweets, and was republished at Stat News. It purported to “talk math” and reflected the views of “highly esteemed epidemiologists.” It insisted it was “not a hypothetical, fear-mongering, worst-case scenario,” and that, while the predictions it contained might be wrong, they would not be “orders of magnitude wrong.” It was also catastrophically incorrect.

The crux of Dr. Specht’s 35-tweet thread was that the rapid doubling of COVID-19 cases would lead to about 1 million cases by May 5, 4 million by May 11, and so forth. Under this scenario, with a 10% hospitalization rate, we would expect approximately 400,000 hospitalizations by mid-May, which would more than overwhelm the estimated 330,000 available hospital beds in the country. This would combine with a lack of protective equipment for health care workers and lead to them “dropping from the workforce for weeks at a time,” to shortages of saline drips and so forth. Half the world would be infected by the summer, and we were implicitly advised to buy dry goods and to prepare not to leave the house.
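The thread's arithmetic is easy to reproduce: constant-rate doubling plus a fixed hospitalization fraction. The sketch below uses only the numbers quoted above (1 million cases on May 5 doubling to 4 million by May 11, i.e. roughly every three days, a 10% hospitalization rate, and about 330,000 beds); it reproduces the projection, not what actually happened.

```python
# Reproducing the thread's projection: cases doubling every 3 days,
# 10% of cases hospitalized, ~330,000 available hospital beds nationwide.
cases = 1_000_000      # projected for May 5
doubling_days = 3      # implied by 1M -> 4M between May 5 and May 11
hospitalization_rate = 0.10
beds = 330_000

for day in range(0, 13, doubling_days):
    hospitalized = cases * hospitalization_rate
    status = "over capacity" if hospitalized > beds else "under capacity"
    print(f"day +{day:2d}: {cases:>10,} cases, {hospitalized:>9,.0f} hospitalized ({status})")
    cases *= 2
```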

Interestingly, this thread was wrong not because we managed to bend the curve and stave off the apocalypse; for starters, Dr. Specht described the cancellation of large events and workplace closures as something that would shift things by only days or weeks.

Instead, this thread was wrong because it dramatically understated our knowledge of the way the virus worked; it fell prey to the problem, common among experts, of failing to address adequately the uncertainty surrounding its point estimates. It did so in two opposing ways. First, it dramatically understated the rate of spread. If serological tests are to be remotely believed, we likely hit the apocalyptic milestone of 2 million cases quite some time ago. Not in the United States, mind you, but in New York City, where 20% of residents showed positive COVID-19 antibodies on April 23. Fourteen percent of state residents showed antibodies, suggesting 2.5 million cases in the Empire State alone; since antibodies take a while to develop, this was likely the state of affairs in mid-April or earlier.
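The back-of-the-envelope implied by those serology figures is shown below. The survey percentages are the ones quoted above; the population numbers are my own approximations, which is why the statewide result comes out near, but not exactly at, the article's 2.5 million.

```python
# Implied infections from the antibody survey results (populations are approximate)
nyc_population = 8_400_000       # New York City, roughly
state_population = 19_500_000    # New York State, roughly

nyc_implied = 0.20 * nyc_population      # 20% positive in NYC on April 23
state_implied = 0.14 * state_population  # 14% positive statewide

print(f"NYC:   ~{nyc_implied / 1e6:.1f} million implied infections")    # ~1.7 million
print(f"State: ~{state_implied / 1e6:.1f} million implied infections")  # ~2.7 million
```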

But in addition to being wrong about the rate of spread, the thread was also very wrong about the rate of hospitalization. While New York City found its hospital system stretched, it avoided catastrophic failure, despite having within its borders the entire number of cases predicted for the country as a whole, a month earlier than predicted. Other areas of the United States found themselves with empty hospital beds and unused emergency capacity.

One would think that, given the amount of attention this was given in mainstream sources, there would be some sort of revisiting of the prediction. Of course, nothing of the sort occurred.

This thread has been absolutely memory-holed, along with countless other threads and Medium articles from February and March. We might forgive such forays on sites like Twitter and Medium, but feeding frenzies from mainstream sources are also passed over without the media ever revisiting to see how things turned out.

Consider Florida. Gov. Ron DeSantis was castigated for failing to close the beaches during spring break, and critics suggested that the state might be the next New York. I’ve written about this at length elsewhere, but Florida’s new cases peaked in early April, at which point it was a middling state in terms of infections per capita. The virus hasn’t gone away, of course, but the five-day rolling average of daily cases in Florida is roughly where it was in late March, notwithstanding the fact that testing has increased substantially. Taking increased testing into account, the positive test rate has gradually declined since late March as well, falling from a peak of 11.8% on April 1 to a low of 3.6% on May 12.

Notwithstanding this, the Washington Post continues to press stories of public health officials begging state officials to close beaches (a more interesting angle at this point might be why these health officials were so wrong), while the New York Times noted a few days ago (misleadingly, and grossly so) that “Florida had a huge spike in cases around Miami after spring break revelry,” without providing the crucial context that the caseload mimicked increases in other states that did not play host to spring break. Again, perhaps the real story is that spring breakers passed COVID-19 among themselves and seeded it when they got home. I am sure some of this occurred, but it seems exceedingly unlikely that they would have spread it widely among themselves and not also spread it widely to bartenders, wait staff, hotel staff, and the like in Florida.

Florida was also one of the first states to experiment with reopening. Duval County (Jacksonville) reopened its beaches on April 19 to much national skepticism. Yet daily cases are lower today than they were the day that it reopened; there was a recent spike in cases associated with increased testing, but it is now receding.

Or consider Georgia, which one prominent national magazine claimed was engaging in “human sacrifice” by reopening. Yet, after nearly a month, a five-day average of Georgia’s daily cases looks like this:

What about Wisconsin, which was heavily criticized for holding in-person voting? It has had an increased caseload, but that is largely due to increased testing (up almost six-fold since early April) and an idiosyncratic outbreak in its meatpacking plants. The latter is tragic, but it is not related to the election; in fact, a Milwaukee Journal-Sentinel investigation failed to link any cases to the election; this has largely been ignored outside of conservative media sites such as National Review.

We could go on – after being panned for refusing to issue a stay-at-home order, South Dakota indeed suffered an outbreak (once again, in its meatpacking plants), but deaths there have consistently averaged less than three per day, to little fanfare – but the point is made. Some “feeding frenzies” have panned out, but many have failed to do so; rather than acknowledging this failure, the press typically moves on.

This is an unwelcome development, for a few reasons. First, not everyone follows this pandemic closely, and so a failure to follow up on how feeding frenzies end up means that many people likely don’t update their views as often as they should. You’d probably be forgiven if you suspected hundreds of cases and deaths followed the Wisconsin election.

Second, we obviously need to get policy right here, and to be sure, reporting bad news is important for producing informed public opinion. But reporting good news is equally important. Third, there are dangers to forecasting with incredible certitude, especially with a virus that was detected less than six months ago. There really is a lot we still don’t know, and people should be reminded of this. Finally, among people who do remember things like this, a failure to acknowledge errors foments cynicism and further distrust of experts.

The damage done to this trust is dangerous, for at this time we desperately need quality expert opinions and news reporting that we can rely upon.

Addendum:  Tilak Doshi makes the comparison to climate crisis claims in Coronavirus And Climate Change: A Tale Of Two Hysterias, writing at Forbes.  Excerpts in italics with my bolds.

It did not take long after the onset of the global pandemic for people to observe the many parallels between the covid-19 pandemic and climate change. An invisible novel virus of the SARS family now represents an existential threat to humanity. As does CO2, a colourless trace gas constituting 0.04% of the atmosphere which allegedly serves as the control knob of climate change. Lockdowns are to the pandemic what decarbonization is to climate change. Indeed, lockdowns and decarbonization share much in common, from tourism and international travel to shopping and having a good time. It would seem that Greta Thunberg’s dreams have come true, and perhaps that is why CNN announced on Wednesday that it is featuring her on a coronavirus town-hall panel alongside health experts.

But, beyond being a soundbite and means of obtaining political cover, ‘following the science’ is neither straightforward nor consensual. The diversity of scientific views on covid-19 became quickly apparent in the dramatic flip-flop of the UK government. In the early stages of the spread in infection, Boris Johnson spoke of “herd immunity”, protecting the vulnerable and common sense (à la Sweden’s leading epidemiologist Professor Johan Giesecke) and rejected banning mass gatherings or imposing social distancing rules. Then, an unpublished bombshell March 16th report by Professor Neil Ferguson of Imperial College, London, warned of 510,000 deaths in the country if the country did not immediately adopt a suppression strategy. On March 23, the UK government reversed course and imposed one of Europe’s strictest lockdowns. For the US, the professor had predicted 2.2 million deaths absent similar government controls, and here too, Ferguson’s alarmism moved the federal government into lockdown mode.

Unlike climate change models that predict outcomes over a period of decades, however, it takes only days and weeks for epidemiological model forecasts to be falsified by data. Thus, by March 25th, Ferguson’s prediction of half a million fatalities in the UK had been adjusted downward to “unlikely to exceed 20,000”, a reduction by a factor of 25. This drastic reduction was credited to the UK’s lockdown which, however, was imposed only 2 days previously, before any social distancing measures could possibly have had enough time to work.

For those engaged in the fraught debates over climate change over the past few decades, the use of alarmist models to guide policy has been a familiar point of contention. Much as Ferguson’s model drove governments to impose Covid-19 lockdowns affecting nearly 3 billion people on the planet, Professor Michael Mann’s “hockey stick” model was used by the IPCC, mass media and politicians to push the man-made global warming (now called climate change) hysteria over the past two decades.

As politicians abdicate policy formulation to opaque expertise in highly specialized fields such as epidemiology or climate science, a process of groupthink emerges as scientists generate ‘significant’ results which reinforce confirmation bias, affirm the “scientific consensus” and marginalize sceptics.

Rather than allocating resources and efforts towards protecting the vulnerable old and infirm while allowing the rest of the population to carry on with their livelihoods with individuals taking responsibility for safe socializing, most governments have opted to experiment with top-down economy-crushing lockdowns. And rather than mitigating real environmental threats such as the use of traditional biomass for cooking indoors that is a major cause of mortality in the developing world or the trade in wild animals, the climate change establishment advocates decarbonisation (read de-industrialization) to save us from extreme scenarios of global warming.

Taking the wheels off of entire economies on the basis of wildly exaggerated models is not the way to go.

Footnote: Mark Hemingway shows how commonplace the problem of uncorrected media falsity has become in his article When Did the Media Stop Running Corrections? Excerpts in italics with my bolds.

Vanity Fair quickly recast Sherman’s story without acknowledging its error: “This post has been updated to include a denial from Blackstone, and to reflect comments received after publication by Charles P. Herring, president of Herring Networks, OANN’s parent company.” In sum, Sherman based his piece on a premise that was wrong, and Vanity Fair merely acted as if all the story needed was a minor update.

Such post-publication “stealth editing” has become the norm. Last month, The New York Times published a story on the allegation that Joe Biden sexually assaulted a former Senate aide. After publication, the Times deleted the second half of this sentence: “The Times found no pattern of sexual misconduct by Mr. Biden, beyond the hugs, kisses and touching that women previously said made them uncomfortable.”

In an interview with Times media columnist Ben Smith, Times’ Executive Editor Dean Baquet admitted the sentence was altered at the request of Biden’s presidential campaign. However, if you go to the Times’ original story on the Biden allegations, there’s no note saying how the story was specifically altered or why.

It’s also impossible not to note how this failure to issue proper corrections and penchant for stealth editing goes hand-in-hand with the media’s ideological preferences.

In the end the media’s refusal to run corrections is a damnable practice for reasons that have nothing to do with Christianity. In an era when large majorities of the public routinely tell pollsters they don’t trust the media, you don’t have to be a Bible-thumper to see that admitting your mistakes promptly, being transparent about trying to correct them, and when appropriate, apologizing and asking for forgiveness – are good secular, professional ethics.

On Following the Science

H/T to Luboš Motl for his blog post Deborah Cohen, BBC, and models vs theories. Excerpts in italics with my bolds.

Dr Deborah Cohen is an award-winning health journalist who has a doctoral degree – which actually seems to be related to the medical sciences – and who now works for BBC Newsnight. I think that the 13-minute-long segment above is an excellent piece of journalism.

It seems to me that she primarily sees that the “models” predicting half a million dead Britons have spectacularly failed, and that is something an honest health journalist simply must be interested in. And she seems to be an accomplished and award-winning journalist. Second, she seems to see through some of the “more internal” defects of bad medical (and not only medical) science. Her PhD almost certainly helps in that. Someone whose background is purely in the humanities or in PR-and-communication gibberish simply shouldn’t be expected to be on par with a real PhD.

So she has talked to the folks at the “Oxford evidence-based medicine” institute and others who understand the defects of “computer models” as a basis for science or policymaking. Unsurprisingly, she is more or less led to the conclusion that the lockdown (in the U.K.) was a mistake.

If your equation – or computer model – assumes that 5% of those who contract the virus die (i.e. the probability is 5% that they die within a week if they get the virus), then your predicted fatality count may be inflated by a factor of 25 if the actual case fatality rate is 0.2% – and it is something comparable to that. It should be common sense that if someone makes a factor-of-25 error in the choice of this parameter, his predictions may be wrong by a factor of 25, too. It doesn’t matter if the computer program looks like SimCity with 66.666 million Britons represented in the giant RAM memory of a supercomputer. This brute force obviously cannot compensate for a fundamental ignorance or error in your choice of the fatality rate.
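To make the arithmetic explicit, here is a minimal sketch using only the 5% and 0.2% figures quoted above, with the number of infections N held fixed; predicted deaths scale linearly with the assumed fatality rate:

\[
D = \mathrm{CFR} \times N
\qquad\Rightarrow\qquad
\frac{D_{5\%}}{D_{0.2\%}} = \frac{0.05 \times N}{0.002 \times N} = 25
\]

So a 25-fold overestimate of the fatality rate propagates directly into a 25-fold overestimate of the death toll, no matter how elaborate the simulation built on top of it.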

I would think that most 3-year-old kids get this simple point, and maybe they do. Nevertheless, most adults seem to be completely braindead today and they don’t get this point. When they are told that something was calculated by a computer, they worship the predictions. They don’t ask “whether the program was based on a realistic or scientifically supported theory”. Just the brute power of the pile of silicon seems to amaze them.

So we always agreed, e.g. with Richard Lindzen, that an important part of the degeneration of climate science was the drift away from proper “theory” towards “modeling”. A scientist may lean more towards experiment – finding facts and measuring parameters with her hands (and much of experimental climate science has remained OK; after all, Spencer and Christy are still measuring the temperature by satellites etc.) – or towards theory, where the brain is (even) more important than it is for the experimenter. Experimenters sort of continued to do their work. However, it’s mainly the “theorists” who hopelessly degenerated in climate science, under the influence of toxic ideology, politics, and corruption.

The real problem is that proper theorists – those who actually understand the science, can solve basic equations in their heads, and are aware of all the intricacies in the process of finding the right equations, the equivalence and inequivalence of equations, universal behavior, statistical effects etc. – were replaced by “modelers”, i.e. people who don’t really have a clue about science, who write computer-game-like code, worship their silicon, and mindlessly promote whatever comes out of this computer game. It is a catastrophe for the field – and the same has obviously happened to “theoretical epidemiology”, too.

“Models” and “good theory” aren’t just orthogonal. The culture of “models” is actively antiscientific because it comes with the encouragement to mindlessly trust whatever happens in computer games. This isn’t just “different and independent from” the genuine scientific method. It directly contradicts the scientific method. In science, you can’t ever mindlessly trust something just because expensive hardware was used or a huge number of operations was performed by the CPU. These things are really negative for the trustworthiness and expected accuracy of the science, not positive. In science, you want to make things as simple as possible (because the proliferation of moving parts increases the probability of glitches) but not simpler; and you want to solve as much as possible analytically, not numerically or by a “simulation”.

Science is a systematic framework to figure out which statements about Nature are correct and which are incorrect.

And according to quantum mechanics, the truth values of propositions must be probabilistic. Quantum mechanics only predicts the “similarity [of propositions] to the truth” which is the translation of the Czech word for probability (pravděpodobnost).

It is the truth values (or probabilities) that matter in science – the separation of statements into right and wrong ones (or likely and unlikely ones). Again, I think that I am saying something totally elementary, something that I understood before I was 3 and so did many of you. But it seems obvious that the people who need to ask whether Leo’s or Stephen’s pictures are “theories of everything” must totally misunderstand even this basic point – that science is about the truth, not just the representation of objects.

See also: The Deadly Lockdowns and Covid19 Linked to Affluence

Footnote:  Babylon Bee Has Some Fun with this Topic.

‘The Science On Climate Change Is Settled,’ Says Man Who Does Not Believe The Settled Science On Gender, Unborn Babies, Economics

PORTLAND, OR—Local man Trevor J. Gavyn pleaded with his conservative coworker to “believe the science on climate change,” though he himself does not believe the science on the number of genders there are, the fact that unborn babies are fully human, and that socialism has failed every time it has been tried.

“It’s just like, the science is settled, man,” he said in between puffs on his vape. “We just need to believe the scientists and listen to the experts here.”

“Facts don’t care about your feelings on the climate, bro,” he added, though he ignores the fact that there are only two biological genders. He also hand-waves away the science that an unborn baby is 100% biologically human the moment it is conceived and believes economics is a “conservative hoax foisted on us by the Illuminati and Ronald Reagan.”

“That whole thing is, like, a big conspiracy, man,” he said.

The conservative coworker, for his part, said he will trust the science on gender, unborn babies, and economics while simply offering “thoughts and prayers” for the climate.