Virus Models Are Accountable. Climate Models Are Not.

Paul Driessen and David Legates write at The Hill about what we are learning from coronavirus epidemic models, and why we should remain skeptical about forecasts from climate models. Their article is Fauci-Birx Climate Models? Excerpts in italics with my bolds and images.

President Trump and his Coronavirus Task Force presented some frightening numbers during their March 31 White House briefing. Based on now two-week-old data and models, as many as 100,000 Americans at the models’ low end, to 2.2 million at their high end, could die from the fast-spreading virus, they said.

However, the president, vice president, and Drs. Anthony Fauci and Deborah Birx hastened to add that those high-end numbers are based on computer models. And they are “unlikely” if Americans keep doing what they are doing now to contain, mitigate and treat the virus. Although that worst-case scenario “is possible,” it is “unlikely if we do the kinds of things that we’re essentially outlining right now.”

On March 31, Dr. Fauci said, the computer models were saying that, even with full mitigation, it is “likely” that America could still suffer at least 100,000 deaths. But he then added a very important point:

“The question is, are the models really telling us what’s going on? When someone creates a model, they put in various assumptions. And the models are only as good and as accurate as the assumptions you put into them. As we get more data, as the weeks go by, that might change. We feed the data back into the models and relook at the models.”

The data can change the assumptions – and thus the models’ forecasts.

“If we have more data like the NY-NJ metro area, the numbers could go up,” Dr. Birx added. But if the numbers coming in are more like Washington or California, which reacted early and kept their infection and death rates down – then the models would likely show lower numbers. “We’re trying to prevent that logarithmic increase in New Orleans and Detroit and Chicago – trying to make sure those cities work more like California than like the New York metro area.” That seems to be happening, for the most part.

If death rates from corona are misattributed or inflated, if other model assumptions should now change, if azithromycin, hydroxychloroquine and other treatments, and people’s immunities are reducing infections – then business shutdowns and stay-home orders could (and should) end earlier, and we can go back to work and life, rebuild America and the world’s economies …

And avoid different disasters, like these:

    • Millions of businesses that never reopen.
    • Tens of millions of workers with no paychecks.
    • Tens of trillions of dollars vanished from our economy.
    • Millions of families with lost homes and savings.
    • Millions of cases of depression, stroke, heart attack, domestic violence, suicide, murder-suicide, and early death due to depression, obesity and alcoholism, due to unemployment, foreclosure and destroyed dreams.

In other words, numerous deaths because of actions taken to prevent infections and deaths from COVID-19.

It is vital that they recheck the models and assumptions – and distinguish between COVID-19 deaths actually due to the virus … and not just associated with or compounded by it, but primarily due to age, obesity, pneumonia or other issues. We can’t afford a cure that’s worse than the disease – or a prolonged and deadly national economic shutdown that could have been shortened by updated and corrected models.

Now just imagine: What if we could have that same honest, science-based approach to climate models?

What if the White House, EPA, Congress, UN, EU and IPCC acknowledged that climate models are only as good and as accurate as the assumptions built into them? What if – as the months and years went by and we got more real-world temperature, sea level and extreme weather data – we used that information to honestly refine the models? Would the assumptions and therefore the forecasts change dramatically?

What if we use real science to help us understand Earth’s changing climate and weather? And base energy and other policies on real science that honestly examines manmade and natural influences on climate?

Many climate modelers claim we face existential manmade climate cataclysms caused by our use of fossil fuels. They use models to justify calls to banish fossil fuels that provide 80% of US and global energy; close down countless industries, companies and jobs; totally upend our economy; give trillions of dollars in subsidies to fossil fuel replacement companies; and drastically curtail our travel and lifestyles.

Shouldn’t we demand that these models be verified against real-world evidence?

Natural forces have caused climate changes and extreme weather events throughout history. What proof is there that what we see today is due to fossil fuel emissions, and not to those same natural forces? We certainly don’t want energy “solutions” that don’t work and are far worse than the supposed manmade climate and weather ‘virus.’

And we have the climate data. We’ve got years of data. The data show the models don’t match reality.

Model-predicted temperatures are more than 0.5 degrees F above actual satellite-measured average global temperatures – and “highest ever” records are mere hundredths of a degree above previous records from 50 to 80 years ago. Actual hurricane, tornado, sea level, flood, drought, and other historic records show no unprecedented trends or changes, no looming crisis, no evidence that humans have replaced the powerful natural forces that have always driven climate and weather in the real world outside the modelers’ labs.

Real science – and real scientists – seek to understand natural phenomena and processes. They pose hypotheses that they think best explain what they have witnessed, then test them against actual evidence, observations and data. If the hypotheses (and predictions based on them) are borne out by their subsequent observations or findings, the hypotheses become theories, rules or laws of nature – at least until someone finds new evidence that pokes holes in their assessments, or devises better explanations.

Real scientists often employ computers to analyze data more quickly and accurately, depict or model complex natural systems, or forecast future events or conditions. But they test their models against real-world evidence. If the models, observations and predictions don’t match up, real scientists modify or discard the models, and the hypotheses behind them. They engage in robust discussion and debate.

Real scientists don’t let models or hypotheses become substitutes for real-world data, evidence and observations.

They don’t alter or “homogenize” raw or historic data to make it look like the models actually work. They don’t tweak their models after comparing predictions to actual subsequent observations, to make it look like the models “got it right.” They don’t “lose” or hide data and computer codes, restrict peer review to closed circles of like-minded colleagues who protect one another’s reputations and funding, claim “the debate is over,” or try to silence anyone who asks inconvenient questions or criticizes their claims or models. Climate modelers have done all of this – and more.

Climate models have always overstated the warming. But even though modelers have admitted that their models are “tuned” – revised after the fact to make it look like they predicted temperatures accurately – the modelers have made no attempt to change the climate sensitivity to match reality. Why not?

They know disaster scenarios sell. Disaster forecasts keep them employed, swimming in research money – and empowered to tell legislators and regulators that humanity must take immediate, draconian action to eliminate all fossil fuel use – the economic, human and environmental consequences be damned. And they probably will never admit their mistakes or duplicity, much less be held accountable.

“Wash your hands! You could save millions of lives!” has far more impact than “You could save your own life, your kids’ lives, dozens of lives.” When it comes to climate change, you’re saving the planet.

With ‘Mann-made’ climate change, we are always shown the worst-case scenario: RCP 8.5, the “business-as-usual” … ten times more coal use in 2100 than now … “total disaster.” Alarmist climatologists know their scenario has maybe a 0.1 percent likelihood, and assumes no new energy technologies over the next 80 years. But energy technologies have evolved incredibly over the last 80 years – since 1940, the onset of World War II! Who could possibly think technologies won’t change at least as much going forward?

Disaster scenarios are promoted because most people don’t know any better – and voters and citizens won’t accept extreme measures and sacrifices unless they are presented with extreme disaster scenarios.

The Fauci-Birx team has been feeding updated data into their models, and forecasts for infections and deaths from the ChiCom-WHO coronavirus are going down. The team is doing data-based science. Let’s start demanding a similarly honest, factual, evidence-based approach to climate models and “dangerous manmade climate change.” Our energy, economy, livelihoods, lives and liberties depend on it.

Paul Driessen is senior policy analyst for the Committee For A Constructive Tomorrow and author of books and articles on energy, environment, climate and human rights issues. David R. Legates is a Professor of Climatology at the University of Delaware.

Top Climate Model Gets Better

Figure S7. Contributions of forcing and feedbacks to ECS in each model and for the multimodel means. Contributions from the tropical and extratropical portion of the feedback are shown in light and dark shading, respectively. Black dots indicate the ECS in each model, while upward and downward pointing triangles indicate contributions from non-cloud and cloud feedbacks, respectively. Numbers printed next to the multi-model mean bars indicate the cumulative sum of each plotted component. Numerical values are not printed next to residual, extratropical forcing, and tropical albedo terms for clarity. Models within each collection are ordered by ECS.

A previous post here discussed discovering that INMCM4 was the best CMIP5 model in replicating historical temperature records. Additional posts described improvements built into INMCM5, the next generation model included for CMIP6 testing. Later on is a reprint of the temperature history replication and the parameters included in the revised model. This post focuses on a recent report of additional enhancements by the modelers in order to better represent precipitation and extreme rainfall events.

The paper is Influence of various parameters of INM RAS climate model on the results of extreme precipitation simulation by M. A. Tarasevich and E. M. Volodin (2019). Excerpts in italics with my bolds.

Modern models of the Earth’s climate can reproduce not only the average climate state, but also extreme weather and climate phenomena. This raises the problem of comparing climate models against observed extreme weather events.

In [1, 2], various extreme weather and climatic situations are considered. According to these papers, 27 extreme indices are defined, characterizing different situations with high and low temperatures, with heavy precipitation, or with an absence of precipitation.

The results of simulation of the extreme indices with the INMCM4 [3] climate model were compared with the results of other models which took part in the CMIP5 project (Coupled Model Intercomparison Project, Phase 5) [2]. The comparison demonstrates that this model performs well for most indices except for those related to daily minimum temperature. For those indices the model shows one of the worst results.

The parameterizations of physical processes in the next model version, INMCM5, were replaced or tuned [4, 5], so that changes in the extreme indices simulation are expected.

The simulation results were compared to the ERA-Interim [6] reanalysis data, which were considered as the observational data for this study. Indices averaged for the 1981–2010 year range were compared. Mann-Whitney test with 1% significance level was used to examine where changes are significant.

To evaluate the quality of simulation of extreme weather phenomena, the extreme indices were calculated [7] using the results of computations performed by two versions of the INM RAS climate model (INMCM4 and INMCM5) and the ERA Interim reanalysis. We took the root mean square deviation of the index value computed from the modeled and reanalysis data as the measure of simulation quality. The mean is averaged over the land.
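The evaluation just described can be sketched in a few lines. Everything below – the grid, the index fields, and the land mask – is a synthetic stand-in, and the Mann-Whitney test is applied to the two land-point distributions purely for illustration; the paper's exact test arrangement is not detailed here.

```python
# Sketch of the evaluation described above: compute an extreme index for
# model and reanalysis, take the root mean square deviation averaged over
# land, and test significance with a Mann-Whitney test at the 1% level.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
nlat, nlon = 45, 90                                  # hypothetical grid
model_index = rng.normal(10.0, 3.0, (nlat, nlon))    # e.g. R10mm, 1981-2010 mean
reanalysis_index = rng.normal(9.0, 3.0, (nlat, nlon))
land_mask = rng.random((nlat, nlon)) > 0.7           # True over land (assumed)

# Root mean square deviation over land points only
diff = model_index - reanalysis_index
rmsd = np.sqrt(np.mean(diff[land_mask] ** 2))

# Mann-Whitney U test between the two land-point distributions
stat, p = mannwhitneyu(model_index[land_mask], reanalysis_index[land_mask])
print(f"land RMSD = {rmsd:.2f}, significant at 1%: {p < 0.01}")
```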

Tables 1 and 2 present the names of extreme indices related to temperature and precipitation, their labels and measurement units, as well as the land only averaged standard deviations for these indices between the ERA-Interim reanalysis and different versions of the INM RAS climate model.

Table 1 shows that the simulation of almost all temperature indices has improved in the INMCM5 compared to INMCM4. In particular, the simulation of the following extreme indices related to the minimum daily temperature improved significantly (by 37–56%): the annual daily minimum temperature (TNn), the number of frost days (FD) and tropical nights (TR), the diurnal temperature range (DTR), and the growing season length (GSL).

[Comment: Note that values in these tables are standard deviations from observations as presented by ERA reanalysis. So for example, growing season length (GSL) varied from mean ERA values by 24 days in INMCM4, but improved to a 15 day differential in INMCM5.]
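As a quick check, the GSL numbers cited in the comment above reproduce the low end of the quoted 37–56% improvement range:

```python
# Improvement implied by the table's standard deviations for growing
# season length (GSL): 24 days (INMCM4) down to 15 days (INMCM5),
# both measured against ERA-Interim.
gsl_inmcm4 = 24.0   # days, INMCM4 vs ERA-Interim
gsl_inmcm5 = 15.0   # days, INMCM5 vs ERA-Interim
improvement = (gsl_inmcm4 - gsl_inmcm5) / gsl_inmcm4 * 100
print(f"GSL improvement: {improvement:.1f}%")  # 37.5%, within the 37-56% range
```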

Table 2 shows that the simulation of the number of heavy (R10mm) and very heavy (R20mm) precipitation days, consecutive wet days (CWD), simple daily intensity (SDII), and total wet-day precipitation (PRCPTOT) noticeably improved in INMCM5. At the same time, the simulation of indices related to the intensity (RX5day) and the amount (R95p) of precipitation on very rainy days became worse.

Improvements Added to INMCM5

To improve the simulation of extreme precipitation by the INMCM5 model, the following physical processes were considered: evaporation of precipitation in the upper atmosphere; mixing of horizontal velocity components due to large-scale condensation and deep convection; and air resistance acting on falling precipitation particles.

Both large-scale condensation and deep convection cause vertical motion, which redistributes the horizontal momentum between the nearby air layers. The implementation of mixing due to large-scale condensation was added to the model. For short we will refer to the INMCM5 version with these changes as INMCM5VM (INMCM5 Velocities Mixing).

Since precipitation particles (water droplets or ice crystals) move in the surrounding air, a drag force arises that carries the air along with the particles. This resistance force can be included in the right hand side of the momentum balance equation, which is part of the atmosphere hydrothermodynamic system of equations. Accurate accounting for the effect of this force requires numerical solving of an additional Poisson-type equation. For short, we will refer to the INMCM5 model version with the air resistance and vertical mixing of the horizontal velocity components as INMCM5AR (INMCM5 Air Resistance).

Figure 3. (a) RX5day index values averaged over 1981–2010 according to ERA-Interim data. (b-d)  Deviations of the same average obtained from INMCM5, INMCM5VM, and INMCM5AR data. Statistically insignificant deviations are presented as white.

Table 2 shows that the quality of simulation of all precipitation-related extreme indices in INMCM5AR either improved by 3–21 % compared to INMCM5 or remained unchanged.

Figures 2d, 3d show the spatial distribution of the deviations for max 1 day (RX1day) and 5 day (RX5day) precipitation according to INMCM5AR compared to INMCM5. The model with air resistance acting on falling precipitation particles compared to INMCM5 significantly underestimates RX1day and RX5day in South Africa, South and East Asia, and slightly underestimates the indicated extreme indices in Tibet.

Taking into account the air resistance acting on falling precipitation particles significantly reduces  the overestimation of RX1day and RX5day observed in INMCM5 in South Africa, South and East Asia, and leads to an improvement in the quality of extreme indices associated with the precipitation amount on very rainy days and their intensity simulation by 9–21 %. At the same time, a significant overestimation of the RX1day and RX5day indices in the Amazon basin and Southeast Asia, as well as their underestimation in West Africa, still remain.


A simple analysis shows that if the climate sensitivity estimated by INMCM5 (1.8C per doubling of CO2) were realized over the next 80 years, it would mean a continuation of the warming rate of the last 60 years. The accumulated rise in GMT would be about 1.2C for the 21st Century, well below the IPCC 1.5C aspiration. See I Want You Not to Panic
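The arithmetic behind that figure can be made explicit. The start and end CO2 concentrations below are illustrative assumptions, not values from the post, and the calculation treats the equilibrium sensitivity as if fully realized (the transient response would be lower):

```python
# Back-of-envelope warming implied by INMCM5's sensitivity of 1.8C per
# CO2 doubling. The 2020 and 2100 concentrations are assumed values for
# illustration only.
import math

ecs = 1.8              # C per CO2 doubling (INMCM5, from the post)
co2_2020 = 410.0       # ppm, assumed present-day value
co2_2100 = 650.0       # ppm, assumed end-of-century scenario
delta_t = ecs * math.log2(co2_2100 / co2_2020)
print(f"implied 21st-century warming: {delta_t:.1f} C")  # ~1.2 C
```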

Update February 4, 2020

A recent comparison of INMCM5 and other CMIP6 climate models is discussed in the post
Climate Models: Good, Bad and Ugly

Updated with October 25, 2018 Report

A previous analysis Temperatures According to Climate Models showed that only one of 42 CMIP5 models was close to hindcasting past temperature fluctuations. That model was INMCM4, which also projected an unalarming 1.4C warming to the end of the century, in contrast to the other models programmed for future warming five times the past.

In a recent comment thread, someone asked what has been done recently with that model, given that it appears to be “best of breed.” So I went looking and this post summarizes further work to produce a new, hopefully improved version by the modelers at the Institute of Numerical Mathematics of the Russian Academy of Sciences.

Institute of Numerical Mathematics, Russian Academy of Sciences, Moscow, Russia

A previous post a year ago went into the details of improvements made in producing the latest iteration INMCM5 for entry into the CMIP6 project.  That text is reprinted below.

Now a detailed description of the model’s global temperature outputs has been published (October 25, 2018) in Earth System Dynamics: Simulation of observed climate changes in 1850–2014 with climate model INM-CM5 (title links to the pdf). Excerpts below with my bolds.

Figure 1. The 5-year mean GMST (K) anomaly with respect to 1850–1899 for HadCRUTv4 (thick solid black); model mean (thick solid red). Dashed thin lines represent data from individual model runs: 1 – purple, 2 – dark blue, 3 – blue, 4 – green, 5 – yellow, 6 – orange, 7 – magenta. In this and the next figures numbers on the time axis indicate the first year of the 5-year mean.


Climate changes observed in 1850-2014 are modeled and studied on the basis of seven historical runs with the climate model INM-CM5 under the scenario proposed for the Coupled Model Intercomparison Project, Phase 6 (CMIP6). In all runs the global mean surface temperature rises by 0.8 K at the end of the experiment (2014), in agreement with the observations. Periods of fast warming in 1920-1940 and 1980-2000, as well as its slowdown in 1950-1975 and 2000-2014, are correctly reproduced by the ensemble mean. The notable change here with respect to the CMIP5 results is the correct reproduction of the slowdown of global warming in 2000-2014, which we attribute to a more accurate description of the Solar constant in the CMIP6 protocol. The model is able to reproduce the correct behavior of global mean temperature in 1980-2014 despite incorrect phases of the Atlantic Multidecadal Oscillation and Pacific Decadal Oscillation indices in the majority of experiments. The Arctic sea ice loss in recent decades is reasonably close to the observations in just one model run; overall the model underestimates Arctic sea ice loss by a factor of 2.5. The spatial pattern of the model mean surface temperature trend during the last 30 years looks close to the one for the ERA-Interim reanalysis. The model correctly estimates the magnitude of stratospheric cooling.

Additional Commentary

Observational data of GMST for 1850-2014 used for verification of model results were produced by HadCRUT4 (Morice et al 2012). Monthly mean sea surface temperature (SST) data ERSSTv4 (Huang et al 2015) are used for comparison of the AMO and PDO indices with that of the model. Data of Arctic sea ice extent for 1979-2014 derived from satellite observations are taken from Comiso and Nishio (2008). Stratospheric temperature trend and geographical distribution of near surface air temperature trend for 1979-2014 are calculated from ERA Interim reanalysis data (Dee et al 2011).

Keeping in mind the arguments that the GMST slowdown in the beginning of the 21st century could be due to the internal variability of the climate system, let us look at the behavior of the AMO and PDO climate indices. Here we calculated the AMO index in the usual way, as the SST anomaly in the Atlantic at the latitudinal band 0N-60N minus the anomaly of the GMST. Model and observed 5 year mean AMO index time series are presented in Fig.3. The well known oscillation with a period of 60-70 years can be clearly seen in the observations. Among the model runs, only one (dashed purple line) shows an oscillation with a period of about 70 years, but without a significant maximum near year 2000. In the other model runs there is no distinct oscillation with a period of 60-70 years; a period of 20-40 years prevails instead. As a result, none of the seven model trajectories reproduces the behavior of the observed AMO index after year 1950 (including its warm phase at the turn of the 20th and 21st centuries). One can conclude that anthropogenic forcing is unable to produce any significant impact on the AMO dynamics, as its index averaged over the 7 realizations stays around zero within a one sigma interval (0.08). Consequently, the AMO dynamics is controlled by internal variability of the climate system and cannot be predicted in historic experiments. On the other hand the model can correctly predict GMST changes in 1980-2014 while having the wrong phase of the AMO (blue, yellow, orange lines on Fig.1 and 3).
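The AMO definition used above (North Atlantic 0N-60N SST anomaly minus the GMST anomaly, plotted as 5-year means) can be sketched on synthetic annual series; real use would start from gridded SST fields, and the series below are stand-ins with a rough 65-year oscillation built in.

```python
# AMO index as defined above: North Atlantic (0-60N) SST anomaly minus
# the GMST anomaly, then a 5-year running mean as in the paper's Fig. 3.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1850, 2015)
gmst_trend = 0.005 * (years - 1850)                   # synthetic warming trend
natl_sst_anom = (gmst_trend
                 + 0.3 * np.sin(2 * np.pi * (years - 1850) / 65)
                 + 0.05 * rng.normal(size=years.size))
gmst_anom = gmst_trend + 0.05 * rng.normal(size=years.size)

amo = natl_sst_anom - gmst_anom                       # annual AMO index
amo_5yr = np.convolve(amo, np.ones(5) / 5, mode="valid")  # 5-year means
print(f"AMO 5-yr index range: {amo_5yr.min():.2f} to {amo_5yr.max():.2f}")
```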


Seven historical runs for 1850-2014 with the climate model INM-CM5 were analyzed. It is shown that the magnitude of the GMST rise in the model runs agrees with the estimate based on the observations. All model runs reproduce the stabilization of GMST in 1950-1970, fast warming in 1980-2000 and a second GMST stabilization in 2000-2014, suggesting that the major factor for predicting GMST evolution is the external forcing rather than system internal variability. Numerical experiments with the previous model version (INMCM4) for CMIP5 showed unrealistic gradual warming in 1950-2014. The difference between the two model results could be explained by more accurate modeling of stratospheric volcanic and tropospheric anthropogenic aerosol radiation effects (stabilization in 1950-1970) due to the new aerosol block in INM-CM5, and more accurate prescription of the Solar constant scenario (stabilization in 2000-2014) in the CMIP6 protocol. Four of seven INM-CM5 model runs simulate the acceleration of warming in 1920-1940 in a correct way; the other three produce it earlier or later than in reality. This indicates that for the 1920-1940 warming the climate system's natural variability plays a significant role. No model trajectory reproduces the correct time behavior of the AMO and PDO indices. Taking into account our results on the GMST modeling, one can conclude that anthropogenic forcing does not produce any significant impact on the dynamics of the AMO and PDO indices, at least for the INM-CM5 model. In turn, correct prediction of the GMST changes in 1980-2014 does not require correct phases of the AMO and PDO, as all model runs have correct values of the GMST while in at least three model experiments the phases of the AMO and PDO are opposite to the observed ones at that time. The North Atlantic SST time series produced by the model correlates better with the observations in 1980-2014. Three out of seven trajectories have a strongly positive North Atlantic SST anomaly, as observed (in the other four cases we see near-to-zero changes for this quantity). The INMCM5 has the same skill for prediction of the Arctic sea ice extent in 2000-2014 as the CMIP5 models, including INMCM4. It underestimates the rate of sea ice loss by a factor between two and three. In one extreme case the magnitude of this decrease is as large as in the observations, while in the other the sea ice extent does not change compared to the preindustrial ages. In part this could be explained by the strong internal variability of the Arctic sea ice, but obviously the new version of the INMCM model and the new CMIP6 forcing protocol do not improve prediction of the Arctic sea ice extent response to anthropogenic forcing.

Previous Post:  Climate Model Upgraded: INMCM5 Under the Hood

Earlier in 2017 came this publication Simulation of the present-day climate with the climate model INMCM5 by E.M. Volodin et al. Excerpts below with my bolds.

In this paper we present the fifth generation of the INMCM climate model that is being developed at the Institute of Numerical Mathematics of the Russian Academy of Sciences (INMCM5). The most important changes with respect to the previous version (INMCM4) were made in the atmospheric component of the model. Its vertical resolution was increased to resolve the upper stratosphere and the lower mesosphere. A more sophisticated parameterization of condensation and cloudiness formation was introduced as well. An aerosol module was incorporated into the model. The upgraded oceanic component has a modified dynamical core optimized for better implementation on parallel computers and has two times higher resolution in both horizontal directions.

Analysis of the present-day climatology of the INMCM5 (based on the data of historical run for 1979–2005) shows moderate improvements in reproduction of basic circulation characteristics with respect to the previous version. Biases in the near-surface temperature and precipitation are slightly reduced compared with INMCM4 as  well as biases in oceanic temperature, salinity and sea surface height. The most notable improvement over INMCM4 is the capability of the new model to reproduce the equatorial stratospheric quasi-biannual oscillation and statistics of sudden stratospheric warmings.


The family of INMCM climate models, as most climate system models, consists of two main blocks: the atmosphere general circulation model, and the ocean general circulation model. The atmospheric part is based on the standard set of hydrothermodynamic equations with hydrostatic approximation written in advective form. The model prognostic variables are wind horizontal components, temperature, specific humidity and surface pressure.

Atmosphere Module

The INMCM5 borrows most of the atmospheric parameterizations from its previous version. One of the few notable changes is the new parameterization of clouds and large-scale condensation. In the INMCM5 cloud area and cloud water are computed prognostically according to Tiedtke (1993). That includes the formation of large-scale cloudiness as well as the formation of clouds in the atmospheric boundary layer and clouds of deep convection. Decrease of cloudiness due to mixing with unsaturated environment and precipitation formation are also taken into account. Evaporation of precipitation is implemented according to Kessler (1969).

In the INMCM5 the atmospheric model is complemented by the interactive aerosol block, which is absent in the INMCM4. Concentrations of coarse and fine sea salt, coarse and fine mineral dust, SO2, sulfate aerosol, hydrophilic and hydrophobic black and organic carbon are all calculated prognostically.

Ocean Module

The oceanic module of the INMCM5 uses generalized spherical coordinates. The model “South Pole” coincides with the geographical one, while the model “North Pole” is located in Siberia beyond the ocean area to avoid numerical problems near the pole. A vertical sigma-coordinate is used. The finite-difference equations are written using the Arakawa C-grid. The differential and finite-difference equations, as well as methods of solving them, can be found in Zalesny et al. (2010).

The INMCM5 uses explicit schemes for advection, while the INMCM4 used schemes based on splitting upon coordinates. Also, the iterative method for solving linear shallow water equation systems is used in the INMCM5 rather than direct method used in the INMCM4. The two previous changes were made to improve model parallel scalability. The horizontal resolution of the ocean part of the INMCM5 is 0.5 × 0.25° in longitude and latitude (compared to the INMCM4’s 1 × 0.5°).

Both the INMCM4 and the INMCM5 have 40 levels in the vertical. The parallel implementation of the ocean model can be found in Terekhov et al. (2011). The oceanic block includes vertical mixing and isopycnal diffusion parameterizations (Zalesny et al. 2010). Sea ice dynamics and thermodynamics are parameterized according to Iakovlev (2009). Assumptions of elastic-viscous-plastic rheology and a single ice thickness gradation are used. The time step in the oceanic block of the INMCM5 is 15 min.

Note the size of the human emissions next to the red arrow.

Carbon Cycle Module

The climate model INMCM5 has a carbon cycle module (Volodin 2007), in which atmospheric CO2 concentration and carbon in vegetation, soil and ocean are calculated. In soil, a single carbon pool is considered. In the ocean, the only prognostic variable in the carbon cycle is total inorganic carbon. The biological pump is prescribed. The model calculates methane emission from wetlands and has a simplified methane cycle (Volodin 2008). Parameterizations of some electrical phenomena, including calculation of the ionospheric potential and flash intensity (Mareev and Volodin 2014), are also included in the model.

Surface Temperatures

When compared to the INMCM4 surface temperature climatology, the INMCM5 shows several improvements. Negative bias over continents is reduced mainly because of the increase in daily minimum temperature over land, which is achieved by tuning the surface flux parameterization. In addition, positive bias over southern Europe and eastern USA in summer typical for many climate models (Mueller and Seneviratne 2014) is almost absent in the INMCM5. A possible reason for this bias in many models is the shortage of soil water and suppressed evaporation leading to overestimation of the surface temperature. In the INMCM5 this problem was addressed by the increase of the minimum leaf resistance for some vegetation types.

Nevertheless, some problems migrate from one model version to the other: negative bias over most of the subtropical and tropical oceans, and positive bias over the Atlantic to the east of the USA and Canada. Root mean square (RMS) error of annual mean near surface temperature was reduced from 2.48 K in the INMCM4 to 1.85 K in the INMCM5.
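Summary statistics like those RMS errors are conventionally computed as area-weighted means over the globe. A minimal sketch with a synthetic bias field, assuming a regular latitude-longitude grid and cos(latitude) weighting (the paper's exact procedure is not spelled out here):

```python
# Area-weighted RMS of a near-surface temperature bias field (model minus
# observation), the usual way a single figure like 2.48 K or 1.85 K is
# obtained. The bias field here is synthetic.
import numpy as np

rng = np.random.default_rng(2)
lats = np.linspace(-89.0, 89.0, 90)                  # grid-cell center latitudes
bias = rng.normal(0.0, 2.0, (lats.size, 180))        # synthetic bias field, K

# cos(latitude) weights broadcast across all longitudes
weights = np.cos(np.deg2rad(lats))[:, None] * np.ones((1, 180))
rms = float(np.sqrt(np.average(bias ** 2, weights=weights)))
print(f"area-weighted RMS error: {rms:.2f} K")
```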


In mid-latitudes, the positive precipitation bias over the ocean prevails in winter while negative bias occurs in summer. Compared to the INMCM4, the biases over the western Indian Ocean, Indonesia, the eastern tropical Pacific and the tropical Atlantic are reduced. A possible reason for this is the better reproduction of the tropical sea surface temperature (SST) in the INMCM5 due to the increase of the spatial resolution in the oceanic block, as well as the new condensation scheme. RMS annual mean model bias for precipitation is 1.35mm day−1 for the INMCM5 compared to 1.60mm day−1 for the INMCM4.

Cloud Radiation Forcing

Cloud radiation forcing (CRF) at the top of the atmosphere is one of the most important climate model characteristics, as errors in CRF frequently lead to an incorrect surface temperature.

At high latitudes, model errors in shortwave CRF are small. The model underestimates longwave CRF in the subtropics but overestimates it at high latitudes. In the tropics, errors in longwave CRF tend to partially compensate errors in shortwave CRF. Both errors are positive near 60°S, producing a warm surface temperature bias there. As a result, the absolute value of the net CRF is somewhat underestimated at almost all latitudes except the tropics. Additional experiments with a tuned conversion of cloud water (ice) to precipitation (for upper-level cloudiness) showed that the model bias in net CRF could be reduced, but at the cost of an increased RMS bias in surface temperature.


A table from another paper provides the climate parameters described by INMCM5.

| Climate Parameter | Observations | INMCM3 | INMCM4 | INMCM5 |
|---|---|---|---|---|
| Incoming solar radiation at TOA | 341.3 [26] | 341.7 | 341.8 | 341.4 |
| Outgoing solar radiation at TOA | 96–100 [26] | 97.5 ± 0.1 | 96.2 ± 0.1 | 98.5 ± 0.2 |
| Outgoing longwave radiation at TOA | 236–242 [26] | 240.8 ± 0.1 | 244.6 ± 0.1 | 241.6 ± 0.2 |
| Solar radiation absorbed by surface | 154–166 [26] | 166.7 ± 0.2 | 166.7 ± 0.2 | 169.0 ± 0.3 |
| Solar radiation reflected by surface | 22–26 [26] | 29.4 ± 0.1 | 30.6 ± 0.1 | 30.8 ± 0.1 |
| Longwave radiation balance at surface | –54 to –58 [26] | –52.1 ± 0.1 | –49.5 ± 0.1 | –63.0 ± 0.2 |
| Solar radiation reflected by atmosphere | 74–78 [26] | 68.1 ± 0.1 | 66.7 ± 0.1 | 67.8 ± 0.1 |
| Solar radiation absorbed by atmosphere | 74–91 [26] | 77.4 ± 0.1 | 78.9 ± 0.1 | 81.9 ± 0.1 |
| Direct heat flux from surface | 15–25 [26] | 27.6 ± 0.2 | 28.2 ± 0.2 | 18.8 ± 0.1 |
| Latent heat flux from surface | 70–85 [26] | 86.3 ± 0.3 | 90.5 ± 0.3 | 86.1 ± 0.3 |
| Cloud amount, % | 64–75 [27] | 64.2 ± 0.1 | 63.3 ± 0.1 | 69 ± 0.2 |
| Solar radiation cloud forcing at TOA | –47 [26] | –42.3 ± 0.1 | –40.3 ± 0.1 | –40.4 ± 0.1 |
| Longwave radiation cloud forcing at TOA | 26 [26] | 22.3 ± 0.1 | 21.2 ± 0.1 | 24.6 ± 0.1 |
| Near-surface air temperature, °C | 14.0 ± 0.2 [26] | 13.0 ± 0.1 | 13.7 ± 0.1 | 13.8 ± 0.1 |
| Precipitation, mm/day | 2.5–2.8 [23] | 2.97 ± 0.01 | 3.13 ± 0.01 | 2.97 ± 0.01 |
| River water inflow to the World Ocean, 10^3 km^3/year | 29–40 [28] | 21.6 ± 0.1 | 31.8 ± 0.1 | 40.0 ± 0.3 |
| Snow coverage in Feb., mil. km^2 | 46 ± 2 [29] | 37.6 ± 1.8 | 39.9 ± 1.5 | 39.4 ± 1.5 |
| Permafrost area, mil. km^2 | 10.7–22.8 [30] | 8.2 ± 0.6 | 16.1 ± 0.4 | 5.0 ± 0.5 |
| Land area prone to seasonal freezing in NH, mil. km^2 | 54.4 ± 0.7 [31] | 46.1 ± 1.1 | 48.3 ± 1.1 | 51.6 ± 1.0 |
| Sea ice area in NH in March, mil. km^2 | 13.9 ± 0.4 [32] | 12.9 ± 0.3 | 14.4 ± 0.3 | 14.5 ± 0.3 |
| Sea ice area in NH in Sept., mil. km^2 | 5.3 ± 0.6 [32] | 4.5 ± 0.5 | 4.5 ± 0.5 | 6.1 ± 0.5 |

Heat flux units are W/m^2; other units are given with the name of the corresponding parameter. Where possible, ± shows the standard deviation of the annual mean value. Source: Simulation of Modern Climate with the New Version of the INM RAS Climate Model (bracketed numbers refer to sources for observations).

Ocean Temperature and Salinity

The model biases in potential temperature and salinity averaged over longitude with respect to WOA09 (Antonov et al. 2010) are shown in Fig. 12. Positive bias in the Southern Ocean penetrates from the surface down to 300 m, while negative bias in the tropics can be seen even in the 100–1000 m layer.

Nevertheless, the zonal mean temperature error at any level from the surface to the bottom is small. This was not the case for the INMCM4, where one could see a negative temperature bias of up to 2–3 K from 1.5 km to the bottom at nearly all latitudes, and a 2–3 K positive bias at 700–1000 m. The reason for this improvement is the introduction of a higher background coefficient for vertical diffusion at great depth (3000 m and deeper) than at intermediate depth (300–500 m). The positive temperature bias at 45–65°N at all depths can probably be explained by shortcomings in the representation of deep convection [similar errors can be seen for most of the CMIP5 models (Flato et al. 2013, their Fig. 9.13)].

Another feature common to many present-day climate models (including the INMCM5) is a negative salinity bias in the southern tropical ocean from the surface to 500 m. It can be explained by overestimation of precipitation at the southern branch of the Intertropical Convergence Zone. Meridional heat flux in the ocean (Fig. 13) is not far from available estimates (Trenberth and Caron 2001). It looks similar to that of the INMCM4, but the maximum northward transport in the Atlantic in the INMCM5 is about 0.1–0.2 × 10^15 W higher than in the INMCM4, probably because of the increased horizontal resolution in the oceanic block.

Sea Ice

In the Arctic, the model sea ice area is only slightly overestimated. The overestimation of Arctic sea ice area is connected with a negative bias in the surface temperature. At the same time, a connection between the sea ice area error and the positive salinity bias is not evident, because ice formation is almost compensated by ice melting, and the total salinity source from this pair of processes is not large. The amplitude and phase of the sea ice annual cycle are reproduced correctly by the model. In the Antarctic, sea ice area is underestimated by a factor of 1.5 in all seasons, apparently due to the positive temperature bias. Note that correctly simulating sea ice area dynamics in both hemispheres simultaneously is a difficult task for climate modeling.

Analysis of the model time series of SST anomalies shows that the El Niño event frequency is approximately the same in the model and the data, but the model El Niños occur too regularly. The atmospheric response to El Niño events is also underestimated in the model by a factor of 1.5 with respect to the reanalysis data.


The next version of the Institute of Numerical Mathematics RAS climate model (INMCM5) was developed from the CMIP5 model INMCM4. The most important changes include new parameterizations of large-scale condensation (cloud fraction and cloud water are now prognostic variables) and increased vertical resolution in the atmosphere (73 vertical levels instead of 21, with the top model level raised from 30 to 60 km). In the oceanic block, horizontal resolution was increased by a factor of 2 in both directions.

The climate model was supplemented by an aerosol block. The model received new parallel code with improved computational efficiency and scalability. With the new version of the climate model we performed a test run (80 years) to simulate present-day Earth climate. The model mean state was compared with the available datasets. The structures of the surface temperature and precipitation biases in the INMCM5 are typical of present-day climate models. Nevertheless, the RMS errors in surface temperature and precipitation, as well as in zonal mean temperature and zonal wind, are reduced in the INMCM5 with respect to its previous version, the INMCM4.

The model is capable of reproducing the equatorial stratospheric quasi-biennial oscillation (QBO) and sudden stratospheric warmings (SSWs). The model biases for sea surface height and surface salinity are reduced in the new version as well, probably due to the increased spatial resolution in the oceanic block. The bias in ocean potential temperature at depths below 700 m in the INMCM5 is also reduced with respect to the INMCM4, likely because of the tuned background vertical diffusion coefficient.

Model sea ice area is reproduced well enough in the Arctic, but is underestimated in the Antarctic (as a result of the overestimated surface temperature). RMS error in the surface salinity is reduced almost everywhere compared to the previous model except the Arctic (where the positive bias becomes larger). As a final remark one can conclude that the INMCM5 is substantially better in almost all aspects than its previous version and we plan to use this model as a core component for the coming CMIP6 experiment.


On the one hand, this model example shows that the intent is simple: to represent dynamically the energy balance of our planetary climate system. On the other hand, the model description shows how many parameters are involved and how complex the interacting processes are. The attempt to simulate the operations of the climate system is a monumental task with many outstanding challenges, and this latest version is another step in an iterative development.

Note: Regarding the influence of rising CO2 on the energy balance: global warming advocates estimate a CO2 perturbation of 4 W/m^2. In the climate parameters table above, observations of the radiation fluxes have an error range of 2 W/m^2 at best, and in several cases the observed ranges span 10 to 15 W/m^2.

We do not yet have access to the time series temperature outputs from INMCM5 to compare with observations or with other CMIP6 models.  Presumably that will happen in the future.

Early Schematic: Flows and Feedbacks for Climate Models

I Want You Not to Panic


I’ve been looking into claims for concern over rising CO2 and temperatures, and this post provides reasons why the alarms are exaggerated. It involves looking into the data and how it is interpreted.

First, the longer view suggests where to focus for understanding. Consider a long-term temperature record such as HadCRUT4. Taking it at face value, and setting aside concerns about revisions and adjustments, we can see the pattern of the last 120 years following the Little Ice Age. The period between 1850 and 1900 is often considered preindustrial, since modern energy and machinery took hold later on. The graph shows that warming was not much of a factor until temperatures rose to a peak in the 1940s, then cooled off into the 1970s, before ending the century with a rise matching the rate of the earlier warming. Overall, the accumulated warming was 0.8°C.

Next, consider the record of CO2 concentrations in the atmosphere. It is important to know that modern measurement of CO2 really began in 1959 with the Mauna Loa observatory, coinciding with the mid-century cool period. The earlier values in the chart are reconstructed by NASA GISS from various sources and calibrated to reconcile with the modern record. It is also evident that the first 60 years saw minimal change in the values compared to the rise after 1959, once WWII ended and manufacturing turned from military production to meeting consumer needs. So again the mid-20th century appears as a change point.

It becomes interesting to look at the last 60 years of temperature and CO2, from 1959 to 2019, particularly with so much clamour about climate emergency and crisis. This graph puts together rising CO2 and temperatures for the period. First, note that the accumulated warming is about 0.8°C after fluctuations. And remember that those decades witnessed great human flourishing and prosperity by any standard of life quality. The rise of CO2 was a steady, monotonic climb with some acceleration into the 21st century.

Now let’s look at projections into the future, bearing in mind Mark Twain’s warning not to trust future predictions. No scientist knows all or most of the surprises that overturn continuity from today to tomorrow. Still, as weathermen well know, the best forecasts are built from present conditions and adding some changes going forward.

Here is a look to century end as a baseline for context. No one knows what cooling and warming periods lie ahead, but one scenario is that the next 80 years see continued warming at the same rate as the last 60 years. That presumes the forces making the weather in the lifetimes of many of us seniors will continue to operate in the future. Of course, factors beyond our ken may cause deviations from that baseline, and humans will notice and adapt as they have always done. And in the back of our minds is the knowledge that we are 11,500 years into an interglacial period; the return of the cold is the greater threat to both humanity and the biosphere.

Those who believe CO2 causes warming advocate for reducing use of fossil fuels for fear of overheating, apparently discounting the need for energy should winters grow harsher. The graph shows one projection similar to that of temperature, showing the next 80 years accumulating at the same rate as the last 60. A second projection in green takes the somewhat higher rate of the last 10 years and projects it to century end. The latter trend would achieve a doubling of CO2.
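The extrapolation behind those two CO2 lines is simple; here is a sketch with assumed round numbers (roughly 410 ppm today, rates of 2 and 2.5 ppm/yr), not the exact figures behind the graph.

```python
# Linear extrapolation of CO2 concentration to century end. The start value
# and growth rates are assumed round numbers, not the graph's exact data.
def co2_projection(start_year, start_ppm, rate_ppm_per_yr, end_year):
    return start_ppm + rate_ppm_per_yr * (end_year - start_year)

baseline = co2_projection(2020, 410.0, 2.0, 2100)  # rate like the last 60 years
faster = co2_projection(2020, 410.0, 2.5, 2100)    # rate like the last decade
# faster -> 610 ppm, roughly double the ~300 ppm early-20th-century level
```

At the faster rate the 2100 value is about double the roughly 300 ppm level of the early 20th century, which is the doubling referred to here.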

What those two scenarios mean depends on how sensitive you think Global Mean Temperature is to changing CO2 concentrations. Climate models attempt to consider all relevant and significant factors and produce future scenarios for GMT. CMIP6 is the current group of models displaying a wide range of warming presumably from rising CO2. The one model closely replicating Hadcrut4 back to 1850 projects 1.8C higher GMT for a doubling of CO2 concentrations. If that held true going from 300 ppm to 600 ppm, the trend would resemble the red dashed line continuing the observed warming from the past 60 years: 0.8C up to now and another 1C the rest of the century. Of course there are other models programmed for warming 2 or 3 times the rate observed.
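The arithmetic linking a per-doubling sensitivity to a concentration change is logarithmic; here is a sketch using the 1.8°C figure quoted for the best-tracking model (the logarithmic relation is standard, the numbers come from the text above).

```python
import math

# Warming implied by a CO2 change under a fixed per-doubling sensitivity:
# dT = S * log2(C / C0). A sketch, not any model's actual code.
def warming(sensitivity_per_doubling, c0_ppm, c_ppm):
    return sensitivity_per_doubling * math.log2(c_ppm / c0_ppm)

warming(1.8, 300.0, 600.0)  # a full doubling -> 1.8 C
warming(1.8, 300.0, 450.0)  # a half-way point on the log scale gives less
```

Because the relation is logarithmic, the first 150 ppm of increase produces more warming than the second 150 ppm.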

People who take to the streets with signs forecasting doom in 11 or 12 years have fallen victim to IPCC 450 and 430 scenarios.  For years activists asserted that warming from pre industrial can be contained to 2C if CO2 concentrations peak at 450 ppm.  Last year, the SR1.5 lowered the threshold to 430 ppm, thus the shortened timetable for the end of life as we know it.

For the sake of brevity, this post leaves aside many technical issues. Uncertainties about the temperature record, and about early CO2 levels, and the questions around Equilibrium CO2 Sensitivity (ECS) and Transient CO2 Sensitivity (TCS) are for another day. It should also be noted that GMT as an average hides huge variety of fluxes over the globe surface, and thus larger warming in some places such as Canada, and cooling in other places like Southeast US. Ross McKitrick pointed out that Canada has already gotten more than 1.5C of warming and it has been a great social, economic and environmental benefit.

So I want people not to panic about global warming/climate change. Should we do nothing? On the contrary, we must invest in robust infrastructure to ensure reliable affordable energy and to protect against destructive natural events. And advanced energy technologies must be developed for the future since today’s wind and solar farms will not suffice.

It is good that Greta’s demands were unheeded at the Davos gathering. Panic is not useful for making wise policies, and as you can see above, we have time to get it right.

Climate Models: Good, Bad and Ugly

Several posts here discuss INM-CM4, the Good CMIP5 climate model since it alone closely replicates the Hadcrut temperature record, as well as approximating BEST and satellite datasets. This post is prompted by recent studies comparing various CMIP6 models, the new generation intending to hindcast history through 2014, and forecast to 2100.


Much revealing information is provided in an AGU publication Causes of Higher Climate Sensitivity in CMIP6 Models by Mark D. Zelinka et al. (2019). H/T Judith Curry.  Excerpts in italics with my bolds.

The severity of climate change is closely related to how much the Earth warms in response to greenhouse gas increases. Here we find that the temperature response to an abrupt quadrupling of atmospheric carbon dioxide has increased substantially in the latest generation of global climate models. This is primarily because low cloud water content and coverage decrease more strongly with global warming, causing enhanced planetary absorption of sunlight—an amplifying feedback that ultimately results in more warming. Differences in the physical representation of clouds in models drive this enhanced sensitivity relative to the previous generation of models. It is crucial to establish whether the latest models, which presumably represent the climate system better than their predecessors, are also providing a more realistic picture of future climate warming.

The objective is to understand why the models are getting badder and uglier, and whether the increased warming is realistic. This issue was previously noted by John Christy last summer:

Figure 8: Warming in the tropical troposphere according to the CMIP6 models.
Trends 1979–2014 (except the rightmost model, which is to 2007), for 20°N–20°S, 300–200 hPa.

Christy’s comment: We are just starting to see the first of the next generation of climate models, known as CMIP6. These will be the basis of the IPCC assessment report, and of climate and energy policy for the next 10 years. Unfortunately, as Figure 8 shows, they don’t seem to be getting any better. The observations are in blue on the left. The CMIP6 models, in pink, are also warming faster than the real world. They actually have a higher sensitivity than the CMIP5 models; in other words, they’re apparently getting worse! This is a big problem.

Why CMIP6 Models Are More Sensitive

Zelinka et al. (2019) delve into the issue by comparing attributes of the CMIP6 models currently available for diagnostics.

1 Introduction

Determining the sensitivity of Earth's climate to changes in atmospheric carbon dioxide (CO2) is a fundamental goal of climate science. A typical approach for doing so is to consider the planetary energy balance at the top of the atmosphere (TOA), represented as

ΔN = ΔF + λΔT

where ΔN is the net TOA radiative flux anomaly, ΔF is the radiative forcing, λ is the radiative feedback parameter, and ΔT is the global mean surface air temperature anomaly. The sign convention is that ΔN is positive down and λ is negative for a stable system.

Conceptually, this equation states that the TOA energy imbalance can be expressed as the sum of the radiative forcing and the radiative response of the system to a global surface temperature anomaly. The assumption that the radiative damping can be expressed as the product of a time-invariant feedback parameter λ and the global mean surface temperature anomaly is useful but imperfect (Armour et al., 2013; Ceppi & Gregory, 2019). Under this assumption, one can estimate the effective climate sensitivity (ECS), the ultimate global surface temperature change that would restore TOA energy balance:

ECS = −ΔF_2x / λ

where ΔF_2x is the radiative forcing due to doubled CO2.

ECS therefore depends on the magnitude of the CO2 radiative forcing and on how strongly the climate system radiatively damps planetary warming. A climate system that more effectively radiates thermal energy to space or more strongly reflects sunlight back to space as it warms (larger magnitude of λ) will require less warming to restore planetary energy balance in response to a positive radiative forcing, and vice versa.
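To make that dependence concrete, here is the relation ECS = −F_2x/λ with representative numbers plugged in; F_2x ≈ 3.7 W/m^2 is a commonly cited value for the doubled-CO2 forcing, and the feedback values are purely illustrative.

```python
# ECS = -F_2x / lambda. F_2x ~ 3.7 W/m^2 is a commonly cited forcing for
# doubled CO2; the feedback values below are illustrative, not model output.
def ecs(f_2x, lam):
    assert lam < 0, "a stable climate requires a negative net feedback"
    return -f_2x / lam

ecs(3.7, -1.0)  # weak damping -> 3.7 K
ecs(3.7, -2.0)  # damping twice as strong -> half the sensitivity, 1.85 K
```

The example shows why a small weakening of the net negative feedback (λ closer to zero), such as the CMIP6 cloud feedback changes discussed below, can raise ECS substantially.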

Because GCMs attempt to represent all relevant processes governing Earth's response to CO2, they provide the most direct means of estimating ECS. ECS values diagnosed from CO2 quadrupling experiments performed in fully coupled GCMs as part of the fifth phase of the Coupled Model Intercomparison Project (CMIP5) ranged from 2.1 to 4.7 K. It is already known that several models taking part in CMIP6 have values of ECS exceeding the upper limit of this range. These include CanESM5.0.3, CESM2, CNRM-CM6-1, E3SMv1, and both HadGEM3-GC3.1 and UKESM1.

In all of these models, high ECS values are at least partly attributed to larger cloud feedbacks than their predecessors.

In this study, we diagnose the forcings, feedbacks, and ECS values in all available CMIP6 models. We assess in each model the individual components that make up the climate feedback parameter and quantify the contributors to intermodel differences in ECS. We also compare these results with those from CMIP5 to determine whether the multimodel mean or spread in ECS, feedbacks, and forcings have changed.

The range of ECS values across models has widened in CMIP6, particularly on the high end, and now includes nine models with values exceeding the CMIP5 maximum (Figure 1a). Specifically, the range has increased from 2.1–4.7 K in CMIP5 to 1.8–5.6 K in CMIP6, and the intermodel variance has significantly increased (p = 0.04).

One model’s ECS is below the CMIP5 minimum (INM‐CM4‐8).

This increased population of high-ECS models has caused the multimodel mean ECS to increase from 3.3 K in CMIP5 to 3.9 K in CMIP6. Though substantial, this increase is not statistically significant (p = 0.16). The effective radiative forcing from CO2 doubling (ERF_2x) has increased slightly on average in CMIP6, and its intermodel standard deviation has been reduced by nearly 30%, from 0.50 W/m^2 in CMIP5 to 0.36 W/m^2 in CMIP6 (Figure 1b).

This ECS increase is primarily attributable to an increased multimodel mean feedback parameter due to strengthened positive cloud feedbacks, as all noncloud feedbacks are essentially unchanged on average in CMIP6. However, it is the unique combination of weak overall negative feedback and moderate radiative forcing that allows several CMIP6 models to achieve high ECS values beyond the CMIP5 range.

The increase in cloud feedback arises solely from the strengthened SW low cloud component, while the non‐low cloud feedback has slightly decreased. The SW low cloud feedback is larger on average in CMIP6 due to larger reductions in low cloud cover and weaker increases in cloud liquid water path with warming. Both of these changes are much more dramatic in the extratropics, such that the CMIP6 mean low cloud amount feedback is now stronger in the extratropics than in the tropics, and the fraction of multimodel mean ECS attributable to extratropical cloud feedback has roughly tripled.

The aforementioned increase in CMIP6 mean cloud feedback is related to changes in model representation of clouds. Specifically, both low cloud cover and water content increase less dramatically with SST in the middle latitudes as estimated from unforced climate variability in CMIP6.

Figure 1. INM-CM5 representation of temperature history. The 5-year mean GMST (K) anomaly with respect to 1850–1899 for HadCRUTv4 (thick solid black); model mean (thick solid red). Dashed thin lines represent data from individual model runs: 1 – purple, 2 – dark blue, 3 – blue, 4 – green, 5 – yellow, 6 – orange, 7 – magenta. In this and the following figures, numbers on the time axis indicate the first year of the 5-year mean.

The Nitty Gritty


The details are shown in the Supporting Information for "Causes of Higher Climate Sensitivity in CMIP6 Models". Here we can see how specific models stack up on the key variables driving ECS.


Figure S1. Gregory plots showing global and annual mean TOA net radiation anomalies
plotted against global and annual mean surface air temperature anomalies. Best-fit ordinary linear least squares lines are shown. The y-intercept of the line (divided by 2) provides an estimate of the effective radiative forcing from CO2 doubling (ERF2x), the slope of the line provides an estimate of the net climate feedback parameter (λ), and the x-intercept of the line (divided by 2) provides an estimate of the effective climate sensitivity (ECS). These values are printed in each panel. Models are ordered by ECS.
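The regression procedure described in this caption can be sketched with synthetic data; the "true" forcing and feedback below are invented for illustration, chosen so the recovered values land near typical magnitudes.

```python
import numpy as np

# Synthetic abrupt-4xCO2 run obeying N = F_4x + lambda*T + noise, with an
# assumed F_4x = 7.4 W/m^2 and lambda = -1.0 W/m^2/K (illustrative values).
rng = np.random.default_rng(42)
T = np.linspace(0.5, 7.0, 150)                    # surface warming anomalies (K)
N = 7.4 - 1.0 * T + rng.normal(0.0, 0.3, T.size)  # TOA imbalance (W/m^2)

lam, intercept = np.polyfit(T, N, 1)  # ordinary least squares fit
ERF2x = intercept / 2.0               # y-intercept / 2
ECS = -intercept / lam / 2.0          # x-intercept / 2
# The fit recovers roughly ERF2x ~ 3.7 W/m^2, lambda ~ -1.0, ECS ~ 3.7 K
```

The factor of 2 appears because the experiment quadruples CO2 while ERF2x and ECS are defined per doubling, assuming forcing is approximately logarithmic in concentration.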


Figure S7. Contributions of forcing and feedbacks to ECS in each model and for the multimodel means. Contributions from the tropical and extratropical portion of the feedback are shown in light and dark shading, respectively. Black dots indicate the ECS in each model, while upward and downward pointing triangles indicate contributions from non-cloud and cloud feedbacks, respectively. Numbers printed next to the multi-model mean bars indicate the cumulative sum of each plotted component. Numerical values are not printed next to residual, extratropical forcing, and tropical albedo terms for clarity. Models within each collection are ordered by ECS.


Figure S8. Cloud feedbacks due to low and non-low clouds in the (light shading) tropics and (dark shading) extratropics in each model and for the multi-model means. Non-low cloud feedbacks are separated into LW and SW components, and SW low cloud feedbacks are separated into amount and scattering components. “Others” represents the sum of LW low cloud feedbacks and the small difference between kernel- and APRP-derived SW low cloud feedback. Insufficient diagnostics are available to compute SW cloud amount and scattering feedbacks for the FGOALSg2 and CAMS-CSM1-0 models. Black dots indicate the global mean net cloud feedback in each model, while upward and downward pointing triangles indicate total contributions from non-low and low clouds, respectively. Models within each collection are ordered by global mean net cloud feedback.

My Summary

Once again the Good Model INM-CM4-8 is bucking the model builders' consensus. The new revised INM model has a reduced ECS, and it flipped its cloud feedback from positive to negative. The description of improvements made to the INM modules includes how clouds are handled:

One of the few notable changes is the new parameterization of clouds and large-scale condensation. In the INMCM5 cloud area and cloud water are computed prognostically according to Tiedtke (1993). That includes the formation of large-scale cloudiness as well as the formation of clouds in the atmospheric boundary layer and clouds of deep convection. Decrease of cloudiness due to mixing with unsaturated environment and precipitation formation are also taken into account. Evaporation of precipitation is implemented according to Kessler (1969).



Temperatures According to Climate Models: Initial Discovery of the Good Model INM-CM4 within CMIP5

Latest Results from First-Class Climate Model INMCM5: The new version improvements and historical validation


Climate Models Argue from False Premises

Roger Pielke Jr. Explains at Forbes If Climate Scenarios Are Wrong For 2020, Can They Get 2100 Right? Excerpts in italics with my bolds.

How we think and talk about climate policy is profoundly shaped by 31 different computer models which produce a wide range of scenarios of the future, starting from a base year of 2005. With 2020 right around the corner, we now have enough experience to ask how well these models are doing. Based on my preliminary analysis reported below, the answer appears to be not so well.


Climate policy discussions are framed by the assessment reports of the Intergovernmental Panel on Climate Change (IPCC). There are of course discussions that occur outside the boundaries of the IPCC, but the IPCC analyses carry enormous influence. At the center of the IPCC approach to climate policy analyses are scenarios of the future. The IPCC reports that its database contains 1,184 scenarios from 31 models.

Some of these scenarios are the basis for projecting future changes in climate (typically using what are called Global Climate Models or GCMs). Scenarios are also the basis for projecting future impacts of climate change, as well as the consequences of climate policy for the economy and environment (often using what are called Integrated Assessment Models or IAMs).

Chain of suppositions comprising Integrated Assessment Models.

Here I focus on two key metrics directly relevant to climate policy that come from the scenarios of fifth assessment report (AR5) of the IPCC: economic growth and atmospheric carbon dioxide concentrations. The scenarios of the AR5 begin in 2005 and most project futures to 2100, with some looking only to 2050. We now have almost 15 years of data to compare against projections, allowing us to assess how they are doing.

Economic Growth Scenarios Way Too High

Economic growth is important because it is one of the elements of the so-called Kaya Identity, the basis for projecting future carbon dioxide emissions, and a key input to GCMs that produce projections of future climate change. Economic growth, in the context of climate change is a double-edged sword. On the one hand, high rates of growth can mean more individual and societal wealth, which is generally viewed to be a good thing. On the other hand, high rates of economic growth, all else equal, means greater amounts of carbon dioxide emitted into the atmosphere, decidedly not a good thing.
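The Kaya Identity mentioned here decomposes emissions into population, affluence, energy intensity and carbon intensity; the sketch below uses illustrative round numbers, not IPCC scenario values.

```python
# Kaya identity: CO2 = population x (GDP/person) x (energy/GDP) x (CO2/energy).
# All inputs are illustrative round numbers, not IPCC scenario data.
def kaya_co2(population, gdp_per_capita, energy_per_gdp, co2_per_energy):
    return population * gdp_per_capita * energy_per_gdp * co2_per_energy

# ~7.7e9 people, ~$11,000 GDP/person, ~5 MJ per $, ~0.07 kg CO2 per MJ
annual_kg = kaya_co2(7.7e9, 11_000.0, 5.0, 0.07)
annual_gt = annual_kg / 1e12  # ~30 Gt CO2/yr, the right order of magnitude
```

Because the factors multiply, an overestimate of GDP growth propagates directly into overestimated emissions unless offset by falling energy or carbon intensity.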

The vast majority of scenarios reported by the IPCC AR5 include rates of economic growth (measured as GDP per capita using market exchange rates) that are greater than what has been observed since 2010. Specifically, more than 99.5% of IPCC AR5 scenarios – all but 5 of about 1,100 — have GDP growth rates for the period 2010 to 2020 in excess of that which has been observed in the real world from 2010 to 2018. The International Monetary Fund recently lowered its expectations for global economic growth in 2019 and 2020 to below that of 2018. So it seems highly unlikely that the real-world will “catch up.”

What is clear is that, to date, the vast majority of IPCC scenarios are far more aggressive in their projections of economic growth than has been observed. For the scenarios to “catch up” would require growth rates in future years even more aggressive than those built into the scenarios. If the IPCC projections are indeed too aggressive, then this has implications for the results of analyses that depend upon such assumptions for projecting future climate changes, impacts and the costs and benefits of policy action.

Models Overstate CO2 Concentrations in 2020

We see a similar aggressiveness in scenarios when looking at the concentration of carbon dioxide in the atmosphere. Based on data from the National Oceanic and Atmospheric Administration, in 2020 global carbon dioxide concentrations will be at about 413 parts per million (ppm). To put this into context, the oft-cited 2 degree Celsius temperature target is sometimes associated with a carbon dioxide concentration level of 450 ppm, and concentration levels are currently increasing by about 2-3 ppm per year.
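The time implied by those figures is simple division; here is a sketch using the 413 ppm value and the midpoint of the 2-3 ppm/yr range quoted above.

```python
# Years until a CO2 threshold at a constant growth rate. The 413 ppm level
# and the 2-3 ppm/yr range come from the text; 2.5 is the assumed midpoint.
def years_to_threshold(current_ppm, threshold_ppm, rate_ppm_per_yr):
    return (threshold_ppm - current_ppm) / rate_ppm_per_yr

years_to_threshold(413.0, 450.0, 2.5)  # -> 14.8 years to 450 ppm
```

At 2 ppm/yr the same gap takes 18.5 years, so the assumed rate matters as much as the threshold.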

All of the scenarios in the IPCC database that assume no climate policy (called reference scenarios) have carbon dioxide concentrations above 413 ppm. Across all scenarios, including those that assume successful implementation of climate policies such as a globally harmonized carbon price, 86% have concentrations levels above 413 ppm.

There is little evidence to suggest that climate policies have accelerated rates of decarbonization, leading to lower carbon dioxide concentrations than previously expected. One reason for this is that the world has not actually adopted climate policies of the sort assumed in policy scenarios. Thus, the fact that carbon dioxide concentrations in 2020 will be at the lower end of scenarios almost certainly says something about what is going on in the models rather than unexpected good news about climate policy success.

Flawed Scenarios Give False Projections

It seems obvious that we should ask hard questions of scenarios initiated in 2005 to project outcomes for 2050 or 2100 that fail to accurately describe what is observed in 2020. Individual scenarios are not predictions, but they can certainly be more or less consistent with how the world actually evolves. We should also ask questions when an entire set of scenarios collectively fails to encompass real-world observations – such as is the case with the reference scenarios of the IPCC AR5 database and actual atmospheric concentrations of carbon dioxide.

To the extent that flawed scenarios make their way into GCMs, we would be using misleading projections of climate futures and their probabilities, of possible future climate impacts and their likelihoods, and, crucially, of the costs and benefits of alternative approaches to climate policy. It is thus imperative that the forthcoming sixth IPCC assessment – or a separate group — ensures that its scenario database is consistent with real-world evidence, and that we understand why many scenarios have fared so poorly since 2005 with respect to key metrics.

More generally, it is important that the knowledge base that informs climate policy discussions be opened up to a broader diversity of methodologies and perspectives, and that all approaches are rigorously scrutinized. Climate policy is too important for anything less.

See also Models Wrong About the Past Produce Unbelievable Futures

And Unbelievable Climate Models

Beware getting sucked into any model, climate or otherwise.

El Nino’s Cold Tongue Baffles Climate Models

This post is prompted by an article published by Richard Seager et al. at AMS Journal Is There a Role for Human-Induced Climate Change in the Precipitation Decline that Drove the California Drought? Excerpts in italics with my bolds.


The recent California drought was associated with a persistent ridge at the west coast of North America that has been associated with, in part, forcing from warm SST anomalies in the tropical west Pacific. Here it is considered whether there is a role for human-induced climate change in favoring such a west coast ridge. The models from phase 5 of the Coupled Model Intercomparison Project do not support such a case either in terms of a shift in the mean circulation or in variance that would favor increased intensity or frequency of ridges. The models also do not support shifts toward a drier mean climate or more frequent or intense dry winters or to tropical SST states that would favor west coast ridges. However, reanalyses do show that over the last century there has been a trend toward circulation anomalies over the Pacific–North American domain akin to those during the height of the California drought.

Position of the Warm Pool in the western Pacific under La Niña conditions, and the convergence zone where the Warm Pool meets nutrient-enriched waters of the eastern equatorial Pacific. Tuna and their prey are most abundant in this convergence zone (source: HadISST).

First we plot together the history of California winter precipitation and Arctic sea ice anomaly in terms of area covered by ice at the annual minimum month of September and also as the November through April winter average (Fig. 9, top). While all three are of course negative during the drought years, there is no year-to-year relationship between these quantities.

Next we composite 200-mb height anomalies, U.S. precipitation, and sea ice concentration for the driest 15% of California winters during the period covered by sea ice data, and subtract the climatological winter values (Fig. 9, bottom). As in Seager et al. (2015), the composites show that when California is dry the entire western third of the United States tends to be dry, and that there is a high pressure ridge located immediately off the west coast which does not appear to be connected to a tropically sourced wave train. There also tends to be a trough over the North Atlantic, similar to winter 2013/14.

There are notable localized sea ice concentration anomalies, with increased ice in the Sea of Okhotsk, reduced ice in the Bering Sea, and increased ice in Hudson Bay and the Labrador Sea, though the anomalies are small. These ice anomalies are consistent with atmospheric forcing. The Sea of Okhotsk and Hudson Bay/Labrador Sea anomalies appear under northerly flow that would favor cold advection and increased ice. The Bering Sea anomaly appears under easterly flow that would drive ice offshore. As shown by Seager et al. (2015), the dry California winters are also associated with North Pacific SST anomalies forced by the atmospheric wave train, and the sea ice anomalies appear part of this feature rather than as causal drivers of the atmospheric circulation anomalies.

These analyses do not support the idea that variations in sea ice extent influence the prevalence of west coast ridges or dry winters in California.

Source: NASA

On the basis of the above analysis we conclude that the occurrence of persistent ridges at the west coast is more connected to SST anomalies than it is to sea ice anomalies. The CMIP5 model ensemble lends no support to the idea that ridge-inducing SST patterns become more likely as a result of rising GHGs. However, the models could be wrong so we next examine whether trends in observed SSTs lend any support to this idea. Trends were computed by straightforward linear least squares regression.
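The "straightforward linear least squares regression" mentioned above is simple to sketch. The series below is synthetic (an assumed trend plus noise), purely to illustrate the method, not the HadISST or reanalysis data used in the paper:

```python
import numpy as np

# Fit a linear trend to an annual SST-anomaly series by ordinary
# least squares. Data are synthetic: an assumed 0.007 C/yr trend
# plus Gaussian noise, for illustration only.
rng = np.random.default_rng(42)
years = np.arange(1900, 2016)
sst_anom = 0.007 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

slope, intercept = np.polyfit(years, sst_anom, 1)  # slope in C per year
print(f"Fitted trend: {slope * 10:.3f} C per decade")
```

With a century of data, the fitted slope recovers the assumed trend to well within the noise level.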

A number of features stand out in these trends regardless of the time period used.

  • Amid near-ubiquitous warming of the oceans the central equatorial Pacific stands out as a place that has not warmed.
  • The west–east SST gradient across the tropical Pacific has strengthened as the west Pacific has warmed.
  • Reanalysis precipitation increased over the Indian Ocean–Maritime Continent–tropical west Pacific and decreased over the central equatorial Pacific Ocean.
  • Tropical geopotential heights have increased at all longitudes.
  • There is a trend toward a localized high pressure ridge extending from the subtropics toward Alaska across western North America.

These associations in the trends—a strengthened west–east SST gradient across the tropical Pacific and localized high pressure at the North American west coast—are in line with every piece of evidence based on observations and SST-forced models presented so far that there is a connection between drought-inducing circulation anomalies and tropical Pacific SSTs. The mediating influence is seen in the precipitation trends that show enhanced zonal gradients of tropical Indo-Pacific precipitation and a marked increase centered over the Maritime Continent region.

Conclusions and discussion

We have examined whether there is any evidence, observational and/or model based, that the precipitation decline that drove the California drought was contributed to by human-driven climate change. Findings are as follows:

  • The CMIP5 model ensemble provides no evidence for mean drying or increased prevalence of dry winters for California or a shift toward a west coast ridge either in the mean or as a more common event. They also provide no evidence of a shift in tropical SSTs toward a state with an increased west–east SST gradient that has been invoked as capable of forcing a west coast ridge and drought.
  • Analysis of observations-based reanalyses shows that west coast ridges, akin to that in winter 2013/14, are related to an increased west–east SST gradient across the tropical Pacific Ocean and have repeatedly occurred over past decades though as imperfect analogs.
  • SST-forced models can reproduce such ridges and their connection to tropical SST anomalies.
  • Century-plus-long reanalyses and SST-forced models indicate a long-term trend toward circulation anomalies more akin to that of winter 2013/14.
  • The trends of heights and SSTs in the reanalyses also show both an increased west–east SST gradient and a 200-mb ridge over western North America that, in terms of association between ocean and atmospheric circulation, matches those found via the other analyses on interannual time scales.
  • However, SST-forced models when provided the trends in SSTs create a 200-mb ridge over the central North Pacific and, in general, a circulation pattern that cannot be said to truly match that in reanalyses.

So can a case be made that human-driven climate change contributed to the precipitation drop that drives the drought? Not from the simulations of historical climate and projections of future climate of the CMIP5 multimodel ensemble.

These simulations show no current or future increase in the likelihood or extremity of negative precipitation, precipitation minus evaporation, west coast ridges, or ridge-forcing tropical SST patterns. However, when examining the observational record a case can be made that the climate system has been moving in a direction that favors both a ridge over the west coast, which has a limited similarity to that observed in winter 2013/14, the driest winter of the drought, and a ridge-generating pattern of increased west–east SST gradient across the tropical Pacific Ocean with warm SSTs in the Indo–west Pacific region. This observations-based argument then gets tripped up by SST-forced models, which know about the trends in SST but fail to simulate a trend toward a west coast ridge. On the other hand, idealized modeling indicates that preferential warming in the Indo–west Pacific region does generate a west coast ridge.

To make the argument we outline above requires rejecting the CMIP5 ensemble as a guide to how tropical climate responds to increased radiative forcing since this tropical ocean response is at odds with what they do. To do so follows in the footsteps of Kohyama and Hartmann (2017, p. 4248), who correctly point out that “El Niño–like mean-state warming is only a ‘majority decision’ based on currently available GCMs, most of which exhibit unrealistic nonlinearity of the ENSO dynamics” (see also Kohyama et al. 2017). The implications of changing tropical SST gradients would extend far beyond just California and include most regions of the world sensitive to ENSO-generated climate anomalies.

We believe that the current state of observational information, analysis of it, and climate modeling does not allow a confident rejection of the CMIP5 model responses and/or a confident assertion of human role in the precipitation drop of the California drought. We also believe that for the same reasons a human role cannot be excluded.


The researchers set out to prove that man-made global warming contributes to droughts in California, but their findings put them in a quandary. The models include CO2 forcings, yet do not predict the conditions resulting in west coast droughts. They have to admit the models are wrong in this respect (what else do the models get wrong?). They cling to the hope that global warming can be tied to droughts, but have to admit there is no evidence from the failed models.


(a) Annual variation (Annual RMSE) of SST and Chl-a globally (units are °C/decade for SST and log(mg/m3/decade) for Chl-a). (b) The pattern of annual variation in the Bonney Upwelling, Southern Australia. (c) The pattern of annual variation in the Florida Current, South East USA.


A separate study is Global patterns of change and variation in sea surface temperature and chlorophyll by Piers K. Dunstan et al.

The cold tongue shows up as an equatorial Pacific region with little variability over the 14-year period of study. From the article:

The interaction between annual variation in SST and Chl-a provides insights into how and where linkages occur on annual time scales. Our analysis shows strong latitudinal bands associated with variation in seasonal warming (Fig. 4a). The equatorial Pacific, Indian and Atlantic Oceans are all characterised by very low annual RMSE for both SST and Chl-a. The mid latitudes of each ocean basin have higher variance in SST and/or Chl-a.

It’s Models All the Way Down

In Rapanos v. United States, Justice Antonin Scalia offered a version of the traditional tale of how the Earth is carried on the backs of animals. In this version of the story, an Eastern guru affirms that the earth is supported on the back of a tiger. When asked what supports the tiger, he says it stands upon an elephant; and when asked what supports the elephant he says it is a giant turtle. When asked, finally, what supports the giant turtle, he is briefly taken aback, but quickly replies “Ah, after that it is turtles all the way down.” By this analogy, Scalia was showing how other judges were substituting the “purpose” of a law for the actual text written by Congress.

The moral of the story is that our perceptions of reality are built upon assumptions. The facts from our experience are organized by means of a framework that provides a worldview, a mental model or paradigm of the way things are. Through the history of science, various pieces of that paradigm have been challenged and have collapsed when contradicted by fresh observations and measurements from experience. Today a small group of scientists have declared themselves climate experts and claim their computer models predict a dangerous future for the planet because of our energy choices.

The Climate Alarmist paradigm is described and refuted in an essay by John Christy published by GWPF The Tropical Skies: Falsifying climate alarm. The content comes from his presentation 23 May 2019 to a meeting in the Palace of Westminster in London, England. Excerpts in italics with my bolds

At the global level a significant discrepancy has been confirmed between empirical measurements and computer predictions.

“The global warming trend for the last 40 years, starting in 1979 when satellite measurements began, is +0.13C per decade or about half of what climate models predicted.”

Figure 3: Updating the estimate.
Redrawn from Christy and McNider 2017.

The top line is the actual temperature of the global troposphere, with the range of the original 1994 study shown as the shaded area. We were able to calculate and remove the El Niño effect, which accounts for a lot of the variance but has no trend to it. Then there are these two dips in global temperature after the El Chichón and Mt Pinatubo eruptions. Volcanic eruptions send aerosols up into the stratosphere, and these reflect sunlight, so fewer units of energy get in and the earth cools. I developed a mathematical function to simulate this, as shown in Figure 3d.

After eliminating the effect of volcanoes, we were left with a line that was approximately straight, apart from some noise. The trend, the dark line in Figure 3e, was 0.095◦C per decade, almost exactly the same as in our earlier study, 25 years before.

Our result is that the transient climate response – the short-term warming – in the troposphere is 1.1°C at the point in time when carbon dioxide levels double. This is not a very alarming number. If we perform the same calculation on the climate models, we get a figure of 2.31°C, which is significantly different. The models’ response to carbon dioxide is twice what we see in the real world. So the evidence indicates the consensus range for climate sensitivity is incorrect.
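The "twice what we see" claim follows directly from the two numbers quoted above – a trivial check, nothing more:

```python
# Ratio of the model-mean transient climate response to the
# observationally derived estimate quoted above.
tcr_observed_c = 1.1   # deg C at CO2 doubling, from the satellite-era analysis
tcr_models_c = 2.31    # deg C, same calculation applied to the climate models

ratio = tcr_models_c / tcr_observed_c
print(f"Model response is about {ratio:.1f}x the observed estimate")
```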

Almost all climate models have predicted rapid warming at high altitudes in the tropics due to greenhouse gas forcing.

“They all have rapid warming above 30,000 feet in the tropics – it’s effectively a diagnostic signal of greenhouse warming. But in reality it’s just not happening. It’s warming up there, but at only about one third of the rate predicted by the models.”

Figure 5: The hot spot in the Canadian model.
The y-axis is denominated in units of pressure, but the scale makes it linear in altitude.

Almost all of the models show such a warming, and none show it when extra greenhouse gas forcing is not included. Figure 6 shows the warming trends from 102 climate models; the average trend is 0.44°C per decade. This is quite fast: over 40 years it amounts to almost 2°C, although some models warm more slowly and some faster. However, the real-world warming is much lower, at around one third of the model average.
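The arithmetic behind those figures is simple enough to check, using only the numbers quoted above:

```python
# Cumulative warming implied by a constant decadal trend.
model_trend_per_decade = 0.44   # C/decade, average of the 102 model runs
decades = 4.0                   # 40 years

model_total = model_trend_per_decade * decades   # "almost 2 C"
observed_total = model_total / 3.0               # ~one third of the model average
print(f"Models: {model_total:.2f} C over 40 yr; observed: about {observed_total:.2f} C")
```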


Figure 7: Tropical mid-tropospheric temperatures, models vs. observations.
Models in pink, against various observational datasets in shades of blue. Five-year averages
1979–2017. Trend lines cross zero at 1979 for all series.

Figure 7 shows the model projections in pink and different observational datasets in shades of blue. You can also easily see the difference in warming rates: the models are warming too fast. The exception is the Russian model, which has much lower sensitivity to carbon dioxide, and therefore gives projections for the end of the century that are far from alarming. The rest of them are already falsified, and their predictions for 2100 can’t be trusted.

The next generation of climate models show that lessons are not being learned.

“An early look at some of the latest generation of climate models reveals they are predicting even faster warming. This is simply not credible.”

Figure 8: Warming in the tropical troposphere according to the CMIP6 models.
Trends 1979–2014 (except the rightmost model, which is to 2007), for 20°N–20°S, 300–200 hPa.

We are just starting to see the first of the next generation of climate models, known as CMIP6. These will be the basis of the IPCC assessment report, and of climate and energy policy for the next 10 years. Unfortunately, as Figure 8 shows, they don’t seem to be getting any better. The observations are in blue on the left. The CMIP6 models, in pink, are also warming faster than the real world. They actually have a higher sensitivity than the CMIP5 models; in other words, they’re apparently getting worse! This is a big problem.

Figure 9(b): Enlargement and simplification of the tropical troposphere in the Fifth Assessment Report.
The coloured bands represent the range of warming trends. Red is the model runs incorporating natural and anthropogenic forcings, blue is natural forcings only. The range of the observations is in grey.


So the rate of accumulation of joules of energy in the tropical troposphere is significantly less than predicted by the CMIP5 climate models. Will the next IPCC report discuss this long running mismatch? There are three possible ways they could handle the problem:
• The observations are wrong, the models are right.
• The forcings used in the models were wrong.
• The models are failed hypotheses.

I predict that the ‘failed hypothesis’ option will not be chosen. Unfortunately, that’s exactly what you should do when you follow the scientific method.

Models Wrong About the Past Produce Unbelievable Futures

Models vs. Observations. Christy and McKitrick (2018) Figure 3

The title of this post is the theme driven home by Patrick J. Michaels in his critique of the most recent US National Climate Assessment (NA4). The failure of General Circulation Models (GCMs) is the focal point of his presentation February 14, 2018. Comments on the Fourth National Climate Assessment. Excerpts in italics with my bolds.

NA4 uses a flawed ensemble of models that dramatically overforecast warming of the lower troposphere, with even larger errors in the upper tropical troposphere. The model ensemble also could not accommodate the “pause” or “slowdown” in warming between the two large El Niños of 1997-8 and 2015-6. The distribution of warming rates within the CMIP5 ensemble is not a true indication of a statistical range of prospective warming, as it is a collection of systematic errors. The Assessment’s glib claim to fulfill the terms of the federal Data Quality Act is fatuous: the use of systematically failing models does not satisfy the Act’s “maximizing the quality, objectivity, utility, and integrity of information” provision.

USGCRP should produce a reset Assessment, relying on a model or models that work in four dimensions for future guidance and ignoring the ones that don’t.

Why wasn’t this done to begin with? The model INM-CM4 is spot on, both at the surface and in the vertical, but using it would have largely meant the end of warming as a significant issue. Under a realistic emission scenario (which USGCRP also did not use), INM-CM4 strongly supports the “lukewarm” synthesis of global warming. Given the culture of alarmism that has infected the global change community since before the first (2000) Assessment, using this model would have been a complete turnaround with serious implications.

The new Assessment should employ best scientific practice, and one that weather forecasters use every day. In the climate sphere, billions of dollars are at stake, and reliable forecasts are also critical.

The theme is now picked up in the latest NIPCC report on Fossil Fuels. Chapter 2 is the Climate Science background and the statements below in italics with my bolds come from there.

Chapter 2 Climate Science Climate Change Reconsidered II: Fossil Fuels

Of the 102 model runs considered by Christy and McKitrick, only one comes close to accurately hindcasting temperatures since 1979: the INM-CM4 model produced by the Institute for Numerical Mathematics of the Russian Academy of Sciences (Volodin and Gritsun, 2018). That model projects only 1.4°C warming by the end of the century, similar to the forecast made by the Nongovernmental International Panel on Climate Change (NIPCC, 2013) and many scientists, a warming only one-third as much as the IPCC forecasts. Commenting on the success of the INM-CM model compared to the others (as shown in an earlier version of the Christy graphic), Clutz (2015) writes,

(1) INM-CM4 has the lowest CO2 forcing response, at 4.1K for 4xCO2. That is 37% lower than the multi-model mean.

(2) INM-CM4 has by far the highest climate system inertia: deep ocean heat capacity in INM-CM4 is 317 W yr m-2 K-1, 200% of the mean (which excluded INM-CM4 because it was such an outlier).

(3) INM-CM4 exactly matches observed atmospheric H2O content in the lower troposphere (215 hPa), and is biased low above that. Most others are biased high.

So the model that most closely reproduces the temperature history has high inertia from ocean heat capacities, low forcing from CO2 and less water for feedback. Why aren’t the other models built like this one?

The outputs of GCMs are only as reliable as the data and theories “fed” into them, which scientists widely recognize as being seriously deficient (Bray and von Storch, 2016; Strengers, et al., 2015). The utility and skillfulness of computer models are dependent on how well the processes they model are understood, how faithfully those processes are simulated in the computer code, and whether the results can be repeatedly tested so the models can be refined (Loehle, 2018). To date, GCMs have failed to deliver on each of these counts.

The reference above is to a study published in July 2018 by John Christy and Ross McKitrick, A Test of the Tropical 200- to 300-hPa Warming Rate in Climate Models. Excerpts in italics with my bolds.


Overall climate sensitivity to CO2 doubling in a general circulation model results from a complex system of parameterizations in combination with the underlying model structure. We refer to this as the model’s major hypothesis, and we assume it to be testable. We explain four criteria that a valid test should meet: measurability, specificity, independence, and uniqueness. We argue that temperature change in the tropical 200‐ to 300‐hPa layer meets these criteria. Comparing modeled to observed trends over the past 60 years using a persistence‐robust variance estimator shows that all models warm more rapidly than observations and in the majority of individual cases the discrepancy is statistically significant. We argue that this provides informative evidence against the major hypothesis in most current climate models.


All series-specific trends and confidence intervals are reported in the supporting information Table S1. The mean restricted trend (without a break term) is 0.325 ± 0.132°C per decade in the models and 0.173 ± 0.056°C per decade in the observations. With a break term included they are 0.389 ± 0.173°C per decade (models) and 0.142 ± 0.115°C per decade (observed). Figure 4 shows the individual trend magnitudes; the red circles and confidence interval whiskers are from models, and the blue are observed.

Figure 4: Trend magnitudes and 95% confidence intervals. The number in the upper left corner indicates the number of model trends (out of 102) that exceed the observed average trend.

If models accurately represented the magnitude of 200‐ to 300‐hPa warming with only nonsystematic errors contributing noise, these distributions would be centered on zero. Clearly, they are centered above zero, in fact in both the restricted and general cases, the entire distribution is above zero.

Table S2 presents individual run test results. In the restricted case, 62 of the 102 divergence terms are significant, while in the general case, 87 of 102 are. The model‐observational discrepancy is not simple uncertainty or random noise but represents a structural bias shared across models.

Worst and Best Models (Table S2)

Model            No Break   With Break
bcc-csm1-1          220.1        593.3
CanESM2             410.3        534.4
CCSM4               258.1        430.6
EC-EARTH            296.0        222.5
FIO-ESM             129.2        310.9
GISS-E2-H           157.3        444.8
GISS-E2-H-CC        139.0        468.5
GISS-E2-R           382.4        237.7
HadGEM2-ES           50.0        575.4
INMCM4                0.0          2.9

Note. First column: test score for restricted case (no break); the score is significant at 5% if it exceeds 41.53. Second column: test score for unrestricted case (with break at 1979); the score is significant at 5% if it exceeds 50.48.
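The pass/fail logic the note describes can be applied mechanically to the listed scores (only the ten runs shown here, not the full 102 of Table S2):

```python
# Flag runs whose divergence scores exceed the 5% critical values
# given in the note: 41.53 (no break) and 50.48 (break at 1979).
scores = {
    "bcc-csm1-1":   (220.1, 593.3),
    "CanESM2":      (410.3, 534.4),
    "CCSM4":        (258.1, 430.6),
    "EC-EARTH":     (296.0, 222.5),
    "FIO-ESM":      (129.2, 310.9),
    "GISS-E2-H":    (157.3, 444.8),
    "GISS-E2-H-CC": (139.0, 468.5),
    "GISS-E2-R":    (382.4, 237.7),
    "HadGEM2-ES":   (50.0, 575.4),
    "INMCM4":       (0.0, 2.9),
}
CRIT_NO_BREAK, CRIT_BREAK = 41.53, 50.48

passing = [m for m, (nb, wb) in scores.items()
           if nb <= CRIT_NO_BREAK and wb <= CRIT_BREAK]
print("Runs not rejected at 5% in either case:", passing)  # -> ['INMCM4']
```

Only INM-CM4 stays below both thresholds, consistent with the discussion above.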


Comparing observed trends to those predicted by models over the past 60 years reveals a clear and significant tendency on the part of models to overstate warming. All 102 CMIP5 model runs warm faster than observations, in most individual cases the discrepancy is significant, and on average the discrepancy is significant. The test of trend equivalence rejects whether or not we include a break at 1979 for the PCS, though the rejections are stronger when we control for its influence. Measures of series divergence are centered at a positive mean and the entire distribution is above zero. While the observed analogue exhibits a warming trend over the test interval it is significantly smaller than that shown in models, and the difference is large enough to reject the null hypothesis that models represent it correctly, within the bounds of random uncertainty.


The reference to Clutz (2015) is the post Temperatures According to Climate Models

See also: 2018 Update: Best Climate Model INMCM5

On Thermodynamic Climate Modelling



Some years ago I wrote a post called Climate Thinking Out of the Box (reprinted later on) which was prompted by a conclusion from Lucarini et al. 2014:

“In particular, it is not obvious, as of today, whether it is more efficient to approach the problem of constructing a theory of climate dynamics starting from the framework of hamiltonian mechanics and quasi-equilibrium statistical mechanics or taking the point of view of dissipative chaotic dynamical systems, and of non-equilibrium statistical mechanics, and even the authors of this review disagree. The former approach can rely on much more powerful mathematical tools, while the latter is more realistic and epistemologically more correct, because, obviously, the climate is, indeed, a non-equilibrium system.”

Now we have a publication discussing progress in applying the latter approach, using thermodynamic concepts in the effort to model climate processes. The article is A new diagnostic tool for water, energy and entropy budgets in climate models by Valerio Lembo, Frank Lunkeit, and Valerio Lucarini, February 14, 2019. Overview in italics with my bolds.

Abstract: This work presents a novel diagnostic tool for studying the thermodynamics of the climate system, with a wide range of applications from sensitivity studies to model tuning. It includes a number of modules for assessing the internal energy budget, the hydrological cycle, the Lorenz Energy Cycle and the material entropy production, respectively.

The routine receives as inputs energy fluxes at the surface and at the Top-of-Atmosphere (TOA), for the computation of energy budgets at the TOA, at the surface, and in the atmosphere as a residual. Meridional enthalpy transports are also computed from the divergence of the zonal mean energy budget fluxes; the location and intensity of peaks in the two hemispheres are then provided as outputs. Rainfall, snowfall and latent heat fluxes are received as inputs for computing the water mass and latent energy budgets. If a land-sea mask is provided, the required quantities are separately computed over continents and oceans. The diagnostic tool also computes the Lorenz Energy Cycle (LEC) and its storage/conversion terms as annual mean global and hemispheric values.
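The bookkeeping in that paragraph – the atmospheric budget obtained as a residual of the TOA and surface budgets – can be sketched as follows. The flux values are illustrative round numbers, not output of the actual ESMValTool diagnostic:

```python
# Energy budgets as described above: the atmospheric budget is the
# residual of the TOA and surface budgets. Values are illustrative
# global means in W m-2 (positive downward), not real diagnostics.
toa_net = 0.9       # net downward flux at the top of the atmosphere
surface_net = 0.7   # net downward flux at the surface

atmosphere_net = toa_net - surface_net
print(f"TOA: {toa_net} W m-2, surface: {surface_net} W m-2, "
      f"atmosphere (residual): {atmosphere_net:.1f} W m-2")
```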

In order to achieve this, one needs to provide as input three-dimensional daily fields of horizontal wind velocity and temperature in the troposphere. Two methods have been implemented for the computation of the material entropy production, one relying on the convergence of radiative heat fluxes in the atmosphere (indirect method), one combining the irreversible processes occurring in the climate system, particularly heat fluxes in the boundary layer, the hydrological cycle and the kinetic energy dissipation as retrieved from the residuals of the LEC.

A version of the diagnostic tool is included in the Earth System Model eValuation Tool (ESMValTool) community diagnostics, in order to assess the performance of the forthcoming CMIP6 model simulations. The aim of this software is to provide a comprehensive picture of the thermodynamics of the climate system as reproduced in state-of-the-art coupled general circulation models. This can prove useful for better understanding anthropogenic and natural climate change, paleoclimatic variability, and climatic tipping points.

Energy: Rather than a proxy of a changing climate, surface temperatures and precipitation changes should be better viewed as a consequence of a non-equilibrium steady state system which is responding to a radiative energy imbalance through a complex interaction of feedbacks. A changing climate, under the effect of an external transient forcing, can only be properly addressed if the energy imbalance, and the way it is transported within the system and converted into different forms is taken into account. The models’ skill to represent the history of energy and heat exchanges in the climate system has been assessed by comparing numerical simulations against available observations, where available, including the fundamental problem of ocean heat uptake.

Heat Transport: In order to understand how heat is transported by the geophysical fluids, one should clarify what sets them into motion. We focus here on the atmosphere. A comprehensive view of the energetics fuelling the general circulation is given by the Lorenz Energy Cycle (LEC) framework. This provides a picture of the various processes responsible for conversion of available potential energy (APE), i.e. the excess of potential energy with respect to a state of thermodynamic equilibrium, into kinetic energy and dissipative heating. Under stationary conditions, the dissipative heating exactly equals the mechanical work performed by the atmosphere. In other words, the LEC formulation constrains the atmosphere to the first law of thermodynamics, and the system as a whole can be seen as a pure thermodynamic heat engine under dissipative non-equilibrium conditions.

Water: On one hand the energy budget is relevantly affected by semi-empirical formulations of the water vapor spectrum, on the other hand the energy budget influences the moisture budget by means of uncertainties in aerosol-cloud interactions and mechanisms of tropical deep convection. A global scale evaluation of the hydrological cycle, both from a moisture and energetic perspective, is thus considered an integral part of an overall diagnostics for the thermodynamics of climate system.

Entropy: From a macroscopic point of view, "material entropy production" usually denotes the entropy produced by the geophysical fluids in the climate system, related not to the properties of the radiative fields but to the irreversible processes associated with the motion of these fluids. Mainly, this has to do with phase changes and water vapor diffusion. Lucarini (2009) underlined the link between entropy production and the efficiency of the climate engine, which was then used to understand climatic tipping points, in particular the snowball/warm Earth critical transition, to define a wider class of climate response metrics, and to study planetary circulation regimes. A constraint has also been proposed on the entropy production of the atmospheric heat engine, motivated by the emerging importance of non-viscous processes in a warming climate.

The goal here is to look at models through the lens of their dynamics and thermodynamics, in view of the ideas about complex non-equilibrium systems enunciated above. The metrics we propose here are based on the analysis of the energy and water budgets and transports, of the energy transformations, and of the entropy production.

Previous Post: Climate Thinking Out of the Box 


It seems that climate modelers are dealing with a quandary: How can we improve on the unsatisfactory results from climate modeling?

Shall we:
A. Continue tweaking models using classical maths, though they depend on climate being in quasi-equilibrium; or,
B. Start over from scratch, applying non-equilibrium maths to the turbulent climate, though this branch of maths is immature, with limited expertise.

In other words, we are confident in classical maths, but does climate have features that disqualify it from their application? We are confident that non-equilibrium maths were developed for systems such as the climate, but are these maths robust enough to deal with such a complex reality?

It appears that some modelers are coming to grips with the turbulent quality of climate, driven by convection dominating heat transfer in the lower troposphere. Heretofore, models inserted a parameter for energy loss through convection and then proceeded to model the system as a purely radiative dissipative system. Recently, it seems that some modelers are striking out in a new, possibly more fruitful direction. Herbert et al. 2013 is one example, exploring the paradigm of non-equilibrium steady states (NESS). Such attempts are open to criticism from a classical position, but may lead to a breakthrough for climate modeling.

That is my layman’s POV. Here is the issue stated by practitioners, more elegantly with bigger words:

“In particular, it is not obvious, as of today, whether it is more efficient to approach the problem of constructing a theory of climate dynamics starting from the framework of hamiltonian mechanics and quasi-equilibrium statistical mechanics or taking the point of view of dissipative chaotic dynamical systems, and of non-equilibrium statistical mechanics, and even the authors of this review disagree. The former approach can rely on much more powerful mathematical tools, while the latter is more realistic and epistemologically more correct, because, obviously, the climate is, indeed, a non-equilibrium system.”

Lucarini et al 2014

Full paper (PDF): 1311.1190.pdf

Here’s how Herbert et al address the issue of a turbulent, non-equilibrium atmosphere. Their results show that convection rules in the lower troposphere and direct warming from CO2 is quite modest, much less than current models project.

“Like any fluid heated from below, the atmosphere is subject to vertical instability which triggers convection. Convection occurs on small time and space scales, which makes it a challenging feature to include in climate models. Usually sub-grid parameterizations are required. Here, we develop an alternative view based on a global thermodynamic variational principle. We compute convective flux profiles and temperature profiles at steady-state in an implicit way, by maximizing the associated entropy production rate. Two settings are examined, corresponding respectively to the idealized case of a gray atmosphere, and a realistic case based on a Net Exchange Formulation radiative scheme. In the second case, we are also able to discuss the effect of variations of the atmospheric composition, like a doubling of the carbon dioxide concentration.

The response of the surface temperature to the variation of the carbon dioxide concentration — usually called climate sensitivity — ranges from 0.24 K (for the sub-arctic winter profile) to 0.66 K (for the tropical profile), as shown in table 3. To compare these values with the literature, we need to be careful about the feedbacks included in the model we wish to compare to. Indeed, if the overall climate sensitivity is still a subject of debate, this is mainly due to poorly understood feedbacks, like the cloud feedback (Stephens 2005), which are not accounted for in the present study.”

Abstract from:
Vertical Temperature Profiles at Maximum Entropy Production with a Net Exchange Radiative Formulation
Herbert et al 2013

Full paper (PDF): 1301.1550.pdf
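The maximum-entropy-production idea behind Herbert et al. can be illustrated with a toy two-box model in the spirit of Lorenz et al. [2001]. This is a hypothetical sketch, not the authors' code: the linearized emission law, the absorbed solar fluxes and the box geometry are all illustrative assumptions.

```python
import numpy as np

# Two boxes: box 1 (tropics) and box 2 (poles) absorb different amounts of
# sunlight, emit infrared linearly (A + B*T), and exchange a meridional heat
# flux F. The MEP hypothesis selects the F that maximizes the entropy
# produced by that transport.
A, B = 200.0, 2.0        # linearized IR emission: W/m^2 and W/m^2/K (illustrative)
S1, S2 = 300.0, 170.0    # absorbed solar flux in each box, W/m^2 (illustrative)

def temperatures(F):
    """Steady-state box temperatures (K) for a given transport F (W/m^2)."""
    T1 = (S1 - A - F) / B + 273.15   # energy balance: S1 = A + B*T1 + F
    T2 = (S2 - A + F) / B + 273.15   # energy balance: S2 = A + B*T2 - F
    return T1, T2

def entropy_production(F):
    """Entropy produced by moving heat F from warm box 1 to cold box 2."""
    T1, T2 = temperatures(F)
    return F * (1.0 / T2 - 1.0 / T1)

# Scan the transport strength and pick the MEP state
F_grid = np.linspace(0.0, 64.0, 6401)
sigma = np.array([entropy_production(F) for F in F_grid])
F_mep = F_grid[np.argmax(sigma)]
T1, T2 = temperatures(F_mep)
print(f"MEP transport: {F_mep:.1f} W/m^2, box temperatures: {T1:.1f} K / {T2:.1f} K")
```

Entropy production vanishes both at zero transport and at the transport that equalizes the two temperatures; the MEP state sits in between, which is the selection principle Herbert et al. apply (with full radiative schemes) to vertical convective profiles.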

In this modeling paradigm, we have to move from a linear radiative Energy Budget to a dynamic steady state Entropy Budget. As Ozawa et al explains, this is a shift from current modeling practices, but is based on concepts going back to Carnot.

“Entropy of a system is defined as a summation of “heat supplied” divided by its “temperature” [Clausius, 1865]. Heat can be supplied by conduction, by convection, or by radiation. The entropy of the system will increase by equation (1) no matter which way we may choose. When we extract the heat from the system, the entropy of the system will decrease by the same amount. Thus the entropy of a diabatic system, which exchanges heat with its surrounding system, can either increase or decrease, depending on the direction of the heat exchange. This is not a violation of the second law of thermodynamics since the entropy increase in the surrounding system is larger.

Carnot regarded the Earth as a sort of heat engine, in which a fluid like the atmosphere acts as working substance transporting heat from hot to cold places, thereby producing the kinetic energy of the fluid itself. His general conclusion about heat engines is that there is a certain limit for the conversion rate of the heat energy into the kinetic energy and that this limit is inevitable for any natural systems including, among others, the Earth’s atmosphere.

Thus there is a flow of energy from the hot Sun to cold space through the Earth. In the Earth’s system the energy is transported from the warm equatorial region to the cool polar regions by the atmosphere and oceans. Then, according to Carnot, a part of the heat energy is converted into the potential energy which is the source of the kinetic energy of the atmosphere and oceans.

Thus it is likely that the global climate system is regulated at a state with a maximum rate of entropy production by the turbulent heat transport, regardless of the entropy production by the absorption of solar radiation. This result is also consistent with a conjecture that entropy of a whole system connected through a nonlinear system will increase along a path of evolution, with a maximum rate of entropy production among a manifold of possible paths [Sawada, 1981]. We shall resolve this radiation problem in this paper by providing a complete view of dissipation processes in the climate system in the framework of an entropy budget for the globe.

The hypothesis of the maximum entropy production (MEP) thus far seems to have been dismissed by some as coincidence. The fact that the Earth’s climate system transports heat to the same extent as a system in a MEP state does not prove that the Earth’s climate system is necessarily seeking such a state. However, the coincidence argument has become harder to sustain now that Lorenz et al. [2001] have shown that the same condition can reproduce the observed distributions of temperatures and meridional heat fluxes in the atmospheres of Mars and Titan, two celestial bodies with atmospheric conditions and radiative settings very different from those of the Earth.”

Hisashi Ozawa et al 2003

Full paper (PDF): Ozawa.pdf
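Ozawa's Clausius/Carnot bookkeeping can be sketched numerically. The reservoir temperatures and heat flow below are illustrative round numbers, not values from the paper:

```python
# Clausius entropy bookkeeping and the Carnot limit for a planetary heat
# engine: heat Q flows from a warm reservoir (tropics) to a cold one (poles).
T_hot, T_cold = 300.0, 250.0   # illustrative reservoir temperatures, K
Q = 1.0e15                     # illustrative heat transported, J

dS_hot = -Q / T_hot            # warm reservoir loses entropy Q/T_hot
dS_cold = Q / T_cold           # cold reservoir gains the larger amount Q/T_cold
dS_total = dS_hot + dS_cold    # net production: must be >= 0 (second law)

eta_carnot = 1.0 - T_cold / T_hot   # Carnot bound on conversion efficiency
W_max = eta_carnot * Q              # at most this much heat becomes kinetic energy
print(f"net entropy production: {dS_total:.3e} J/K")
print(f"Carnot limit: {eta_carnot:.1%} -> at most {W_max:.2e} J of work")
```

This is Carnot's "certain limit for the conversion rate" in miniature: the cold reservoir gains more entropy than the warm one loses, and only the fraction 1 − T_cold/T_hot of the transported heat can ever drive the winds and currents.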

Climate Models Cover Up

Making Climate Models Look Good

Clive Best dove into climate model temperature projections and discovered how the data can be manipulated to make model projections look closer to measurements than they really are. His first post was A comparison of CMIP5 Climate Models with HadCRUT4.6, January 21, 2019. Excerpts in italics with my bolds.

Overview: Figure 1. shows a comparison of the latest HadCRUT4.6 temperatures with CMIP5 models for Representative Concentration Pathways (RCPs). The temperature data lies significantly below all RCPs, which themselves only diverge after ~2025.

Modern climate models originate from General Circulation Models, which are used for weather forecasting. These simulate the 3D hydrodynamic flow of the atmosphere and ocean on the earth as it rotates daily on its tilted axis while orbiting the sun annually. The meridional flow of energy from the tropics to the poles generates convective cells, prevailing winds, ocean currents and weather systems. Energy must be balanced at the top of the atmosphere between incoming solar energy and outgoing infra-red energy. This balance depends on changes in solar heating, water vapour, clouds, CO2, ozone etc. This energy balance determines the surface temperature.

Weather forecasting models use live data assimilation to fix the state of the atmosphere in time and then extrapolate forward one or more days up to a maximum of a week or so. Climate models however run autonomously from some initial state, stepping far into the future assuming that they correctly simulate a changing climate due to CO2 levels, incident solar energy, aerosols, volcanoes etc. These models predict past and future surface temperatures, regional climates, rainfall, ice cover etc. So how well are they doing?

Fig 2. Global Surface temperatures from 12 different CMIP5 models run with RCP8.5

The disagreement on the global average surface temperature is huge – a spread of 4C. This implies that there must still be a problem in achieving overall energy balance at the TOA. Wikipedia tells us that the average temperature should be about 288K, or 15C. Despite this discrepancy in reproducing the net surface temperature, the model warming trends for RCP8.5 are similar.

Likewise, weather station measurements of temperature have changed with time and place, so they too do not yield a consistent absolute temperature average. The ‘solution’ to this problem is to use temperature ‘anomalies’ instead, relative to some fixed normal monthly period (baseline). I always use the same baseline as CRU: 1961-1990. Global warming is then measured by the change in such global average temperature anomalies. The implicit assumption is that nearby weather station and/or ocean measurements warm or cool coherently, so that the changes in temperature relative to the baseline can all be spatially averaged together. The usual example is that two nearby stations at different altitudes will have different temperatures but produce similar ‘anomalies’. A similar procedure is applied to the model results to produce model temperature anomalies. So how do they compare to the data?
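The two-stations example can be sketched directly. Everything below is synthetic: a hypothetical valley station and a hypothetical mountain station 1000 m higher, sharing the same warming signal:

```python
import numpy as np

# Two hypothetical nearby stations: B sits 1000 m above A, so it is about
# 6.5 C colder (standard lapse rate), yet both warm in step.
years = np.arange(1950, 2021)
warming = 0.015 * (years - 1950)   # shared warming signal, C (illustrative)
station_a = 15.0 + warming         # valley station, absolute temperature in C
station_b = station_a - 6.5        # mountain station, constant cold offset

# Anomalies relative to the CRU 1961-1990 baseline
base = (years >= 1961) & (years <= 1990)
anom_a = station_a - station_a[base].mean()
anom_b = station_b - station_b[base].mean()

# Absolute temperatures differ by 6.5 C, but the anomalies are identical,
# so the two records can be spatially averaged together.
print(np.allclose(anom_a, anom_b))   # True
```

Subtracting each station's own baseline mean removes the constant altitude offset, which is exactly why anomalies sidestep the 4C absolute-temperature spread among the models.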

Fig 4. Model comparisons to data 1950-2050

Figure 4 shows a close-up of the period 1950-2050. There is a large spread in model trends even within each RCP ensemble. The data fall below the bulk of model runs after 2005, except briefly during the recent El Niño peak in 2016. Figure 4 also shows that the data are now lower than the mean of every RCP; furthermore, we won’t be able to distinguish between RCPs until after ~2030.

Zeke Hausfather’s Tricks to Make the Models Look Good

Clive’s second post is Zeke’s Wonder Plot January 25,2019. Excerpts in italics with my bolds.

Zeke Hausfather, who works for Carbon Brief and Berkeley Earth, has produced a plot which shows almost perfect agreement between CMIP5 model projections and global temperature data. It is based on RCP4.5 models and a baseline of 1981-2010. First, here is his original plot.

I have reproduced his plot and essentially agree that it is correct. However, I also found some interesting quirks.

The apples to apples comparison (model SSTs blended with model land 2m temperatures) reduces the model mean by about 0.06C. Zeke has also smoothed out the temperature data by using a 12 month running average. This has the effect of exaggerating peak values as compared to using the annual averages.
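The running-average effect can be sketched with a synthetic monthly series. The warm event below is hypothetical, chosen to straddle a calendar-year boundary the way an El Niño peak can:

```python
import numpy as np

# Synthetic monthly anomaly series: flat at 0 C except for a 6-month warm
# event (+0.5 C) straddling a calendar-year boundary.
monthly = np.zeros(48)    # four years of monthly anomalies
monthly[21:27] = 0.5      # Oct of year 2 through Mar of year 3

annual = monthly.reshape(4, 12).mean(axis=1)                     # calendar-year means
running = np.convolve(monthly, np.ones(12) / 12, mode="valid")   # 12-month running mean

# The calendar split dilutes the event across two years (0.125 C each),
# while one running-mean window captures all six warm months (0.25 C).
print(annual.max(), running.max())   # 0.125 0.25
```

A calendar-year average can never center itself on a peak that crosses December, whereas a running mean always can, so the running-mean curve shows a higher maximum from the same underlying data.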

Effect of changing normalisation period. Cowtan & Way uses kriging to interpolate Hadcrut4.6 coverage into the Arctic and elsewhere.

Shown above is the result for a normalisation period of 1961-1990. First, note how the lowest two model projections now drop further down, while the data now seemingly lie below both the blended average (thick black) and the original CMIP average (thin black). HadCRUT4 2016 is now below the blended value.

This improved model agreement has nothing to do with the data itself; it is instead due to a reduction in the warming predicted by the models. So what exactly is meant by ‘blending’?

Measurements of global average temperature anomalies use weather stations on land and sea surface temperatures (SST) over the oceans. The land measurements are “surface air temperatures” (SAT), defined as the temperature 2m above ground level. The CMIP5 simulations, however, used SAT everywhere. The blended model projections use simulated SAT over land and TOS (temperature at surface) over oceans. This reduces all model predictions slightly, thereby marginally improving agreement with the data. See also Climate-lab-book.
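The blending arithmetic is simple enough to sketch. The anomaly values below are illustrative placeholders, not Cowtan's numbers; only the land/ocean fractions are standard:

```python
# Hypothetical sketch of the SAT/TOS "blending" effect on the global mean.
# Models output 2 m air temperature (SAT) everywhere, but observations use
# sea-surface temperature (TOS) over oceans, which warms slightly less.
land_frac, ocean_frac = 0.29, 0.71

sat_land = 1.20    # illustrative modeled land SAT anomaly, C
sat_ocean = 0.80   # illustrative modeled ocean SAT anomaly, C
tos_ocean = 0.74   # illustrative modeled TOS anomaly (slightly lower), C

sat_everywhere = land_frac * sat_land + ocean_frac * sat_ocean   # model convention
blended = land_frac * sat_land + ocean_frac * tos_ocean          # data convention
print(f"SAT everywhere: {sat_everywhere:.3f} C, blended: {blended:.3f} C, "
      f"difference: {sat_everywhere - blended:.3f} C")
```

With these assumed numbers the blended mean comes out about 0.04 C lower than SAT-everywhere, the same order as the ~0.06 C reduction Clive reports for the real ensemble; the full calculation applies this weighting grid cell by grid cell with land and ice masks.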

The detailed blending calculations were done by Kevin Cowtan, using a land mask and an ice mask to define where TOS and SAT should be used in forming the global average. I downloaded his python scripts and checked all the algorithms, and they look good to me. His results are based on the RCP8.5 ensemble.

The solid blue curve is the CMIP5 RCP4.5 ensemble average after blending. The dashed curve is the original.

Again the models mostly lie above the data after 1999.

This post is intended to demonstrate just how careful you must be when interpreting plots that seemingly demonstrate either full agreement of climate models with data, or else total disagreement.

In summary, Zeke Hausfather, writing for Carbon Brief, 1) used a clever choice of baseline, 2) made a favourable choice of RCP for the blended models, and 3) used a 12-month running average, and was thereby able to show an almost perfect agreement between data and models. His plot is 100% correct. However, exactly the same data plotted with a different baseline and using annual values (exactly like those in the models) instead of 12-month running averages show that the models still lie consistently above the data. I know which one I think best represents reality.
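The baseline effect alone can be isolated with synthetic linear trends. The slopes below are illustrative assumptions, chosen only so that the "model" warms somewhat faster than the "data":

```python
import numpy as np

# When the model warms faster than the data, anomalies computed against a
# later baseline are forced into agreement over that recent period, which
# shrinks the apparent model-data gap at the end of the record.
years = np.arange(1950, 2019)
data = 0.015 * (years - 1950)    # "observations": 0.15 C/decade (illustrative)
model = 0.020 * (years - 1950)   # "model mean":  0.20 C/decade (illustrative)

def anomaly(series, lo, hi):
    """Anomalies of a series relative to its own mean over [lo, hi]."""
    base = (years >= lo) & (years <= hi)
    return series - series[base].mean()

# Model-minus-data gap in 2018 under each baseline choice
gap_1961_1990 = (anomaly(model, 1961, 1990) - anomaly(data, 1961, 1990))[-1]
gap_1981_2010 = (anomaly(model, 1981, 2010) - anomaly(data, 1981, 2010))[-1]
print(f"gap in 2018: {gap_1961_1990:.4f} C (1961-1990 baseline) "
      f"vs {gap_1981_2010:.4f} C (1981-2010 baseline)")
```

Shifting the baseline 20 years later nearly halves the apparent 2018 gap in this toy case, without touching either series: the baseline choice only moves each curve vertically, but that is enough to change how well they seem to agree.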

Moral to the Story:
There are lots of ways to make computer models look good. Try not to be distracted.