Dissenters from the catastrophe consensus on warming are worth listening to.
Stop with all the existential-crisis talk. President Biden said, “Climate change is literally an existential threat to our nation and to the world.” Defense Secretary Lloyd Austin also talks about the “existential threat” of climate change. National security adviser Jake Sullivan identifies an “accelerating climate crisis” as one reason for a “new consensus” for government picking winners and losers in the economy. Be wary of those touting consensus.
But what if the entire premise is wrong? What if the Earth is self-healing? Before you hurl the “climate denier” invective at me, let’s think this through. Earth has been around for 4.5 billion years, living organisms for 3.7 billion. Surely, an enlightened engineer might think, the planet’s creator built in a mechanism to regulate heat, or we wouldn’t still be here to worry about it.
The theory of climate change is that excess carbon dioxide and methane trap the sun’s radiation in the atmosphere, and these man-made greenhouse gases reflect more of that heat back to Earth, warming the planet. Pretty simple. Eventually, we reach a tipping point when positive feedback loops form—less ice to reflect sunlight, warm oceans that can no longer absorb carbon dioxide—and then we fry, existentially. So lose those gas stoves and carbon-spewing Suburbans.
Note that nearly half of incoming solar energy is not absorbed by Earth’s surface.
But nothing is simple. What about negative feedback loops? Examples: human sweat and its cooling evaporation, or our irises dilating or constricting based on the amount of light coming in. Clouds, which can block the sun or trap its radiation, are rarely mentioned in climate talk.
Why? Because clouds are notoriously difficult to model in climate simulations. Steven Koonin, a New York University professor and author of “Unsettled,” tells me that today’s computing power can typically model the Earth’s atmosphere in grids 60 miles on a side. Pretty coarse. So, Mr. Koonin says, “the properties of clouds in climate models are often adjusted or ‘tuned’ to match observations.” Tuned!
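For scale, a rough back-of-the-envelope sketch (my numbers, not Mr. Koonin’s): at a grid spacing of roughly 60 miles (about 100 km), a single model cell covers on the order of 10,000 square kilometers, while an individual cumulus cloud is around a kilometer across. Anything smaller than a cell cannot be resolved and must be parameterized.

```python
# Back-of-the-envelope arithmetic on grid resolution (illustrative numbers):
# a ~60-mile (~100 km) grid cell versus a ~1 km cumulus cloud.

cell_km = 100.0   # approximate grid spacing, ~60 miles on a side
cloud_km = 1.0    # typical horizontal scale of a cumulus cloud

# Roughly 10,000 cloud-sized patches fit inside one grid cell, which is
# why cloud behavior must be "tuned" rather than simulated directly.
print((cell_km / cloud_km) ** 2)
```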
Last month the coddling modelers at the United Nations’ World Meteorological Organization stated that “warming El Niño” and “human-induced climate change” mean there is a “66% likelihood that annual average global temperatures will exceed the threshold of 1.5 degrees Celsius above preindustrial levels by 2027.” Notice that El Niño is mentioned first.
Richard Lindzen, a professor at the Massachusetts Institute of Technology and lead author of an early Intergovernmental Panel on Climate Change report, told me, “Temperatures in the tropics remain relatively constant compared with changes in the tropics-to-pole temperatures. The tropics-polar difference is about 40 degrees Celsius today but was 20 degrees during the warm Eocene Epoch and 60 degrees during Ice Ages.” This difference has more to do with changes in the Earth’s rotation, like wobbling, than anything else. According to Mr. Lindzen, this effect is some 70 times as great as human-made greenhouse gases.
OK, back to clouds. Cumulus clouds, the puffy ones often called thunderclouds, are an important convection element, carrying heat from the Earth’s surface to the upper atmosphere. Above them are high-altitude cirrus clouds, which can reflect heat back toward the surface. A 2001 Lindzen paper, however, suggests that high-level cirrus clouds in the tropics dissipate as temperatures rise. These thinning cirrus clouds allow more heat to escape. It’s called the Iris Effect, like a temperature-controlled vent opener for an actual greenhouse so you don’t (existentially) fry your plants. Yes, Earth has a safety valve.
Mr. Lindzen says, “This more than offsets the effect of greenhouse gases.” As you can imagine, theories debunking the climate consensus are met with rebuttals and more papers. Often, Mr. Lindzen points out, critics, “to maintain the warming narrative, adjust their models, especially coverage and reflection or albedo of clouds in the tropics.” More tuning.
A 2021 paper co-authored by Mr. Lindzen shows strong support for an Iris Effect. Maybe Earth really was built by an engineer. Proof? None other than astronomer Carl Sagan described the Faint Young Sun Paradox: 2.5 billion years ago the sun’s energy was 30% less, but Earth’s climate was basically the same as today’s. Cirrus clouds likely formed to trap heat, a closed Iris and a negative feedback loop at work.
Figure 2: At higher temperatures there are more thunderstorms over the ocean, and the area without high-level clouds (dry and clear) expands further, allowing more heat to radiate off into space (strong OLR) than when temperatures are lower, i.e., when the iris is smaller. Source: Figure 1 from MS15.
In a 2015 Nature Geoscience paper, Thorsten Mauritsen and Bjorn Stevens at the Max Planck Institute for Meteorology reran climate models using the Iris Effect and found them better at matching historic observations. No need for tuning. Wouldn’t it be nice if the U.N. used realistic cloud and climate models?
Earth has warmed, but I’m convinced negative feedback loops will save us. Dismissing the Iris Effect or detuning it isn’t science. Sadly, climate science has morphed into climate rhetoric. And note, Treasury Secretary Janet Yellen explained in April that green spending “is, at its core, about turning the climate crisis into an economic opportunity.” Hmmm. “Catastrophic,” “existential” and “crisis” are cloudy thinking. Negative feedback is welcome. Dissenters from the catastrophe consensus on warming are worth listening to.
Footnote: Phanerozoic Temperatures
Maurice Lavigne commented that the best evidence of our self-regulating climate is found in the Phanerozoic temperature record. I had to find out what he meant, which led me to discover this:
Cosmic radiation and temperature through the Phanerozoic, according to Nir Shaviv and Jan Veizer. The left vertical axis represents temperature as deviations from the present temperature. The right vertical axis shows cosmic radiation as multiples of today’s radiation (today’s radiation is set to 1). Note that the right scale is inverted so that strong radiation can be compared with low temperature. The red curve represents temperature and the blue cosmic radiation; the two appear to correlate very well. The horizontal axis represents time through the Phanerozoic’s more than 500 million years. Note that the Carboniferous is divided into “Mississippian” and “Pennsylvanian”; that is an American custom, referring to different types of coal from the coal mines.
The image above comes from Christopher Scotese’s PALEOMAP project, showing the dramatic temperature and climate shifts, hothouse to icehouse and everything in between. Finally, a graph showing these temperature cycles unrelated to CO2 concentrations.
There are various answers to the title question. IPCC doctrine asserts that not only does more CO2 induce warming, it also triggers a water-vapor positive feedback that triples the warming. Many other scientists, including some skeptical of any climate “emergency,” agree some CO2 warming is likely but doubt the positive feedback, with the possibility that its sign is wrong. Still others point out that increases of CO2 lag temperature increases on all time scales, from ice core data to last month’s observations. CO2 can hardly be claimed to cause warming when CO2 changes do not precede the effect. [See Temps Cause CO2 Changes, Not the Reverse.]
Below is a post describing how CO2 warming is not only lacking, but how more CO2 actually increases planetary cooling. The mathematical analysis reveals a fundamental error made in the past and only now subjected to correction.
In 1896, Svante Arrhenius proposed a model predicting that increased concentrations of carbon dioxide and water vapour in the atmosphere would result in a warming of the planet. In his model, the warming effects of atmospheric carbon dioxide and water vapour in preventing heat flow from the Earth’s surface (now known as the “Greenhouse Effect”) are counteracted by a cooling effect whereby the same gases are responsible for the radiation of heat to space from the atmosphere. His analysis found that there was a net warming effect, and his model has remained the foundation of the Enhanced Greenhouse Effect—Global Warming hypothesis.
This paper attempts to quantify the parameters in his equations, but on evaluation his model cannot produce thermodynamic equilibrium. A modified model is proposed which reveals that increased atmospheric emissivity enhances the ability of the atmosphere to radiate heat to space, overcoming the warming effect and resulting in a net cooling of the planet. In consideration of this result, there is a need for greenhouse effect—global warming models to be revised.
1. Introduction
In 1896 Arrhenius proposed that changes in the levels of “carbonic acid” (carbon dioxide) in the atmosphere could substantially alter the surface temperature of the Earth. This has come to be known as the greenhouse effect. Arrhenius’ paper, “On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground”, was published in Philosophical Magazine. Arrhenius concludes:
“If the quantity of carbonic acid in the air should sink to one-half its present percentage, the temperature would fall by about 4˚; a diminution to one-quarter would reduce the temperature by 8˚. On the other hand, any doubling of the percentage of carbon dioxide in the air would raise the temperature of the earth’s surface by 4˚; and if the carbon dioxide were increased fourfold, the temperature would rise by 8˚ ” [ 2 ].
It is interesting to note that Arrhenius considered this greenhouse effect a positive thing if we were to avoid the ice ages of the past. Nevertheless, Arrhenius’ theory has become the foundation of the enhanced greenhouse effect―global warming hypothesis in the 21st century. His model remains the basis for most modern energy equilibrium models.
2. Arrhenius’ Energy Equilibrium Model
Arrhenius proposed a two-part energy equilibrium model in which the atmosphere radiates the same amount of heat to space as it receives and, likewise, the ground transfers the same amount of heat to the atmosphere and to space as it receives. The model contains the following assumptions:
• Heat conducted from the center of the Earth is neglected.
• Heat flow by convection between the surface and the atmosphere and throughout the atmosphere remains constant.
• Cloud cover remains constant. This is questionable but allows the model to be quantified.
Part 1: Equilibrium of the Air
The balance of heat flow to and from the air (or atmosphere) has four components as shown in Figure 1. The arrow labelled S1 indicates the solar energy absorbed by the atmosphere. R indicates the infra-red radiation from the surface of the Earth to the atmosphere, M is the quantity of heat “conveyed” to the atmosphere by convection and Q1 represents heat loss from the atmosphere to space by radiation. All quantities are measured in terms of energy per unit area per unit time (W/m2).
Figure 1. Model of the energy balance of the atmosphere. The heat received by the atmosphere ( R+M+S1 ) equals the heat lost to space (Q1). In this single layer atmospheric model, the absorbing and emitting layers are one and the same.
Part 2: Thermal Equilibrium of the Ground
In the second part of his model, Arrhenius describes the heat flow equilibrium at the “ground” or surface of the Earth. There are four contributions to the surface heat flow as shown in Figure 2. S2 is the solar energy absorbed by the surface, R is the infra-red radiation emitted from the surface and transferred to the atmosphere, N is the heat conveyed to the atmosphere by convection and Q2 is the heat radiated to space from the surface. Note: Here Arrhenius uses the term N for the convective heat flow. It is equivalent to the term M used in the air equilibrium model.
Figure 2. The energy balance at the surface of the Earth. The energy received by the ground is equal to the energy lost.
3. Finding the Temperature of the Earth
Arrhenius combined these equations and, by eliminating the temperature of the atmosphere which according to Arrhenius “has no considerable interest”, he arrived at the following relationship:
ΔTg is the expected change in the temperature of the Earth for a change in atmospheric emissivity from ε1 to ε2. Arrhenius determined that the current transparency of the atmosphere was 0.31 and, therefore, the emissivity/absorptivity ε1 = 0.69. The current mean temperature of the surface of the Earth can be assumed to be To = 288 K.
Figure 3. Arrhenius’ model is used to determine the mean surface temperature of the Earth as a function of atmospheric emissivity ε. For initial conditions, ε = 0.69 and the surface temperature is 288 K. An increase in atmospheric emissivity produces an increase in the surface temperature of the Earth.
Arrhenius estimated that a doubling of carbon dioxide concentration in the atmosphere would produce a change in emissivity from 0.69 to 0.78, raising the temperature of the surface by approximately 6 K. This value would be considered high by modern climate researchers; however, Arrhenius’ model has become the foundation of the greenhouse-global warming theory today. Arrhenius made no attempt to quantify the specific heat flow values in his model. At the time of his paper there was little quantitative data available relating to heat flow for the Earth.
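As a rough illustration of the direction and size of that result, here is the standard single-slab textbook greenhouse relation, Tg = Te(2/(2 − ε))^(1/4). This is a stand-in of mine, not the relationship from the paper (which is not reproduced above), but with an effective emission temperature Te = 255 K it gives a warming of about 5 K for the emissivity change 0.69 to 0.78, in the neighborhood of Arrhenius’ 6 K.

```python
# Illustrative only: textbook single-slab greenhouse relation, used here
# as a stand-in for Arrhenius' (unreproduced) equation. Te = 255 K is the
# standard effective emission temperature of the Earth.

Te = 255.0  # K, effective (no-atmosphere) emission temperature

def Tg(eps):
    """Surface temperature in the single-slab model for emissivity eps."""
    return Te * (2.0 / (2.0 - eps)) ** 0.25

# Warming for the emissivity step 0.69 -> 0.78: about 5.1 K
print(round(Tg(0.78) - Tg(0.69), 1))
```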
4. Evaluation of Arrhenius’ Model under Present Conditions
More recently, Kiehl and Trenberth (K & T) [ 3 ] and others have quantified the heat flow values used in Arrhenius’ model. K & T’s data are summarised in Figure 4.
The reflected solar radiation, which plays no part in the energy balance described in this model, is ignored. R is the net radiative transfer from the ground to the atmosphere derived from K & T’s diagram. The majority of the heat radiated to space originates from the atmosphere (Q1 > Q2). And the majority of the heat lost from the ground is by means of convection to the atmosphere (M > R + Q2).
Figure 4. Model of the mean energy budget of the earth as determined by Kiehl and Trenberth.
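As a sanity check, the two balance equations can be verified numerically against the widely cited Kiehl & Trenberth (1997) figures. The decomposition of R as surface emission absorbed by the atmosphere minus back radiation is my reading of “net radiative transfer,” not a value quoted in the text.

```python
# Numeric check of the two balance equations using Kiehl & Trenberth (1997)
# global-mean energy budget values (all in W/m^2).

S1 = 67.0           # solar absorbed by the atmosphere
S2 = 168.0          # solar absorbed by the surface
M  = 24.0 + 78.0    # thermals + evapotranspiration = 102 (convective flow)
Q2 = 40.0           # surface radiation escaping through the atmospheric window
Q1 = 235.0 - Q2     # outgoing longwave from the atmosphere (total OLR 235)
R  = 350.0 - 324.0  # surface IR absorbed by atmosphere minus back radiation

print(R + M + S1, "=", Q1)  # atmosphere: 195.0 = 195.0
print(R + M + Q2, "=", S2)  # ground:     168.0 = 168.0
```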
Q2 = (1 − ε)σνTg^4   (5)
Substituting ε = 0.567, ν = 1.0 and Tg = 288 K we get: Q2 = 149.2 W/m2.
Using Arrhenius’ value of 0.69 for the atmospheric emissivity, Q2 = 120.9 W/m2.
Both values are significantly more than the 40 W/m2 determined by K & T.
The equation will not balance; something is clearly wrong.
Figure 5 illustrates the problem.
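A minimal sketch of the arithmetic behind the ε = 0.69 figure quoted above, with ν = 1 assumed:

```python
# Sketch of the Section 4 consistency check: apply Equation (5) with
# Arrhenius' emissivity and compare against K & T's 40 W/m^2 window flux.

sigma = 5.670e-8   # W/(m^2 K^4), Stefan-Boltzmann constant
Tg = 288.0         # K, assumed mean surface temperature
nu = 1.0           # as stated in the text

def Q2(eps):
    """Surface radiation to space per Equation (5)."""
    return (1.0 - eps) * sigma * nu * Tg**4

print(round(Q2(0.69), 1))  # ~120.9 W/m^2, matching the value quoted above
print(Q2(0.69) / 40.0)     # ~3x the 40 W/m^2 K & T attribute to the window
```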
Equation (5) is based on the Stefan-Boltzmann law, an empirical relationship that describes the amount of radiation from a hot surface passing through a vacuum to a region of space at a temperature of absolute zero. This is clearly not the case for radiation passing through the Earth’s atmosphere, and as a result the amount of heat lost by radiation has been grossly overestimated.
No amount of adjusting parameters will allow this relationship to produce sensible quantities and the required net heat flow of 40 W/m2.
This error affects the equilibrium heat flow values in Arrhenius’ model, and the model is not able to produce a reasonable approximation of present-day conditions, as shown in Table 1. In particular, the convective heat flow takes on very different values in the two parts of the model; the values M and N in the table should be equivalent.
5. A New Energy Equilibrium Model
A modified model is proposed which will determine the change in surface temperature of the Earth caused by a change in the emissivity of the atmosphere (as would occur when greenhouse gas concentrations change). The model incorporates the following ideas:
1) The total heat radiated from the Earth (Q1 + Q2) will remain constant and equal to the total solar radiation absorbed by the Earth (S1 + S2).
2) Convective heat flow M remains constant. Convective heat flow between two regions depends on their temperature difference, as expressed by Newton’s Law of Cooling. The temperature difference between the atmosphere and the ground is maintained at 8.9 K (see Equation 7(a) and the sketch after this list). M = 102 W/m2 (K & T).
3) A surface temperature of 288 K and an atmospheric emissivity of 0.567 (Equation (7b)) is assumed for initial or present conditions.
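A small numeric aside on assumption 2 (my arithmetic, not the paper’s): holding the ground-air difference at 8.9 K with M = 102 W/m2 implies an effective convective transfer coefficient of about 11.5 W/m2 per K under Newton’s Law of Cooling.

```python
# Newton's Law of Cooling: M = h * (Tg - Ta). With M and the temperature
# difference both fixed, the implied coefficient h follows directly.

M = 102.0   # W/m^2, convective heat flow (K & T)
dT = 8.9    # K, assumed ground-atmosphere temperature difference

h = M / dT
print(round(h, 1))  # ~11.5 W/(m^2 K); holding dT fixed keeps M constant
```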
Equation (9) represents the new model relating the emissivity of the atmosphere ε to the surface temperature Tg. Results from this model are shown in Table 2. The table shows the individual heat flow quantities and the temperature of the surface of the Earth that is required to maintain equilibrium:
The table shows that as the atmospheric emissivity ε is increased, less heat flows from the Earth’s surface to space (Q2 decreases), as would be expected. As well, more heat is radiated to space from the atmosphere (Q1 increases), also as expected. The total energy radiated to space is Q1 + Q2 = 235 W/m2. A plot of the resultant surface temperature Tg versus the atmospheric emissivity ε is shown below in Figure 6.
Figure 6. Plot of the Earth’s mean surface temperature as a function of the atmospheric emissivity. This model predicts that the temperature of the Earth will decrease as the emissivity of the atmosphere increases.
6. Conclusion
Arrhenius identified the fact that the emissivity/absorptivity of the atmosphere increased with increasing greenhouse gas concentrations and this would affect the temperature of the Earth. He understood that infra-red active gases in the atmosphere contribute both to the absorption of radiation from the Earth’s surface and to the emission of radiation to space from the atmosphere. These were competing processes; one trapped heat, warming the Earth; the other released heat, cooling the Earth. He derived a relationship between the surface temperature and the emissivity of the atmosphere and deduced that an increase in emissivity led to an increase in the surface temperature of the Earth.
However, his model is unable to produce sensible results for the heat flow quantities as determined by K & T and others. In particular, his model and all similar recent models, grossly exaggerate the quantity of radiative heat flow from the Earth’s surface to space. A new energy equilibrium model has been proposed which is consistent with the measured heat flow quantities and maintains thermal equilibrium. This model predicts the changes in the heat flow quantities in response to changes in atmospheric emissivity and reveals that Arrhenius’ prediction is reversed. Increasing atmospheric emissivity due to increased greenhouse gas concentrations will have a net cooling effect.
It is therefore proposed by the author that any attempt to curtail emissions of CO2 will have no effect in curbing global warming.
Summary:
If Stannard is right, then the unthinkable, inconvenient truth is: more CO2 cools, rather than warms, the planet. As noted before, we have enjoyed a modern warming period with the recovery of temperatures ending the Little Ice Age. But cold is the greater threat to human life and prosperity, as well as to the biosphere. Society’s priorities should be to ensure reliable, affordable energy and robust infrastructure to meet the demands of future cooling, which will eventually bring down CO2 concentrations in its wake.
Footnote:
A comment below refers to the cartoon image at the top, which was an older version of K & T. The author used the more recent version, which has slightly different numbers. Below is the actual model he analyzed:
I agree that these energy budgets oversimplify the real world; the author’s intention is not to correct the details but to show that the models fail when taken at face value. He is focusing on the imbalance arising from applying the Stefan-Boltzmann law to a planet with an atmosphere. As noted below, there are other challenging issues, such as using the average frequency of visible light for calculating W/m^2, which is not realistic for Earth’s longwave radiation.
In the end the two satellite series were similar, but RSS has consistently exhibited more warming than UAH. Then, a little more than a decade ago, the group at NOAA headed by Zou produced a new data product called STAR (Satellite Applications and Research). They used the same underlying microwave retrievals but produced a temperature record showing much more warming than either UAH or RSS, as well as all the weather balloon records. It came close to validating the climate models, although in my paper with Christy we included the STAR data in the satellite average and the models still ran too hot. Nonetheless, it was possible to point to the coolest of the models, compare them to the STAR data and find a match, which was a lifeline for those arguing that climate models are within the uncertainty range of the data.
Until now. In their new paper Zou and his co-authors rebuilt the STAR series based on a new empirical method for removing time-of-day observation drift and a more stable method of merging satellite records. Now STAR agrees with the UAH series very closely — in fact it has a slightly smaller warming trend. The old STAR series had a mid-troposphere warming trend of 0.16 degrees Celsius per decade, but it’s now 0.09 degrees per decade, compared to 0.1 in UAH and 0.14 in RSS. For the troposphere as a whole they estimate a warming trend of 0.14 C/decade.
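For a sense of what these trends imply, here is a rough conversion to total warming over the satellite era (my arithmetic, assuming a 1979–2023 span of roughly 4.4 decades applied to each linear trend):

```python
# Implied total mid-troposphere warming under each linear trend quoted above.
# The 4.4-decade span (1979-2023) is an assumption for illustration.

trends = {              # C per decade
    "STAR (old)": 0.16,
    "STAR (new)": 0.09,
    "UAH":        0.10,
    "RSS":        0.14,
}
decades = 4.4

for name, t in trends.items():
    print(f"{name}: {t * decades:.2f} C total")  # 0.70, 0.40, 0.44, 0.62
```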
Figure 14: Global mean Total Troposphere Temperature (TTT; TMT adjusted by TLS) time series (blue lines) and smoothed time series (red lines). The locally weighted regression method (Cleveland, 1979) is used for the smoothing. Both the TMT and TLS used for TTT generation are from STAR V5.0.
BizNews TV interviewed Dr. John Christy last week as shown in the video above. For those who prefer to read what was said, I provide a lightly edited transcript below in italics with my bolds and added images. BN refers to questions from the interviewer and JC refers to responses from Christy.
BN: Joining me today is Dr John Christy, climate scientist at the University of Alabama in Huntsville and Alabama State climatologist since 2000. Dr Christy, thank you so much for your time. You’ve described yourself as a climate nerd and apparently you were 12 when your unwavering desire to understand weather and climate started. Why climate?
JC: Well, I think it was more like 10 years old when I was fascinated with some unusual weather events that happened in my home area of California. That began a fascination for me, and I wanted to try to figure out why things happen the way they did. Why did one year have more rain (that’s a big story in California, does it rain or not) and another year was very dry? Why were the mountains covered with snow in one April and not another? In fact, I have here April 1967, which I recorded as a teenager. This has been a passion of mine forever, and as it turns out, now that I’m as old as I am, I still can’t figure out why one year is wetter than another.
BN: Well, you seem to be getting a lot closer than most people would. I think it was in 1989 when you and NASA scientist Roy Spencer pioneered a new method of measuring and monitoring temperature records via satellites, in use from that time up until now. Why did you feel you needed to develop a new method to begin with, and how did it differ in terms of readings from the established methods at the time?
JC: Well, the issue was we only had surface temperature measurements, and they are scattered over the world. They don’t cover much of the world at all, actually, mainly just the land regions and scattered places on the ocean. And the measurement itself is not that robust. The stations move, the instruments change through time, and so it’s a very difficult thing to detect. In fact, a small change in the area right around the station can really affect the temperature of that station.
So Roy Spencer and Dick McNider came up with an idea about looking at some satellite data. This is the temperature of the deep layer of the atmosphere, from the surface to about 8,000 meters. If we could see the temperature of that bulk atmospheric layer, we would have a very robust measurement, and the microwave sensors on the NOAA polar-orbiting satellites did precisely that. So we were the first to really put those data into a simple data set that had the temperature, at that time, month by month since about November 1978.
BN: Okay, and how did the readings differ from the climate science at the time?
JC: First of all, they differed because we had a global measurement. We really did see the entire globe from the satellite, because its orbit is polar and the Earth spins around underneath. Every day we have 14 orbits as the Earth spins underneath, so we see the entire planet. That’s one big difference.
The other one is that the actual result did not show as much warming as what the surface temperatures showed. And we’re doing even more work now to demonstrate that a lot of the surface stations are spuriously affected by the growth of infrastructure around them, and so there’s kind of a false warming signal there. You don’t get the background climate signal with surface temperature measurements; you get a bit of what’s happening in the local area.
BN: Your research has to do with testing the theories posited by climate model forecasts, so you don’t actually do any modeling yourself. But what criteria do you use to test these theories?
JC: That’s a very good question, because in climate you hear all kinds of claims and theories being thrown out there. For a lot of people who don’t really understand the climate system, it’s a quick and easy answer just to say: oh, humans caused that, you know, it’s global warming, something like that. When in fact the climate system is very complex. So we look at these claims, and Roy Spencer and I are just a few of the people around the world who actually build data sets from scratch. I mean, we start with the photon counts of the satellite radiometers, or the original paper records of 19th-century East Africa, for example. We do all this from scratch so that we can test the claims that people make.
Once we build the data set, we test it to make sure we have confidence in it, that it’s telling us a truth about what’s happening over time. And then we check the claim. So, for example, we make surface temperature data sets that go back to the 19th century. Someone will say: well, this is now the hottest decade, or more records happened this decade than in the past. And we can demonstrate, in the United States especially, that’s not the case. You would need to go back to the 1930s if you want to see the real record temperatures that occurred at that time.
And for climate models we like to use the satellite data set, since it’s a robust deep-layer measurement; it’s measuring lots of mass of the atmosphere, the heat content really. That’s a direct value we can get out of the climate model, so we are comparing apples to apples: what the satellite produces and observes is what the climate model also generates, and we can compare them one to one.
In a paper Ross McKitrick and I wrote a couple of years ago, we found that 100% of the climate models were warming the atmosphere faster than it was actually warming. So that’s not a good result if you’re trying to test your theory of how the climate works with the model against what actually happens.
BN: How much do you think the deeply over-exaggerated predictions of Doom and Gloom have to do with the methodology substantiated by confirmation bias?
JC: That’s an interesting question, because we’re a bit confused as well. We have been publishing these papers since 1994 that demonstrate models warm too much relative to the actual climate, and yet we don’t see an improvement in climate models in trying to match reality with their model output. Now, I think a number of modelers understand that yes, there is a difference there and the models are just too hot. But what process has gone wrong in the models is a difficult question for these folks, because models have hundreds of places where you can turn a little knob, change a coefficient, and that will change the result. It’s not a physical thing, it’s not based on physics; it’s the model parameterizations, the little pieces of the model that try to represent an actual part of the atmosphere. For example, when do clouds form? That’s a pretty big question. How much humidity in the atmosphere is required to create a cloud? Because once the cloud forms it reflects sunlight and cools the Earth. So that’s one of the big questions.
So in testing the models we like to use the bulk atmospheric temperature; it’s a very direct measurement that models produce, and so we can then say there’s a problem here with climate models.
BN: To what degree did your observation on data differ from their forecasts?
JC: Generally it’s about a factor of two. At times it’s been more, but on average the latest models (CMIP6) are warming the deep layer of the atmosphere about twice too fast, and that’s a real problem. Now we’re looking at over 40 years with which we can test these models, and they’re already that far off.
Figure 8: Warming in the tropical troposphere according to the CMIP6 models. Trends 1979–2014 (except the rightmost model, which is to 2007), for 20°N–20°S, 300–200 hPa.
So we should not use them to tell us what’s going to happen in the future, since they haven’t even gotten us to the right place in the last 40 years.
BN: Given that your real-world data refuted the forecasts every time for decades, why then (and I recognize that this is conjecture) are, let’s say, 97% or 99% of scientists so firmly behind the climate-crisis narrative?
JC: Yeah, I don’t know how many are really fully behind that climate-crisis narrative. I saw a recent survey where maybe about 55 percent were of the opinion that climate warming was going to be a problem. Warming itself is not a problem: I mean, the Earth has been warmer in the past than it is today, so the Earth has survived that before. And I don’t think putting extra plant food in the atmosphere is going to be a real problem for us to overcome. I do think the world is going to warm some from the extra CO2, but there are a lot of benefits that come from that.
You’re dealing with a question about human nature and funding and so on. I think we all know that the more dramatic the story is, especially in the political world, the more attention you will get. Therefore your work can be highlighted, and that helps you with funding and attention and so on. That’s part of what’s going on here. Then there’s the other, stronger political narrative: there are groups among the world political elite that like to have a narrative that scares people, so that they can then offer a solution. It’s a simple way to say: elect me to this office and I will solve this problem.
Then you are facing people like us who actually produce the data, and we can report on extreme events and so on and say: well, you know, there isn’t any change in these extreme events, so what’s the problem you’re trying to solve? And then we look at the other side of that issue and say: okay, if you actually implement this regulation or this law, it’s not going to make any difference to the climate. So you kind of lose on both ends of that story.
BN: You’re a distinguished professor of atmospheric science and also director of the Earth System Science Center at the University of Alabama in Huntsville; these are prominent positions. How have you managed to hold on to them with climate views that are so divergent from the norm?
JC: Well, the environment in the state of Alabama is different from what you have in Washington. I’m from California, way across the country, and I tell people that one of the reasons I like to live in Alabama is that in Alabama you can call a duck a duck; you can just be direct about what’s going on, and you’re not going to be given the evil eye or cast out. As it is now in the climate establishment, saying that all the models are warming too much and that there is no disaster arising causes great consternation, because the narrative has been built over the last 30 years that we are supposed to be in a catastrophe. To come out and say: well, here’s the data, and the data show there is no catastrophe looming; we’re doing fine, the world is doing fine, human life is thriving where it’s allowed to. So what’s the problem here you’re trying to solve?
BN: Did you ever manage to get your findings to policymakers who have the influence to do something about it?
[An important proof against the CO2 global-warming claim was included in John Christy’s testimony on 29 March 2017 before the House Committee on Science, Space and Technology. The text and diagram below are from that document, which can be accessed here.
IPCC Assessment Reports show that the IPCC climate models performed best versus observations when they did not include extra GHGs and this result can be demonstrated with a statistical model as well.
Figure 5. Simplification of IPCC AR5 shown above in Fig. 4. The colored lines represent the range of results for the models and observations. The trends here represent trends at different levels of the tropical atmosphere from the surface up to 50,000 ft. The gray lines are the bounds for the range of observations, the blue for the range of IPCC model results without extra GHGs, and the red for IPCC model results with extra GHGs. The key point displayed is the lack of overlap between the GHG model results (red) and the observations (gray). The non-GHG model runs (blue) overlap the observations almost completely.]
JC: Well, I’ve been to Congress 20 times, testified before hearings. So the information is there and available, but I can’t force Congress to make legislation that matches the real world. The Congressional world is a political world, and things happen there that are kind of out of my reach and ability to influence.
BN: According to your research, you’ve also said that the climate models underestimate negative feedback loops. Can you explain what this mechanism is, and what effect missing these loops has on understanding climate for what it is?
JC: That’s a very complicated issue, and I don’t understand it all for sure, but we can say from some general observations what’s going on here. One of those general observations is that when a climate model warms the atmosphere by one degree Kelvin, it sends out 1.4 watts per meter squared: the atmosphere warms up and 1.4 watts of energy escape to space. When we use actual observations, when the real atmosphere warms up one Kelvin it sends out 2.6 watts of energy. That’s almost twice as much, and it tells you right there that the climate models are retaining energy that the real world allows to escape when it warms. So that’s a negative feedback: as the atmosphere warms a bit, the real world knows how to let that heat escape, whereas the models don’t; they retain it, and that’s why they keep building up heat over time.
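To see why that parameter matters so much, divide a forcing for doubled CO2 by each feedback value. The roughly 3.7 W/m2 forcing figure is my addition (the standard literature value), not Christy’s: the model-like 1.4 W/m2 per K implies roughly 2.6 K of equilibrium warming, while the observed 2.6 W/m2 per K implies roughly 1.4 K, close to the 1.3 C Christy cites later.

```python
# Equilibrium warming for doubled CO2 is roughly forcing / feedback.
# F2x ~ 3.7 W/m^2 is the commonly cited doubled-CO2 forcing (assumed here).

F2x = 3.7                # W/m^2, forcing from doubled CO2 (assumed value)

for lam in (1.4, 2.6):   # W/m^2 per K: models vs. observations above
    print(f"lambda = {lam}: ECS ~ {F2x / lam:.1f} K")
# 1.4 -> ~2.6 K (model-like); 2.6 -> ~1.4 K (observation-based)
```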
BN: What other variables do you look at?
JC: As state climatologist I deal a lot with very practical questions that people ask. They want to know: is it getting hotter, or is it getting wetter? Are rainstorms getting heavier, and are the hurricanes getting worse, and so on. I actually wrote a booklet called A Practical Guide to Climate Change in Alabama, but it covers a lot of the country as well. It’s free; you can download it from the first page of my website, the Alabama State Climatologist. I answer a lot of these very practical questions, and as we go down the list: droughts are not getting worse over time; heavy rainstorms are not getting worse over time, here in the Southeast in fact. Ross McKitrick and I also had a paper where we went back to 1878 and demonstrated that the trends are not significant. Hurricanes are not going up at all; in fact 2022 is going to be one of the quietest seasons we’ve had in a while. Tornadoes are not becoming more numerous; heat waves are not becoming worse. So one after another, the weather events that people really care about, that if they changed could cause problems or catastrophe, we find are not changing; they’ve always been around. [Title below in red is a link to Christy’s booklet.]
BN: Some of the biggest critics of climate skeptics say: okay, it’s fair that one extreme weather event doesn’t say much, but they argue that there are very particular trends that have been on the increase. Have you observed this recently at all?
JC: That’s exactly the kind of thing we build data sets to discover. For example, there is a story, and there is some evidence for it, that in the last hundred years there’s been an increase in heavy rain events in part of our country; not all of it, just part of the country. So I built a data set that went back to the 1860s. We looked at that very carefully and found that when you go back far enough, there were a lot of heavy events back then. So over the long time period of 140 years or more, we don’t see an upward trend in those kinds of events. That’s why I think it has great value to build these data sets: so you can specifically answer the question and the claim that is being made.
One of the worst claims was made by the New York Times when they were talking about how many record high temperatures occurred in a recent heat wave around the country. So I looked at that carefully, and they were allowing stations to be included that had only 30 years or even less of data. Some had a hundred years, but a lot of them had just 30. When you become very systematic, you say: I’m only going to allow stations that have a hundred years, so that every station that measured in 2022 can be compared with the entire time series. Then their story falls apart, because the 1930s and the ’50s were so hot in our country that they still hold the records for the number of high-temperature events.
The scary thing for me is that as much as it completely falls apart, there’s no logic to it, yet it still firmly stands as what most people believe.
You have to credit those in the climate establishment and the media, or whoever is behind all this: they have been successful in scaring people about the climate. You now find it even in grade-school textbooks. Almost every news story that comes out, and this is where this establishment is very good, has some kind of line in it about climate change. They don’t ever go back and talk to someone who actually builds these data sets and who could tell them that the worst really happened 120 years ago. They just make those claims.
Other than the fact that sea level is rising a bit, the extreme events are just not there to really cause problems now. We do have a problem of greater damages from extreme events, but mainly because we’ve built so much more stuff and placed it in harm’s way. Our coastlines are crowded with condominiums, entertainment parks, retirement villages and so on. There are so many more of them that when a hurricane does come, it’s going to wipe out a lot more, so the absolute value of those damages has gone up. But the number of hurricanes, their strength and so on, the background climate, has not caused that problem. It’s just that we like to build things in places that are dangerous.
We have records of sea level rise, and it’s on the order of about an inch per decade, except in places where the land is sinking. You can find that on the Louisiana Gulf Coast and places like that, but otherwise it’s about an inch per decade. I tell folks that an inch per decade, two and a half centimeters, is not your problem. It’s the 10 feet in six hours from the next hurricane that’s your problem. If you can withstand a rise in sea level of 10 feet in six hours, then you’re probably going to be okay. But if you can’t, then a hurricane can really cause problems, and we just have more exposure to that kind of situation now than we’ve had before.
BN: What about the trend with sea level rise? Should we be worried about future generations having to deal with issues that might not affect us in our lifetime but will eventually threaten their lives?
JC: I think your listeners need to understand that sea level is a dynamic variable; it goes up, it goes down. It has been over a hundred meters lower than today just in the last 25,000 years, and there was a period from about 15,000 years ago to 8,000 years ago when sea level rose about 12 centimeters per decade for seven thousand years. That’s a lot more than the two and a half centimeters a decade it’s doing now, so the world has managed to deal with rising sea levels before. If we go back to the last warm period, about 130,000 years ago, sea level was higher than it is now by about five meters or so. So just naturally we would expect at least another five meters of sea level rise; it won’t happen tomorrow, and it won’t happen this century. But it will likely continue to rise slowly, and that should be placed in your thinking if you’re building a dock, say for a military port, or something you want to last a long time. Put a cushion in there, a way to handle another half meter of sea level rise in the next hundred years, and you should be okay.
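A quick check of the scale of those numbers (my arithmetic): 12 centimeters per decade sustained for seven thousand years comes to roughly 84 meters, consistent with the more-than-100-meters figure, while the modern inch per decade is about a quarter meter per century.

```python
# Comparing the post-glacial meltwater rise with the modern rate.

meltwater = 0.12 * 700   # m: 12 cm/decade over 7,000 years (700 decades)
modern = 0.025 * 10      # m per century at ~1 inch (2.5 cm) per decade

print(meltwater, "m total post-glacial rise")  # ~84 m
print(modern, "m per century today")           # ~0.25 m
```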
BN: About your temperature records: How much has the Earth warmed, let’s say, over the last 40 years?
JC: Yes. With this November we finished 43 years of measurements. In that time the temperature has risen half a degree Celsius. And you might want to look at other things about the world. World agricultural production has expanded tremendously. Nations are now exporting more grain than before, because people are pretty smart and figure out how to do things better all the time. Growing food is one thing they figured out how to do better as time passed, so the climate warming of half a degree has not caused a major catastrophe at all. Wealth has increased around the planet. Now, some governments are trying to prevent you from growing your wealth, but it’s hard to stop people who like to have food and conveniences in their lives, and it’s hard to pass laws that say you can’t enjoy life the way you want to.
BN: How much of the warming are you reliably able to say is a result of human activity?
JC: Okay, the answer is none, in the sense that you said “reliably.” I can’t come up with an answer for that reliably. Warming from humans would be the warming that is not due to El Niño, and not due to the volcanic suppression of temperatures earlier in the record; that comes out to about a tenth of a degree per decade.
Are there other factors that we can say for sure have played a role in the incremental warming of the planet over the last few decades? We had a couple of volcanoes, El Chichón and Pinatubo, and those cooled the planet in the first half of that 40 years. So that tilted the trend up, and that’s where I come up with one tenth of a degree per decade as the warming rate, which means the climate is not very sensitive to carbon dioxide or greenhouse-gas warming. It’s probably half as sensitive as models tend to report, or even less.
BN: So if the CO2 inserted into the atmosphere were to double, what would the results be?
JC: I actually had a little paper on that. We’re expecting that maybe about 2070 or 2080 CO2 will be double what it was back in 1850, and the warming from that amount will be about a degree; 1.3 C is what I calculated. The general rule I’ve found about people is they don’t mind an extra degree of temperature. In fact, if you look at the United States, the average American experiences a much warmer temperature now than a hundred years ago, because the average American has moved south, to much warmer climates: California, Arizona, Texas, Alabama, Florida and so on. Because cold is not a whole lot of fun. Skiing, snowmobiling, ice fishing and so on, that’s fine, but the average person likes it to be warm, and that’s why many people in our country have moved to warmer areas. So I don’t think that 1.3 Kelvin is going to matter much; what people really care about is those extreme events and so on.
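A rough timing check on that doubling date, using present-day values I am supplying (not Christy’s): with pre-industrial CO2 near 285 ppm, doubling means about 570 ppm; at roughly 420 ppm today and about 2.5 ppm per year of growth, a straight-line extrapolation lands in the early 2080s, in the ballpark of his 2070 or 2080.

```python
# Straight-line extrapolation to doubled CO2. All inputs are assumed
# round numbers for illustration, not figures from the interview.

pre_industrial = 285.0  # ppm, ~1850 (assumed)
today = 420.0           # ppm, early 2020s (assumed)
rate = 2.5              # ppm/yr recent growth (assumed)

years = (2 * pre_industrial - today) / rate
print(2023 + round(years))  # ~2083, close to "about 2070 or 2080"
```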
BN: What do your temperature records tell you about previous hotter temperatures?
JC: Since 1979, what we see is an upward trend in the global temperature that I think is manageable. But it goes up and down: the 1997–98 El Niño was a big event, and the 2016 El Niño was a big event. We also see the downs that come when a volcano goes off and cools the planet. Those are bigger effects than the small trend that’s going up. The global temperature can change by two tenths of a degree from month to month, when we’re talking about a trend of a tenth per decade. People say we can handle month-to-month changes but we can’t handle 20 years’ worth of a small change. That just doesn’t make sense, and the real-world evidence is pretty clear that humans have done extremely well as our planet has warmed a little bit, whether it’s natural or not.
BN: Can you tell me about the Milankovitch cycles?
JC: Milankovitch cycles are the cycles of the Earth’s orbit around the Sun: the tilt of its axis and its distance from the Sun. It is not a perfectly circular orbit; it’s kind of an ellipse, and it changes through time. All those factors work together to put a little more solar energy in certain places and less in others. These cycles are likely related to the ice ages we talk about.
If you can melt the snow in Canada in the summer, you won’t have an ice age. Snow falls in the winter, and if you can’t melt it in the summer, because the Earth is tilted away a bit in July and August, the snow hangs around all summer long. The next winter more snow piles up, the next summer it doesn’t melt, and so on year after year. You get a mechanism that adds and extends snow cover, leading to an ice age.
So the tilt of the axis and the other parameters I just mentioned can modulate how much sunlight comes in during the summertime in Canada, and the swing is up to 100 watts per square meter, which is a lot of energy difference over time. That’s probably the strongest theory, with a good amount of evidence, that orbital changes can cause huge changes in the climate, from ice ages to the current interglacial.
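For readers who want to see where numbers of that order come from, below is a rough sketch using the textbook daily-mean insolation formula. Obliquity alone moves summer-solstice insolation at 65°N by roughly 40 W/m² between its extremes; precession and eccentricity changes add more, consistent with the “up to 100 watts per square meter” figure above. The solar constant and tilt range are standard values; everything else is illustration.

```python
import math

S0 = 1361.0  # solar constant, W/m^2 (orbital distance effects ignored)

def daily_mean_insolation(lat_deg, decl_deg):
    """Daily-mean top-of-atmosphere insolation (W/m^2) at a latitude
    for a given solar declination, via the standard formula."""
    lat, decl = math.radians(lat_deg), math.radians(decl_deg)
    cos_h0 = max(-1.0, min(1.0, -math.tan(lat) * math.tan(decl)))
    h0 = math.acos(cos_h0)  # half-day length in radians
    return (S0 / math.pi) * (h0 * math.sin(lat) * math.sin(decl)
                             + math.cos(lat) * math.cos(decl) * math.sin(h0))

# At summer solstice the declination equals the axial tilt, which
# drifts between about 22.1 and 24.5 degrees over ~41,000 years.
for tilt in (22.1, 24.5):
    print(tilt, round(daily_mean_insolation(65.0, tilt), 1))
```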
BN: There are claims that the way humans are living is causing the daily extinction of two to three hundred species. Is this a natural course of evolution?
JC: You know, 99% of the species that have ever lived are extinct, so extinction is pretty natural. Obviously humans cause some extinctions. When you destroy the environment of a small place, and that was the only place a particular species lived, then yes, humans caused that extinction. Did climate change from humans cause any extinctions? I think that jury is still out, because most species love the extra carbon dioxide. Plants do specifically, and everything that eats plants loves it too, so you might say the extra carbon dioxide has actually helped the whole biosphere in some sense.
But what humans do to the land and the water, if things are not cleaned up properly, if you really poison the surface and the air, can cause real problems for the species living out there. And that’s why we have rules about not putting poison in the air or in the water.
BN: Does that qualify as climate change?
JC: No. To call carbon dioxide a poison, you really have to scratch your head, because plants love the stuff. It invigorates the biosphere. When did all of this greenery evolve, and the corals occur and grow and develop? It was when there was two to five times as much CO2 in the air as there is now. Carbon dioxide invigorates the biosphere, so we’re actually putting back carbon dioxide that had been in the atmosphere earlier, and I don’t think the world is going to have much problem with that in terms of its biosphere. The issue is whether the climate is going to become so bad that some things can’t handle it, and I don’t see the evidence of that happening.
BN: Critics of your views on climate have argued that you undercut your credibility by making claims that exceed your data and that you’re unwilling to agree with different findings. How do you respond to that?
JC: Show me a finding and let me look at it, and if it’s a valid finding, fine, I’ll agree with it. You can find anything on the web these days about claims someone might make, but show me the evidence. Let me see what you’re complaining about and we can have a discussion about it. I just had a paper published last week on snowfall in the western United States that shows, for the main snowfall regions, there is no trend in snowfall. The amount of snow falling right now is the same as it was 120 years ago. Snow is still falling in the western mountains of the United States; that’s evidence, that’s data. So when someone claims that snowfall is going away out west, I say: look at this evidence, from real station data that people recorded from 1890 to now.
So I can answer that question with real information. You don’t see many people like me in debates because the invitations are not offered. In fact I’ve been uninvited. Someone on a particular panel will say, hey, let’s get this guy to come speak to us, and then I receive the disinvitation, because I was not going to go along with the theme of their climate-change-as-catastrophe presentation.
BN: You referred to times in the past when CO2 levels were significantly higher than they are now. Do records show any negative effects from such high CO2?
JC: Well, when you say negative, that’s almost a moral question: was it good or bad that the dinosaurs went extinct? I think the dinosaurs would have an opinion about that. Let me rephrase: if CO2 had to be at those levels today, would it negatively impact humanity? We see carbon dioxide increasing as humans produce energy so that their lives can be enhanced. There’s a direct relationship between your ability to thrive and how much energy you’re able to use, and carbon is the main source of that energy today.
Think about it: we didn’t leave the Stone Age because we ran out of rocks. We left the Stone Age because something better came along: bronze, iron and so on. In terms of energy, we didn’t leave the wood-and-dung age because we ran out of trees or excrement; we found a better source, which was carbon: coal, oil and so on. And transportation: we didn’t leave the horse-and-buggy age because we ran out of horses. It was because Henry Ford made a vehicle that was cheap and affordable. My great-grandfather, who was in destitute poverty in Oklahoma in the 1930s, had a Model T. And another thing about Henry Ford: he didn’t get the government to kill all the horses so you’d have to buy his cars. Horses were still available for the poorest people. And he didn’t make the government build gas stations or drill for oil; that was done by the market, at the private level.
But today we have a government that says, this is what we want for the energy structure, so we’re going to use your taxpayer money to put out all these charging stations and force you to buy electric cars, or at least subsidize them tremendously, and put up all these windmills and so on, at great expense and great environmental wreckage.
I can assure you that without energy, life is brutal and short. Energy is the thing that has caused our lifespans to double, so that children no longer fear diseases that used to wipe out millions, because of the advances energy has brought through electricity, experimentation, and all the sciences we have developed. All of that is based upon access to energy.
So yes, developing countries are going to get their energy; they’re going to find the energy they need. I’m not making a prediction, just an observation. Right now carbon is the cheapest and most effective source, and it is very high density. So we will see these countries use carbon to advance, and we should not stand in their way, because they want to live like us, who already have pretty big carbon footprints.
If you want some comfort in that, remember that the carbon dioxide we’re putting back into the atmosphere is invigorating the biosphere, and it also represents people living longer and better lives. There is just no question that as energy is made available and affordable, people live longer and better lives. I think that’s ultimately the inertia that will carry this issue past all the preaching about carbon dioxide problems.
BN: Environmentalists would argue that they’re not against electricity and prosperity; they’re just advocating for a better, cleaner way to get them.

JC: It’s a tremendous misconception that a windmill or a solar panel can somehow give you cleaner and more reliable energy than what you have now. That’s just not true. To build a windmill there’s tremendous environmental wreckage you have to go through, in terms of all the minerals you have to yank out of the earth and process. And processing takes energy, by the way. And then there’s building all the transmission lines. The energy in wind and solar is so diffuse, so weak, that you have to gather up huge amounts of land to collect it. Robert Bryce said it well when he called it the iron law of power: the weaker the source of energy, the more stuff and material you need to gather it, concentrate it, and make it useful.
You have to pay huge dollar and environmental costs to make a windmill or a solar panel, which by the way doesn’t last forever. Carbon, which packs a tremendous amount of energy into a very small, dense space, has a much, much smaller environmental footprint than solar or wind. In fact the ratio is about one to a thousand or two thousand in terms of the square footage you need.
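That “one to a thousand” ratio can be sanity-checked with back-of-envelope power densities. The densities assumed below (roughly 1 W per square meter of land for a wind farm, roughly 1,000 W per square meter for a thermal plant site) are commonly cited orders of magnitude, not measurements; the point is the ratio, not the exact areas.

```python
# Land needed to supply 1 GW of average output at assumed power densities.
TARGET_WATTS = 1e9

for source, watts_per_m2 in [("wind farm", 1.0), ("thermal plant", 1000.0)]:
    area_km2 = TARGET_WATTS / watts_per_m2 / 1e6  # m^2 -> km^2
    print(f"{source}: ~{area_km2:,.0f} km^2")
# wind farm: ~1,000 km^2 vs thermal plant: ~1 km^2 -- a 1,000:1 ratio
```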
BN: Look at windmill and solar panel farms. Not only are they ugly, but they cause tremendous environmental damage in their construction and maintenance. What are the long-term effects decades from now? If we just continue to get our energy from fossil fuels, how bad can it get?
JC: Well, start with how good it can get. People will have access to affordable energy, so they’re going to live longer and live better lives. The impact on the climate is about the only downside you can think of. The sea level is going to continue to rise, as it has been rising for several hundred years, and at a manageable rate by the way. And the atmosphere might warm some more, but certainly not with some catastrophic effect that will cause us to lose our ability to thrive.
I’m just very optimistic that people are clever and can figure out how to adapt to whatever happens. The real issues I deal with as a state climatologist are the extreme events we know are going to happen that you’re not ready for. The flood that happened 50 years ago is going to come back again, and it’s going to cause real problems if you don’t build your infrastructure and put your houses and industry where they can be safe, and if you build up too much on the coast, where you can be clobbered by a hurricane. It’s these kinds of natural extreme events to which we’re far more vulnerable right now, rather than some small and gradual change the climate system might undergo.
BN: I’ve read the question raised somewhere, and I’m sure you get it a lot: do you receive any funding from the fossil fuel industry?
JC: No, I do not, and I made that decision way back in the early 1990s. I might make a fossil fuel company mad with some of the information I produce, but so be it. I can go to bed at night and not worry: did I accommodate some agenda somewhere? I’m just after what the observations say. Can I build the best observational datasets to answer the questions of climate that we have? That’s what I want to do.
BN: I suppose one of the biggest tragedies would be that such funding would discredit the real science and the fundamental research you’re doing; it would be a non-starter, because people would immediately dismiss it.
JC: That’s unfortunate, because the perception is this: suppose a fossil fuel company paid someone to do some research because it really wanted to know the answer about something, and this person was completely honest, did the work properly, and provided the answer to the fossil fuel company. Well, that answer would be considered tainted because it came from a fossil fuel company. Well, hello: think about what environmental advocacy and pressure groups do all the time. They pay tremendous sums to people so they can come up with an answer that gives them leverage in claiming this is a catastrophic problem. So I can at least take that perception off the table.
BN: Lastly, are you aware of any ways in which geoengineering could be affecting the natural balance of things? Is it being done more than we’re aware of, and could it backfire?
JC: Anytime humans do something they’re going to have an impact, no question about that. You could call it geoengineering, but inadvertently we have made some desert valleys cooler because we now irrigate crops. We have taken water that fell in one place and moved it to another. That’s a bit of geoengineering right there. And by the way, a lot of those places feed much of the world, so you can’t say it’s bad, I suppose.
But the other question about geoengineering is: can we do something to prevent a perceived problem?
And that’s the real danger, I think, because you don’t know the consequences when you start tinkering with a very complex and dynamic system. So I would say stay away from that. Suppose someone did a big geoengineering experiment and something bad happened somewhere. That country would sue the world and say: look, you made this bad thing happen to us; you are liable. And then we’re getting nowhere in terms of preventing some problem on the planet.
Two fallacies ensure meaningless public discussion about a climate “crisis” or “emergency.” H/T to Terry Oldberg for comments and writings prompting me to post on this topic.
The first corruption is the frequency with which climate claims rest on fallacies of Equivocation. For instance, “climate change” can mean all observed events in nature, but as defined by the IPCC it refers only to changes 100% caused by human activities. Similarly, forecasts from climate models are proclaimed to be “predictions” of future disasters, but renamed “projections” in disclaimers against legal liability. And so on.
The second error is the Fallacy of Misplaced Concreteness, also known as Reification: mistaking an abstraction for something tangible and real in time and space. We often see this in both spoken and written communications. It can take several forms:
♦ Confusing a word with the thing to which it refers
♦ Confusing an image with the reality it represents
♦ Confusing an idea with something observed to be happening
Examples of Equivocation and Reification from the World of Climate Alarm
“Seeing the wildfires, floods and storms, Mother Nature is not happy with us failing to recognize the challenges facing us.” – Nancy Pelosi
“Mother Nature” is a philosophical construct and has no feelings about people.
“This was the moment when the rise of the oceans began to slow and our planet began to heal …” – Barack Obama
The ocean and the planet do not respond to someone winning a political party nomination. Nor does a planet experience human sickness and healing.
“If something has never happened before, we are generally safe in assuming it is not going to happen in the future, but the exceptions can kill you, and climate change is one of those exceptions.” – Al Gore
The future is not knowable, and can only be a matter of speculation and opinion.
“The planet is warming because of the growing level of greenhouse gas emissions from human activity. If this trend continues, truly catastrophic consequences are likely to ensue.” – Malcolm Turnbull
Temperature is an intensive property of an object, so the temperature of “the planet” cannot be measured. The likelihood of catastrophic consequences is unknowable. Humans are blamed as guilty by association.
“Anybody who doesn’t see the impact of climate change is really, and I would say, myopic. They don’t see the reality. It’s so evident that we are destroying Mother Earth.” – Juan Manuel Santos
“Climate change” is an abstraction anyone can fill with subjective content. Efforts to safeguard the environment are real, successful and ignored in the rush to alarm.
“Climate change, if unchecked, is an urgent threat to health, food supplies, biodiversity, and livelihoods across the globe.” – John F. Kerry
To the abstraction “Climate Change” is added abstract “threats” and abstract means of “checking Climate Change.”
“Climate change is the most severe problem that we are facing today, more serious even than the threat of terrorism.” – David King
Instances of people killed and injured by terrorists are reported daily and are a matter of record, while problems from Climate Change are hypothetical.
Corollary: Reality is also that which doesn’t happen, no matter how much we expect it to.
Climate Models Are Built on Fallacies
A previous post, Chameleon Climate Models, described the general issue of whether a model belongs on the bookshelf (theoretically useful) or whether it passes real-world filters of relevance, thus qualifying as useful for policy considerations.
Following an interesting discussion on her blog, Dr. Judith Curry has written an important essay on the usefulness and limitations of climate models.
The paper was developed in response to a request from a group of lawyers wondering how to regard claims based upon climate model outputs. The document is entitled Climate Models and is a great, informative read for anyone. Excerpts that struck me are in italics, with my bolds and added images.
Climate model development has followed a pathway mostly driven by scientific curiosity and computational limitations. GCMs were originally designed as a tool to help understand how the climate system works. GCMs are used by researchers to represent aspects of climate that are extremely difficult to observe, experiment with theories in a new way by enabling hitherto infeasible calculations, understand a complex system of equations that would otherwise be impenetrable, and explore the climate system to identify unexpected outcomes. As such, GCMs are an important element of climate research.
Climate models are useful tools for conducting scientific research to understand the climate system. However, the above points support the conclusion that current GCM climate models are not fit for the purpose of attributing the causes of 20th century warming or for predicting global or regional climate change on timescales of decades to centuries, with any high level of confidence. By extension, GCMs are not fit for the purpose of justifying political policies to fundamentally alter world social, economic and energy systems.
It is this application of climate model results that fuels the vociferousness of the debate surrounding climate models.
Evolution of state-of-the-art Climate Models from the mid 70s to the mid 00s. From IPCC (2007)
The actual equations used in the GCM computer codes are only approximations of the physical processes that occur in the climate system.
While some of these approximations are highly accurate, others are unavoidably crude. This is because the real processes they represent are either poorly understood or too complex to include in the model given the constraints of the computer system. Of the processes that are most important for climate change, parameterizations related to clouds and precipitation remain the most challenging, and are the greatest source of disagreement among different GCMs.
There are literally thousands of different choices made in the construction of a climate model (e.g. resolution, complexity of the submodels, parameterizations). Each different set of choices produces a different model having different sensitivities. Further, different modeling groups have different focal interests, e.g. long paleoclimate simulations, details of ocean circulations, nuances of the interactions between aerosol particles and clouds, the carbon cycle. These different interests focus their limited computational resources on a particular aspect of simulating the climate system, at the expense of others.
Overview of the structure of a state-of-the-art climate model. See Climate Models Explained by R.G. Brown
Human-caused warming depends not only on how much CO2 is added to the atmosphere, but also on how ‘sensitive’ the climate is to the increased CO2. Climate sensitivity is defined as the global surface warming that occurs when the concentration of carbon dioxide in the atmosphere doubles. If climate sensitivity is high, then we can expect substantial warming in the coming century as emissions continue to increase. If climate sensitivity is low, then future warming will be substantially lower.
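A short sketch of why this single number matters so much. Under the standard logarithmic forcing rule, the low and high ends of the commonly quoted ECS range give very different warming for one doubling of CO2; the ECS values below are illustrative endpoints of that range, not estimates of my own.

```python
import math

def equilibrium_warming(ecs, c_ppm, c0_ppm=280.0):
    """Equilibrium warming (deg C) for a CO2 rise, given an
    equilibrium climate sensitivity (ECS, deg C per doubling)."""
    return ecs * math.log(c_ppm / c0_ppm) / math.log(2.0)

# One doubling (280 -> 560 ppm) across a low / central / high ECS.
for ecs in (1.5, 3.0, 4.5):
    print(f"ECS {ecs}: {equilibrium_warming(ecs, 560.0):.1f} C")
```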
In GCMs, the equilibrium climate sensitivity is an ‘emergent property’ that is not directly calibrated or tuned.
While there has been some narrowing of the range of modeled climate sensitivities over time, models can still be made to yield a wide range of sensitivities by altering model parameterizations. Model versions can be rejected or not, subject to the modelers’ own preconceptions, expectations and biases about the outcome of the equilibrium climate sensitivity calculation.
Further, the discrepancy between observational and climate model-based estimates of climate sensitivity is substantial and of significant importance to policymakers. Equilibrium climate sensitivity, and the level of uncertainty in its value, is a key input into the economic models that drive cost-benefit analyses and estimates of the social cost of carbon.
Variations in climate can be caused by external forcing, such as solar variations, volcanic eruptions or changes in atmospheric composition such as an increase in CO2. Climate can also change owing to internal processes within the climate system (internal variability). The best known example of internal climate variability is El Nino/La Nina. Modes of decadal to centennial to millennial internal variability arise from the slow circulations in the oceans. As such, the ocean serves as a ‘fly wheel’ on the climate system, storing and releasing heat on long timescales and acting to stabilize the climate. As a result of the time lags and storage of heat in the ocean, the climate system is never in equilibrium.
The combination of uncertainty in the transient climate response (sensitivity) and the uncertainties in the magnitude and phasing of the major modes in natural internal variability preclude an unambiguous separation of externally forced climate variations from natural internal climate variability. If the climate sensitivity is on the low end of the range of estimates, and natural internal variability is on the strong side of the distribution of climate models, different conclusions are drawn about the relative importance of human causes to the 20th century warming.
Figure 5.1. Comparative dynamics of the World Fuel Consumption (WFC) and Global Surface Air Temperature Anomaly (ΔT), 1861-2000. The thin dashed line represents annual ΔT, the bold line—its 13-year smoothing, and the line constructed from rectangles—WFC (in millions of tons of nominal fuel) (Klyashtorin and Lyubushin, 2003). Source: Frolov et al. 2009
Anthropogenic (human-caused) climate change is a theory in which the basic mechanism is well understood, but whose potential magnitude is highly uncertain.
What does the preceding analysis imply for IPCC’s ‘extremely likely’ attribution of anthropogenically caused warming since 1950? Climate models infer that all of the warming since 1950 can be attributed to humans. However, there have been large magnitude variations in global/hemispheric climate on timescales of 30 years, which are the same duration as the late 20th century warming. The IPCC does not have convincing explanations for previous 30 year periods in the 20th century, notably the warming 1910-1945 and the grand hiatus 1945-1975. Further, there is a secular warming trend at least since 1800 (and possibly as long as 400 years) that cannot be explained by CO2, and is only partly explained by volcanic eruptions.
CO2 relation to Temperature is Inconsistent.
Summary
There is growing evidence that climate models are running too hot and that climate sensitivity to CO2 is on the lower end of the range provided by the IPCC. Nevertheless, these lower values of climate sensitivity are not accounted for in IPCC climate model projections of temperature at the end of the 21st century or in estimates of the impact on temperatures of reducing CO2 emissions.
The climate modeling community has been focused on the response of the climate to increased human caused emissions, and the policy community accepts (either explicitly or implicitly) the results of the 21st century GCM simulations as actual predictions. Hence we don’t have a good understanding of the relative climate impacts of the above (natural factors) or their potential impacts on the evolution of the 21st century climate.
Footnote:
There is a series of posts here which apply reality filters to test climate models. The first was Temperatures According to Climate Models, where both hindcasting and forecasting were seen to be flawed.
Given the persistent headlines about climate change over the years, it’s surprising how long it took the Nobel Committee to award the Physics prize to a climate modeler, which finally occurred earlier this month.
Indeed, Syukuro Manabe has been a pioneer in the development of so-called general circulation climate models (GCMs) and more comprehensive Earth System Models (ESMs). According to the Committee, Manabe was awarded the prize “For the physical modelling of the earth’s climate, quantifying variability, and reliably predicting global warming.”
What Manabe did was to modify early global weather forecasting models, adapting them to long-term increases in human emissions of carbon dioxide that alter the atmosphere’s internal energy balance, resulting in a general warming of surface temperatures, along with a much larger warming of temperatures above the surface over the earth’s vast tropics.
Unlike some climate modelers, like NASA’s James Hansen (who lit the bonfire of the greenhouse vanities in 1988), Manabe is hardly a publicity hound. And while politics clearly influences it (see Al Gore’s 2007 Prize), the Nobel Committee also respects primacy, as Manabe’s model was the first comprehensive GCM. He produced it at the National Oceanic and Atmospheric Administration’s Geophysical Fluid Dynamics Laboratory (GFDL) in Princeton, NJ. The seminal papers were published in 1975 and 1980.
And, after many modifications and renditions, it is also the most incorrect of all the world’s GCMs at altitude over the vast tropics of the planet.
Getting the tropical temperatures right is critical. The vast majority of life-giving moisture that falls over the world’s productive midlatitude agrosystems originates as evaporation from the tropical oceans.
The major determinant of how much moisture is wafted into our region is the vertical distribution of tropical temperature. When the contrast is great, with cold temperatures aloft compared to the normally hot surface, that surface air is buoyant and ascends, ultimately transferring moisture to the temperate zones. When the contrast is less, the opposite occurs, and less moisture enters the atmosphere.
Every GCM or ESM predicts that several miles above the tropical surface should be a “hot spot,” where there is much more warming caused by carbon dioxide emissions than at the surface. If this is improperly forecast, then subsequent forecasts of rainfall over the world’s major agricultural regions will be unreliable.
That in turn will affect forecasts of surface temperature. Everyone knows a wet surface heats up (and cools down) more slowly than a dry one (see: deserts), so getting the moisture input right is critical.
Following Manabe, vast numbers of modelling centers popped up, mushrooms fertilized by public — and only public — money.
Every six years or so, the U.S. Department of Energy collects all of these models, aggregating them into what they call Coupled Model Intercomparison Projects (CMIPs). These serve as the bases for the various “scientific assessments” of climate change produced by the U.N.’s Intergovernmental Panel on Climate Change (IPCC) or the U.S. “National Assessments” of climate.
Figure 8: Warming in the tropical troposphere according to the CMIP6 models. Trends 1979–2014 (except the rightmost model, which is to 2007), for 20°N–20°S, 300–200 hPa. John Christy (2019)
In 2017, University of Alabama’s John Christy, along with Richard McNider, published a paper that, among other things, examined the 25 applicable families of CMIP-5 models, comparing their performance to what’s been observed in the three-dimensional global tropics. Take a close look at Figure 3 from the paper, in the Asia-Pacific Journal of Atmospheric Sciences, and you’ll see that the model GFDL-CM3 is so bad that it is literally off the scale of the graph. [See Climate Models: Good, Bad and Ugly]
At its worst, the GFDL model is predicting approximately five times as much warming as has been observed since the upper-atmospheric data became comprehensive in 1979. This is the most evolved version of the model that won Manabe the Nobel.
In the CMIP-5 model suite, there is one, and only one, that works. It is the model INM-CM4 from the Russian Institute for Numerical Modelling, and the lead author is Evgeny Volodin. It seems that Volodin would be much more deserving of the Nobel for, in the words of the committee “reliably predicting global warming.”
Might this have something to do with the fact that INM-CM4 and its successor models have less predicted warming than all of the other models?
Patrick J. Michaels is a senior fellow working on energy and environment issues at the Competitive Enterprise Institute and author of “Scientocracy: The Tangled Web of Public Science and Public Policy.”
Peter J. Wallison and Benjamin Zycher examine the evidence in their Law & Liberty article What We Really Know About Climate Change. Excerpts in italics with my bolds and added images.
The assumption that humans are the single most significant cause of climate change is unsupported by the available science.
The Sixth Assessment Report (AR6) of the Intergovernmental Panel on Climate Change (IPCC) continues a long history of alarmist predictions with the deeply dubious statement that human-caused climate change has now become “irreversible.” President Biden and many others have called climate change an “existential threat” to humanity; and Biden claimed in his inaugural address to have heard from the Earth itself “a cry for survival.”
Hurricane Ida also has brought new claims about the dangers of climate change, but those assertions are inconsistent with the satellite record on tropical cyclones, which shows no trend since the early 1970s.
Yet the headline on the front page of the New York Times of August 12, 2021 was: “Greek Island Burns in a Sign of Crises to Come.” The accompanying article, continuing the multi-year effort of that newspaper to spread fears about climate change unsupported by evidence, argued that this was “another inevitable episode of Europe’s extreme weather [caused] by the man-made climate change that scientists have now concluded is irreversible.”
Almost every word in that sentence is either false or seriously misleading, continuing a multi-decade campaign of apocalyptic warnings about the effects of greenhouse gas emissions. Data on centuries and millennia of climate phenomena, constructed by scientists over many years around the world, show that the severe weather that the Times attributes to “man-made climate change” is consistent with the normal weather patterns and variability displayed in both the formal records and such proxy data as ice cores. In fact, there is little evidence that “extreme weather” events have become more frequent since 1850, the approximate end of the little ice age.
Increasing atmospheric concentrations of greenhouse gases have yielded detectable effects, but “scientists” in general have not, as the Times falsely stated, concluded that “extreme weather” is now “irreversible.” The statement itself comes from the “Summary for Policymakers” published as part of the most recent IPCC study of climate change; it is deeply problematic given the analyses and data provided in the scientific chapter (“The Physical Science Basis”) of the report to which the Times referred. Scientists disagree sharply about the significance of climate change and the analytic tools used to evaluate it, let alone whether it is “irreversible.”
There has been no upward trend in the number of “hot” days between 1895 and 2017; 11 of the 12 years with the highest number of such days occurred before 1960. Since 2005, NOAA has maintained the U.S. Climate Reference Network, comprising 114 meticulously maintained temperature stations spaced more or less uniformly across the lower 48 states, along with 21 stations in Alaska and two stations in Hawaii. They are placed to avoid heat-island effects and other such distortions as much as possible. The reported data show no increase in average temperatures over the available 2005-2020 period. In addition, a recent reconstruction of global temperatures over the past 1 million years—created using data from ice-sheet formations—shows that there is nothing unusual about the current warm period.
These alarmist predictions almost always are based upon climate models that have proven poor at reconstructing the past and ongoing temperature record. For example, the Times article implies that wildfires will increase in the future as the earth grows hotter. But there has been no trend in the number of U.S. wildfires in the 35 years since 1985, and global acreage burned has declined over past decades.
Unable to demonstrate that observed climate trends are due to human-caused (anthropogenic) climate change—or even that these events are particularly unusual or concerning—climate catastrophists will often turn to dire predictions about prospective climate phenomena. The problem with such predictions is that they are almost always generated by climate models driven by highly complex sets of assumptions about which there is significant dispute. It goes without saying that the predictions of models that cannot reconstruct what has happened in the past should not be given heavy weight in terms of predictions about the future, but that is exactly what many analysts are doing.
Extreme weather occurrences are likewise used as evidence of an ongoing climate crisis, but again, a study of the available data undercuts that assessment. U.S. tornado activity shows either no increase or a downward trend since 1954. Data on tropical storms, hurricanes, and accumulated cyclone energy (a wind-speed index measuring the overall strength of a given hurricane season) reveal little change since satellite measurements of the phenomena began in the early 1970s. The Palmer Drought Severity Index shows no trend since 1895.
Rising sea levels are another frequently cited example of the impending climate crisis. And yet sea levels have been rising since at least the mid-19th century, a phenomenon unlikely to have been caused only by human activity. The earth has been warming due to both natural and anthropogenic causes, resulting in some melting of sea ice, and a thermal expansion of sea water; the degree to which rising sea level has been caused by man is unknown. And the current rate of sea-level rise as measured by the satellites is 3.3 millimeters per year, or about 13 inches over the course of a century. Will that yield a crisis?
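The century figure is just the satellite-era rate extrapolated linearly, which is easy to check:

```python
# Linear extrapolation of the cited satellite rate; no acceleration assumed.
MM_PER_INCH = 25.4
rate_mm_per_yr = 3.3
rise_mm = rate_mm_per_yr * 100  # one century
print(f"{rise_mm:.0f} mm = {rise_mm / MM_PER_INCH:.0f} inches per century")
# -> 330 mm = 13 inches per century
```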
The data reported by the National Oceanic and Atmospheric Administration show that temperatures have risen and fallen since 1850, with an overall upward movement of about 1 degree C for 1850 through 2020. The 1910-1945 warming—which was very roughly the same magnitude as that observed from the mid-1970s through about 2000—is of particular interest in that it cannot be explained by higher greenhouse-gas concentrations, which increased from 300 parts per million to 310 parts per million over that period. This reinforces the commonsense observation that temperatures result from some combination of natural and anthropogenic influences, but alarmist reports seldom if ever suggest that there is any cause of warming other than the latter.
Changes in the extents of Arctic and Antarctic sea ice also raise questions about the importance of moderate warming. Since 1979, Arctic sea ice has declined relative to the 30-year average (again, the degree to which this is the result of anthropogenic factors is not known). Meanwhile, Antarctic sea ice has been growing relative to the 30-year average, and the global sea-ice total has remained roughly constant since 1979.
It is important to recognize that the assumption of many politicians, environmental groups, media sources like the New York Times, and no small number of scientist-activists—that humans are the single most significant cause of climate change—is unsupported by the available science. Such an absence of evidence should produce humility among this group, but it seems to foster more alarmism. At the very least, it should make Americans think twice before embracing radical solutions to a supposed problem that may not be important.
Spatial pattern of trends in Gross Primary Production (1982- 2015). Source: Sun et al. 2018.
Much of the mainstream press has touted, loudly, the alarmist conclusions of the latest IPCC report—amusingly, the IPCC AR6 provides a “Headline Statements” link to assist alarmist politicians and media—but that reporting has obscured its problems, very real and very large. The report concedes that the mainstream climate models on the whole overstate warming for any given set of parameters and assumptions, but it then relies on those models for predictions about the future.
Figure 8: Warming in the tropical troposphere according to the CMIP6 models. Trends 1979–2014 (except the rightmost model, which is to 2007), for 20°N–20°S, 300–200 hPa. John Christy (2019)
The report concedes as well that the most extreme among its alternative scenarios— “RCP8.5”—has a likelihood that “is considered low,” but then uses RCP8.5 more than any other scenario. The report pretends to measure separately the magnitudes of natural and anthropogenic warming, but in reality does not do so (see Figure SPM.2); instead, it assumes away natural influences, which are asserted to have an average effect of zero. The IPCC models in summary have only two important variables: greenhouse gases, which have a warming effect, and sulfate aerosols, which have a cooling effect. That is why the IPCC models cannot explain the warming observed from 1910-1945. IPCC assumes, but does not attempt to model, a zero effect of sunlight variation, volcanic eruptions, and other such natural phenomena.
The fact is that we don’t understand all the elements in the complex climate system—the effects of clouds alone are understood poorly—and it is beyond irresponsible to adopt policies on the basis of flawed model projections that would slow economic growth in the US and elsewhere. That is a senseless and dangerous policy, which will only hurt people around the world who are striving to create better lives for themselves and their families.
Figure 5. CO2 emissions (a) and concentrations (b), anthropogenic radiative forcing (c), and global mean temperature change (d) for the three long-term extensions. As in Fig. 3, concentration, forcing, and temperature outcomes are calculated with a simple climate model (MAGICC version 6.8.01 BETA; Meinshausen et al., 2011a, b). Outcomes for the CMIP5 versions of the long-term extensions of RCP2.6 and RCP8.5 (Meinshausen et al., 2011c), as calculated with the same model, are shown for comparison.
Climate science research and assessments under the umbrella of the Intergovernmental Panel on Climate Change (IPCC) have misused scenarios for more than a decade. Symptoms of misuse have included the treatment of an unrealistic, extreme scenario as the world’s most likely future in the absence of climate policy and the illogical comparison of climate projections across inconsistent global development trajectories.
Reasons why such misuse arose include (a) competing demands for scenarios from users in diverse academic disciplines that ultimately conflated exploratory and policy relevant pathways, (b) the evolving role of the IPCC – which extended its mandate in a way that creates an inter-relationship between literature assessment and literature coordination, (c) unforeseen consequences of employing a temporary approach to scenario development, (d) maintaining research practices that normalize careless use of scenarios, and (e) the inherent complexity and technicality of scenarios in model-based research and in support of policy.
Consequently, much of the climate research community is presently off-track from scientific coherence and policy-relevance.
Attempts to address scenario misuse within the community have thus far not worked. The result has been the widespread production of myopic or misleading perspectives on future climate change and climate policy. Until reform is implemented, we can expect the production of such perspectives to continue, threatening the overall credibility of the IPCC and associated climate research. However, because many aspects of climate change discourse are contingent on scenarios, there is considerable momentum that will make such a course correction difficult and contested – even as efforts to improve scenarios have informed research that will be included in the IPCC 6th Assessment.
Discussion of How Imaginary Scenarios Spoil Attempts to Envision Climate Futures
The article above is paywalled, but a previous post, reprinted below, goes into the background of the role of scenarios in climate modelling and demonstrates the effects by referring to results from the most realistic model, INMCM5.
Roger Pielke Jr. explains that climate models projections are unreliable because they are based on scenarios no longer bounded by reality. His article is The Unstoppable Momentum of Outdated Science. Excerpts in italics with my bolds.
Much of climate research is focused on implausible scenarios of the future, but implementing a course correction will be difficult.
In 2020, climate research finds itself in a similar situation to that of breast cancer research in 2007. Evidence indicates the scenarios of the future to 2100 that are at the focus of much of climate research have already diverged from the real world and thus offer a poor basis for projecting policy-relevant variables like economic growth and carbon dioxide emissions. A course-correction is needed.
In a new paper of ours just out in Environmental Research Letters we perform the most rigorous evaluation to date of how key variables in climate scenarios compare with data from the real world (specifically, we look at population, economic growth, energy intensity of economic growth and carbon intensity of energy consumption). We also look at how these variables might evolve in the near-term to 2040.
We find that the most commonly-used scenarios in climate research have already diverged significantly from the real world, and that divergence is going to only get larger in coming decades. You can see this visualized in the graph above, which shows carbon dioxide emissions from fossil fuels from 2005, when many scenarios begin, to 2045. The graph shows emissions trajectories projected by the most commonly used climate scenarios (called SSP5-8.5 and RCP8.5, with labels on the right vertical axis), along with other scenario trajectories. Actual emissions to date (dark purple curve) and those of near-term energy outlooks (labeled as EIA, BP and ExxonMobil) all can be found at the very low end of the scenario range, and far below the most commonly used scenarios.
Our paper goes into the technical details, but in short, an important reason for the lower-than-projected carbon dioxide emissions is that economic growth has been slower than expected across the scenarios, and rather than seeing coal use expand dramatically around the world, it has actually declined in many regions.
It is even conceivable, if not likely, that the world passed “peak carbon dioxide emissions” in 2019. Crucially, the projections in the figure above are pre-Covid-19, which means that actual emissions from 2020 to 2045 will be even lower than was projected in 2019.
While it is excellent news that the broader community is beginning to realize that scenarios are increasingly outdated, voluminous amounts of research have been and continue to be produced based on the outdated scenarios. For instance, O’Neill and colleagues find that “many studies” use scenarios that are “unlikely.” In fact, in their literature review such “unlikely” scenarios comprise more than 20% of all scenario applications from 2014 to 2019. They also call for “re-examining the assumptions underlying” the high-end emissions scenarios that are favored in physical climate research, impact studies and economic and policy analyses.
Make no mistake. The momentum of outdated science is powerful. Recognizing that a considerable amount of climate science is outdated is, in the words of the late Steve Rayner, “uncomfortable knowledge”: knowledge which challenges widely held preconceptions. According to Rayner, in such a context we should expect to see reactions to uncomfortable knowledge that include:
denial (that scenarios are off track),
dismissal (the scenarios are off track, but it doesn’t matter),
diversion (the scenarios are off track, but saying so advances the agenda of those opposed to action) and,
displacement (the scenarios are off track but there are perhaps compensating errors elsewhere within scenario assumptions).
Such responses reinforce the momentum of outdated science and make it more difficult to implement a much needed course correction.
Responding to climate change is critically important. So too is upholding the integrity of the science which helps to inform those responses. Identification of a growing divergence between scenarios and the real-world should be seen as an opportunity — to improve both science and policy related to climate — but also to develop new ways for science to be more nimble in getting back on track when research is found to be outdated.
[A previous post is reprinted below since it demonstrates how the scenarios drive forecasting by CMIP6 models, including the example of the best performant model: INMCM5]
Background from Previous Post: Best Climate Model: Mild Warming Forecasted
Links are provided at the end to previous posts describing climate models 4 and 5 from the Institute of Numerical Mathematics in Moscow, Russia. Now we have forecasts for the 21st century published for INM-CM5 in Izvestiya, Atmospheric and Oceanic Physics, volume 56, pages 218–228 (July 7, 2020). The article is Simulation of Possible Future Climate Changes in the 21st Century in the INM-CM5 Climate Model by E. M. Volodin & A. S. Gritsun. Excerpts are in italics with my bolds, along with a contextual comment.
Abstract
Climate changes in 2015–2100 have been simulated with the use of the INM-CM5 climate model following four scenarios: SSP1-2.6, SSP2-4.5, and SSP5-8.5 (single model runs) and SSP3-7.0 (an ensemble of five model runs). Changes in the global mean temperature and spatial distribution of temperature and precipitation are analyzed. The global warming predicted by the INM-CM5 model in the scenarios considered is smaller than that in other CMIP6 models. It is shown that the temperature in the hottest summer month can rise more quickly than the seasonal mean temperature in Russia. An analysis of a change in Arctic sea ice shows no complete Arctic summer ice melting in the 21st century under any model scenario. Changes in the meridional stream function in atmosphere and ocean are studied.
Overview
The climate is understood as the totality of statistical characteristics of the instantaneous states of the atmosphere, ocean, and other climate system components averaged over a long time period.
Therefore, we restrict ourselves to an analysis of some of the most important climate parameters, such as average temperature and precipitation. A more detailed analysis of individual aspects of climate change, such as changes in extreme weather and climate situations, will be the subject of another work. This study is not aimed at a full comparison with the results of other climate models, where calculations follow the same scenarios, since the results of other models had not yet been published in peer-reviewed journals at the time of this writing.
The INM-CM5 climate model [1, 2] is used for the numerical experiments. It differs from the previous version, INMCM4, which was also used for experiments on reproducing climate change in the 21st century [3], in the following:
an aerosol block has been added to the model, which allows inputting anthropogenic emissions of aerosols and their precursors;
the concentrations and optical properties of aerosols are calculated, but not specified, like in the previous version;
the parametrizations of cloud formation and condensation are changed in the atmospheric block;
the upper boundary in the atmospheric block is raised from 30 to 60 km;
the horizontal resolution in the ocean block is doubled along each coordinate; and,
the software related to adaptation to massively parallel computers is improved, which allows the effective use of a larger number of compute cores.
The model resolution in the atmospheric and aerosol blocks is 2° × 1.5° in longitude and latitude and 73 levels and, in the ocean, 0.5° × 0.25° and 40 levels. The calculations were performed at supercomputers of the Joint Supercomputer Center, Russian Academy of Sciences, and Moscow State University, with the use of 360 to 720 cores. The model calculated 6–10 years per 24 h in the above configuration.
Four scenarios were used to model the future climate: SSP1-2.6, SSP2-4.5, SSP3-7.0, and SSP5-8.5. The scenarios are described in [4]. The figure after the abbreviation SSP (Shared Socioeconomic Pathway) is the number of the socioeconomic development pathway (see the values in [4]). The number after the dash is the radiative forcing (W m–2) in 2100 compared to the preindustrial level (a small sketch making this naming convention explicit follows the list below). Thus, the SSP1-2.6 scenario is the most moderate and assumes rapid actions which sharply limit and then almost completely stop anthropogenic emissions. Within this scenario, greenhouse gas concentrations peak in the middle of the 21st century and then slightly decrease by the end of the century. The SSP5-8.5 scenario is the warmest and implies the fastest climate change. The scenarios are recommended for use in the project comparing CMIP6 (Coupled Model Intercomparison Project, Phase 6 [5]) climate models. Each scenario includes the time series of:
carbon dioxide, methane, nitrous oxide, and ozone concentrations;
emissions of anthropogenic aerosols and their precursors;
the concentration of volcanic sulfate aerosol; and
the solar constant.
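Since this naming convention trips up many readers, here is a trivial parser that makes it explicit. The four labels are the scenarios used in the paper; the code itself is illustration only.

```python
def parse_ssp(label):
    """Split an 'SSPx-y.y' label into the pathway number and the
    nominal 2100 radiative forcing (W/m^2 above preindustrial)."""
    pathway, forcing = label[3:].split("-")
    return int(pathway), float(forcing)

for label in ("SSP1-2.6", "SSP2-4.5", "SSP3-7.0", "SSP5-8.5"):
    pathway, forcing = parse_ssp(label)
    print(f"{label}: pathway {pathway}, 2100 forcing {forcing} W/m^2")
```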
One model experiment was carried out for each of the above scenarios. It began at the beginning of 2015 and ended at the end of 2100. The initial state was taken from the so-called historical experiment with the same model, where climate changes were simulated for 1850–2014, and all impacts on the climate system were set according to observations. The results of the ensemble of historical experiments with the model under consideration are given in [6, 7]. For the SSP3-7.0 scenario, five model runs were performed, differing in the initial data taken from different historical experiments. The ensemble of numerical experiments is required to increase the statistical confidence of conclusions about climate changes.
[My Contextual Comment inserted Prior to Consideration of Results]
Firstly, the INM-CM5 historical experiment can be read in detail by following a linked post (see Resources at the end), but this graphic summarizes the model hindcasting of past temperatures (GMT) compared to HadCRUTv4.
Figure 1. The 5-year mean GMST (K) anomaly with respect to 1850–1899 for HadCRUTv4 (thick solid black); model mean (thick solid red). Dashed thin lines represent data from individual model runs: 1 – purple, 2 – dark blue, 3 – blue, 4 – green, 5 – yellow, 6 – orange, 7 – magenta. In this and the next figures numbers on the time axis indicate the first year of the 5-year mean.
Secondly, the scenarios are important to understand since they stipulate data inputs the model must accept as conditions for producing forecasts according to a particular scenario (set of assumptions). The document with complete details referenced as [4] is The Scenario Model Intercomparison Project (ScenarioMIP) for CMIP6.
All the details are written there but one diagram suggests the implications for the results described below.
Figure 5. CO2 emissions (a) and concentrations (b), anthropogenic radiative forcing (c), and global mean temperature change (d) for the three long-term extensions. As in Fig. 3, concentration, forcing, and temperature outcomes are calculated with a simple climate model (MAGICC version 6.8.01 BETA; Meinshausen et al., 2011a, b). Outcomes for the CMIP5 versions of the long-term extensions of RCP2.6 and RCP8.5 (Meinshausen et al., 2011c), as calculated with the same model, are shown for comparison.
As shown, SSP1-2.6 is virtually the same scenario as the former RCP2.6, while SSP5-8.5 is virtually the same as RCP8.5, the wildly improbable scenario (impossible according to some analysts). Note that fossil fuel CO2 emissions are assumed to quadruple in the next 80 years, with atmospheric CO2 rising from 400 to 1000 ppm (+150%). Bear these suppositions in mind when considering the INMCM5 forecasts below.
Results [Continuing From Volodin and Gritsun]
Fig. 1. Changes in the global average surface temperature (K) with respect to the pre-industrial level in experiments according to the SSP1-2.6 (triangles), SSP2-4.5 (squares), SSP3-7.0 (crosses), and SSP5-8.5 (circles) scenarios.
Let us describe some simulation results of climate change in the 21st century. Figure 1 shows the change in the globally averaged surface air temperature with respect to the data of the corresponding historical experiment for 1850–1899. In the warmest SSP5-8.5 scenario (circles), the temperature rises by more than 4° by the end of the 21st century. In the SSP3-7.0 scenario (crosses), different members of the ensemble show warming by 3.4°–3.6°. In the SSP2-4.5 scenario (squares), the temperature increases by about 2.4°. According to the SSP1-2.6 scenario (triangles), the maximal warming of ~1.7° occurs in the middle of the 21st century, and the temperature exceeds the preindustrial temperature by 1.4° by the end of the century.
[My comment: Note that the vertical scale starts with +1.0C, as was seen in the historical experiment. Thus an anomaly of 1.4C by 2100 is an increase of only 0.4C over the present, while the SSP2-4.5 result adds 1.4C to the present].
The results for other CMIP6 models have not yet been published in peer-reviewed journals. However, according to the preliminary analysis (see, e.g. https://cmip6workshop19.sciencesconf.org/ data/Session1_PosterSlides.pdf, p.29), the INM-CM5 model shows the lowest temperature increase among the CMIP6 models considered for all the scenarios due to the minimal equilibrium sensitivity to the CO2 concentration doubling, which is ~2.1° for the current model version, like for the previous version, despite new condensation and cloud formation blocks. [For more on CMIP6 comparisons see post Climate Models: Good, Bad and Ugly]
Fig. 2. Differences between the annual average surface air temperatures (K) in 2071–2100 and 1981–2010 for the (a) SSP5-8.5 and (b) SSP1-2.6 scenarios.
The changes in the surface air temperature are similar for all scenarios; therefore, we analyze the difference between temperatures in 2071–2100 and 1981–2010 under the SSP5-8.5 and SSP1-2.6 scenarios (Fig. 2). The warming is maximal in the Arctic; it reaches 10° and 3°, respectively. Other features mainly correspond to CMIP5 data [8], including the INMCM4 model, which participates in the comparison. The warming on the continents of the Northern Hemisphere is about 2 times higher than the mean, and the warming in the Southern Hemisphere is noticeably less than in the Northern Hemisphere. The land surface is getting warmer than the ocean surface in all the scenarios except SSP1-2.6, because the greenhouse effect is expected to weaken in the second half of the 21st century in this scenario, and the higher heat capacity of the ocean prevents it from cooling as quickly as the land.
The changes in precipitation in December–February and June–August for the SSP3-7.0 scenario averaged over five members of the ensemble are shown in Fig. 4. All members of the ensemble show an increase in precipitation in the winter in a significant part of middle and high latitudes. In summer, the border between the increase and decrease in precipitation in Eurasia passes mainly around or to the north of 60°. In southern and central Europe, all members of the ensemble show a decrease in precipitation. Precipitation also increases in the region of the summer Asian monsoon, over the equatorial Pacific, due to a decrease in the upwelling and an increase in ocean surface temperature (OST). The distribution of changes in precipitation mainly corresponds to that given in [6, Fig. 12.22] for all CMIP5 models.
The change in the Arctic sea ice area in September, when the ocean ice cover is minimal over the year, is of interest. Figure 5 shows the sea ice area in September 2015–2019 to be 4–6 million km2 in all experiments, which corresponds to the estimate from observations in [11]. The Arctic sea ice does not completely melt in any of the experiments and under any scenario. However, according to [8, Figs. 12.28 and 12.31], many models participating in CMIP6, where the Arctic ice area is similar to that observed at the beginning of the 21st century, show the complete absence of ice by the end of the 21st century, especially under the RCP8.5 scenario, which is similar to SSP5-8.5.
The reason for these differences is the lower equilibrium sensitivity of the INM-CM5 model.
Note that the scatter of data between experiments under different scenarios in the first half of the 21st century is approximately the same as between different members of the ensemble under the SSP3-7.0 scenario and becomes larger only after 2070. The sea ice area values are sorted in accordance with the radiative forcing of the scenarios only after 2090. This indicates the large contribution of natural climate variability into the Arctic ice area. In the SSP1-2.6 experiment, the Arctic ice area at the end of the 21st century approximately corresponds to its area at the beginning of the experiment.
Climate changes can be also traced in the ocean circulation. Figure 6 shows the change in the 5-year averaged intensity of the Atlantic meridional circulation, defined as the maximum of the meridional streamfunction at 32° N. All experiments show a decrease in the intensity of meridional circulation in the 21st century and natural fluctuations against this decrease. The decrease is about 4.5–5 Sv for the SSP5-8.5 scenario, which is close to values obtained in the CMIP5 models [8, Fig. 12.35] under the RCP8.5 scenario. Under milder scenarios, the weakening of the meridional circulation is less pronounced. The reason for this weakening of the meridional circulation in the Atlantic, as far as we know, is not yet fully understood.
Conclusion
Numerical experiments have been carried out to reproduce climate changes in the 21st century according to four scenarios of the CMIP6 program [4, 5], including an ensemble of five experiments under the SSP3-7.0 scenario. The changes in the global mean surface temperature are analyzed. It is shown that the global warming predicted by the INM-CM5 model is the lowest among the currently published CMIP6 model data. The geographical distribution of changes in the temperature and precipitation is considered. According to the model, the temperature in the warmest summer month will increase faster than the summer average temperature in Russia.
None of the experiments show the complete melting of the Arctic ice cover by the end of the 21st century. Some changes in the ocean dynamics, including the flow velocity and the meridional stream function, are analyzed. The changes in the Hadley and Ferrel circulation in the atmosphere are considered.
Roger Pielke Jr. explains that climate model projections are unreliable because they are based on scenarios no longer bounded by reality. His article is The Unstoppable Momentum of Outdated Science. Excerpts in italics with my bolds.
Much of climate research is focused on implausible scenarios of the future, but implementing a course correction will be difficult.
In 2020, climate research finds itself in a situation similar to that of breast cancer research in 2007. Evidence indicates that the scenarios of the future to 2100 at the focus of much climate research have already diverged from the real world, and thus offer a poor basis for projecting policy-relevant variables like economic growth and carbon dioxide emissions. A course correction is needed.
In a new paper of ours just out in Environmental Research Letters we perform the most rigorous evaluation to date of how key variables in climate scenarios compare with data from the real world (specifically, we look at population, economic growth, energy intensity of economic growth and carbon intensity of energy consumption). We also look at how these variables might evolve in the near-term to 2040.
We find that the most commonly used scenarios in climate research have already diverged significantly from the real world, and that divergence is only going to get larger in coming decades. You can see this visualized in the graph above, which shows carbon dioxide emissions from fossil fuels from 2005, when many scenarios begin, to 2045. The graph shows emissions trajectories projected by the most commonly used climate scenarios (called SSP5-8.5 and RCP8.5, with labels on the right vertical axis), along with other scenario trajectories. Actual emissions to date (dark purple curve) and those of near-term energy outlooks (labeled as EIA, BP and ExxonMobil) all fall at the very low end of the scenario range, far below the most commonly used scenarios.
Our paper goes into the technical details, but in short, an important reason for the lower-than-projected carbon dioxide emissions is that economic growth has been slower than expected across the scenarios, and rather than seeing coal use expand dramatically around the world, it has actually declined in many regions.
It is even conceivable, if not likely, that the world passed “peak carbon dioxide emissions” in 2019. Crucially, the projections in the figure above are pre-Covid19, which means that actual emissions from 2020 to 2045 will be even lower than was projected in 2019.
While it is excellent news that the broader community is beginning to realize that scenarios are increasingly outdated, voluminous amounts of research have been and continue to be produced based on the outdated scenarios. For instance, O’Neill and colleagues find that “many studies” use scenarios that are “unlikely.” In fact, in their literature review such “unlikely” scenarios comprise more than 20% of all scenario applications from 2014 to 2019. They also call for “re-examining the assumptions underlying” the high-end emissions scenarios that are favored in physical climate research, impact studies and economic and policy analyses.
Make no mistake. The momentum of outdated science is powerful. Recognizing that a considerable amount of climate science is outdated is, in the words of the late Steve Rayner, “uncomfortable knowledge” — knowledge which challenges widely-held preconceptions. According to Rayner, in such a context we should expect to see reactions to uncomfortable knowledge that include:
denial (that scenarios are off track),
dismissal (the scenarios are off track, but it doesn’t matter),
diversion (the scenarios are off track, but saying so advances the agenda of those opposed to action) and,
displacement (the scenarios are off track but there are perhaps compensating errors elsewhere within scenario assumptions).
Such responses reinforce the momentum of outdated science and make it more difficult to implement a much needed course correction.
Responding to climate change is critically important. So too is upholding the integrity of the science which helps to inform those responses. Identification of a growing divergence between scenarios and the real world should be seen as an opportunity — to improve both science and policy related to climate — but also to develop new ways for science to be more nimble in getting back on track when research is found to be outdated.
[A previous post is reprinted below since it demonstrates how the scenarios drive forecasting by CMIP6 models, including the example of the best-performing model: INMCM5]
Background from Previous Post : Best Climate Model: Mild Warming Forecasted
Links are provided at the end to previous posts describing climate models 4 and 5 from the Institute of Numerical Mathematics in Moscow, Russia. Now we have forecasts for the 21st century published for INM-CM5 in Izvestiya, Atmospheric and Oceanic Physics, volume 56, pages 218–228 (July 7, 2020). The article is Simulation of Possible Future Climate Changes in the 21st Century in the INM-CM5 Climate Model by E. M. Volodin & A. S. Gritsun. Excerpts are in italics with my bolds, along with a contextual comment.
Abstract
Climate changes in 2015–2100 have been simulated with the use of the INM-CM5 climate model following four scenarios: SSP1-2.6, SSP2-4.5, and SSP5-8.5 (single model runs) and SSP3-7.0 (an ensemble of five model runs). Changes in the global mean temperature and spatial distribution of temperature and precipitation are analyzed. The global warming predicted by the INM-CM5 model in the scenarios considered is smaller than that in other CMIP6 models. It is shown that the temperature in the hottest summer month can rise more quickly than the seasonal mean temperature in Russia. An analysis of a change in Arctic sea ice shows no complete Arctic summer ice melting in the 21st century under any model scenario. Changes in the meridional stream function in atmosphere and ocean are studied.
Overview
The climate is understood as the totality of statistical characteristics of the instantaneous states of the atmosphere, ocean, and other climate system components averaged over a long time period.
Therefore, we restrict ourselves to an analysis of some of the most important climate parameters, such as average temperature and precipitation. A more detailed analysis of individual aspects of climate change, such as changes in extreme weather and climate situations, will be the subject of another work. This study is not aimed at a full comparison with the results of other climate models where calculations follow the same scenarios, since the results of other models had not yet been published in peer-reviewed journals at the time of this writing.
The INM-CM5 climate model [1, 2] is used for the numerical experiments. It differs from the previous version, INMCM4, which was also used for experiments on reproducing climate change in the 21st century [3], in the following:
an aerosol block has been added to the model, which allows the input of anthropogenic emissions of aerosols and their precursors;
the concentrations and optical properties of aerosols are calculated rather than prescribed, as they were in the previous version;
the parametrizations of cloud formation and condensation are changed in the atmospheric block;
the upper boundary in the atmospheric block is raised from 30 to 60 km;
the horizontal resolution in the ocean block is doubled along each coordinate; and,
the software related to adaptation to massively parallel computers is improved, which allows the effective use of a larger number of compute cores.
The model resolution in the atmospheric and aerosol blocks is 2° × 1.5° in longitude and latitude, with 73 levels; in the ocean, it is 0.5° × 0.25°, with 40 levels. The calculations were performed on supercomputers of the Joint Supercomputer Center, Russian Academy of Sciences, and Moscow State University, using 360 to 720 cores. In this configuration the model calculated 6–10 years per 24 h.
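[My note: to make the scale of these numbers concrete, here is a back-of-envelope sketch in Python. It is my illustration, not from the paper; it assumes a regular global lon-lat grid, whereas the actual model grids may differ in detail.]

```python
# Grid sizes implied by the stated INM-CM5 resolutions (illustrative only).

def grid_points(dlon_deg, dlat_deg, levels):
    """Cell count for a regular global lon-lat grid with the given spacing."""
    nlon = round(360 / dlon_deg)
    nlat = round(180 / dlat_deg)
    return nlon * nlat * levels

atmos = grid_points(2.0, 1.5, 73)    # 180 x 120 x 73 ~ 1.6 million cells
ocean = grid_points(0.5, 0.25, 40)   # 720 x 720 x 40 ~ 20.7 million cells

print(f"Atmosphere: {atmos:,} cells")
print(f"Ocean:      {ocean:,} cells")
for cores in (360, 720):             # the core counts quoted above
    print(f"{cores} cores -> ~{(atmos + ocean) // cores:,} cells per core")
```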
Four scenarios were used to model the future climate: SSP1-2.6, SSP2-4.5, SSP3-7.0, and SSP5-8.5. The scenarios are described in [4]. The figure after the abbreviation SSP (Shared Socioeconomic Pathway) is the number of the socioeconomic development pathway (see the values in [4]); the number after the dash is the radiative forcing (W/m²) in 2100 relative to the preindustrial level (a parsing sketch of this naming convention follows the list below). Thus, the SSP1-2.6 scenario is the most moderate and assumes rapid actions which sharply limit and then almost completely stop anthropogenic emissions. Within this scenario, greenhouse gas concentrations are maximal in the middle of the 21st century and then slightly decrease by the end of the century. The SSP5-8.5 scenario is the warmest and implies the fastest climate change. The scenarios are recommended for use in the CMIP6 (Coupled Model Intercomparison Project, Phase 6 [5]) model comparison. Each scenario includes the time series of:
carbon dioxide, methane, nitrous oxide, and ozone concentrations;
emissions of anthropogenic aerosols and their precursors;
the concentration of volcanic sulfate aerosol; and
the solar constant.
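[My note: the scenario naming convention described above is mechanical enough to capture in a few lines. A minimal sketch, assuming Python 3.9+; the function name is mine, not anything from CMIP6 tooling.]

```python
# Split a CMIP6 scenario label "SSPx-y.z" into x (Shared Socioeconomic Pathway
# number) and y.z (radiative forcing in W/m^2 in 2100 vs. preindustrial).

def parse_ssp(label: str) -> tuple[int, float]:
    pathway, forcing = label.removeprefix("SSP").split("-")
    return int(pathway), float(forcing)

for label in ("SSP1-2.6", "SSP2-4.5", "SSP3-7.0", "SSP5-8.5"):
    p, f = parse_ssp(label)
    print(f"{label}: pathway {p}, {f} W/m^2 in 2100")
```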
One model experiment was carried out for each of the above scenarios. It began at the beginning of 2015 and ended at the end of 2100. The initial state was taken from the so-called historical experiment with the same model, where climate changes were simulated for 1850–2014 and all impacts on the climate system were set according to observations. The results of the ensemble of historical experiments with this model are given in [6, 7]. For the SSP3-7.0 scenario, five model runs were performed, differing in the initial data taken from different historical experiments. The ensemble of numerical experiments is required to increase the statistical confidence of conclusions about climate changes.
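[My note: a quick sketch of why an ensemble increases statistical confidence. The five warming values are placeholders standing in for the SSP3-7.0 runs, not model output; the point is that the standard error of the ensemble mean shrinks as 1/sqrt(N).]

```python
import statistics

# Placeholder end-of-century warming per run (deg C), NOT actual INM-CM5 output.
runs = [3.4, 3.45, 3.5, 3.55, 3.6]

mean = statistics.mean(runs)
sd = statistics.stdev(runs)          # spread among members (natural variability)
sem = sd / len(runs) ** 0.5          # standard error of the ensemble mean

print(f"ensemble mean {mean:.2f} +/- {sem:.2f} C (member spread {sd:.2f} C)")
```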
[My Contextual Comment inserted Prior to Consideration of Results]
Firstly, the INM-CM5 historical experiment can be read in detail by following a linked post (see Resources at the end), but this graphic summarizes the model's hindcast of past global mean temperatures (GMT) compared to HadCRUTv4.
Figure 1. The 5-year mean GMST (K) anomaly with respect to 1850–1899 for HadCRUTv4 (thick solid black); model mean (thick solid red). Dashed thin lines represent data from individual model runs: 1 – purple, 2 – dark blue, 3 – blue, 4 – green, 5 – yellow, 6 – orange, 7 – magenta. In this and the next figures numbers on the time axis indicate the first year of the 5-year mean.
Secondly, the scenarios are important to understand since they stipulate the data inputs the model must accept as conditions for producing forecasts under a particular scenario (set of assumptions). The document with complete details, referenced as [4], is The Scenario Model Intercomparison Project (ScenarioMIP) for CMIP6.
All the details are written there, but one diagram suggests the implications for the results described below.
Figure 5. CO2 emissions (a) and concentrations (b), anthropogenic radiative forcing (c), and global mean temperature change (d) for the three long-term extensions. As in Fig. 3, concentration, forcing, and temperature outcomes are calculated with a simple climate model (MAGICC version 6.8.01 BETA; Meinshausen et al., 2011a, b). Outcomes for the CMIP5 versions of the long-term extensions of RCP2.6 and RCP8.5 (Meinshausen et al., 2011c), as calculated with the same model, are shown for comparison.
As shown, SSP1-2.6 is virtually the same scenario as the former RCP2.6, while SSP5-8.5 is virtually the same as RCP8.5, the wildly improbable scenario (impossible according to some analysts). Note that fossil-fuel CO2 emissions are assumed to quadruple in the next 80 years, with atmospheric CO2 rising from 400 to 1000 ppm (+150%). Bear these suppositions in mind when considering the INMCM5 forecasts below.
Results [Continuing From Volodin and Gritsun]
Fig. 1. Changes in the global average surface temperature (K) with respect to the pre-industrial level in experiments according to the SSP1-2.6 (triangles), SSP2-4.5 (squares), SSP3-7.0 (crosses), and SSP5-8.5 (circles) scenarios.
Let us describe some simulation results of climate change in the 21st century. Figure 1 shows the change in the globally averaged surface air temperature with respect to the data of the corresponding historical experiment for 1850–1899. In the warmest SSP5-8.5 scenario (circles), the temperature rises by more than 4° by the end of the 21st century. In the SSP3-7.0 scenario (crosses), different members of the ensemble show warming by 3.4°–3.6°. In the SSP2-4.5 scenario (squares), the temperature increases by about 2.4°. According to the SSP1-2.6 scenario (triangles), the maximum warming of ~1.7° occurs in the middle of the 21st century, and the temperature exceeds the preindustrial level by 1.4° at the end of the century.
[My comment: Note that the vertical scale starts at +1.0C, the warming already seen in the historical experiment. Thus an anomaly of 1.4C by 2100 is an increase of only 0.4C over the present, while the SSP2-4.5 result adds 1.4C to the present].
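[My note: the arithmetic in that comment can be made explicit. A small sketch; the ~1.0C of already-realized warming is read off the historical experiment as stated above, and the SSP3-7.0 and SSP5-8.5 entries are representative values I picked from the ranges quoted in the text.]

```python
# Convert "warming vs. 1850-1899 preindustrial" into "additional warming vs. present",
# assuming ~1.0 C has already been realized by the start of the scenario runs.
REALIZED = 1.0  # deg C above preindustrial, per the historical experiment

anomalies_2100 = {"SSP1-2.6": 1.4, "SSP2-4.5": 2.4, "SSP3-7.0": 3.5, "SSP5-8.5": 4.0}

for scenario, total in anomalies_2100.items():
    print(f"{scenario}: {total:.1f} C vs preindustrial = "
          f"{total - REALIZED:.1f} C added to the present")
```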
The results for other CMIP6 models have not yet been published in peer-reviewed journals. However, according to a preliminary analysis (see, e.g., https://cmip6workshop19.sciencesconf.org/data/Session1_PosterSlides.pdf, p. 29), the INM-CM5 model shows the lowest temperature increase among the CMIP6 models considered for all the scenarios, owing to its minimal equilibrium sensitivity to CO2 concentration doubling: ~2.1° for the current model version, the same as for the previous version despite the new condensation and cloud-formation blocks. [For more on CMIP6 comparisons see post Climate Models: Good, Bad and Ugly]
Fig. 2. Differences between the annual average surface air temperatures (K) in 2071–2100 and 1981–2010 for the (a) SSP5-8.5 and (b) SSP1-2.6 scenarios.
The changes in the surface air temperature are similar for all scenarios; therefore, we analyze the difference between temperatures in 2071–2100 and 1981–2010 under the SSP5-8.5 and SSP1-2.6 scenarios (Fig. 2). The warming is maximal in the Arctic, where it reaches 10° and 3°, respectively. Other features mainly correspond to CMIP5 data [8], including the INMCM4 model, which participates in the comparison. The warming on the continents of the Northern Hemisphere is about twice the global mean, and the warming in the Southern Hemisphere is noticeably less than in the Northern Hemisphere. The land surface warms more than the ocean surface in all the scenarios except SSP1-2.6: in that scenario the greenhouse effect is expected to weaken in the second half of the 21st century, and the higher heat capacity of the ocean prevents it from cooling as quickly as the land.
The changes in precipitation in December–February and June–August for the SSP3-7.0 scenario, averaged over five members of the ensemble, are shown in Fig. 4. All members of the ensemble show an increase in winter precipitation over a significant part of the middle and high latitudes. In summer, the boundary between the increase and decrease in precipitation in Eurasia passes mainly around or to the north of 60°. In southern and central Europe, all members of the ensemble show a decrease in precipitation. Precipitation also increases in the region of the summer Asian monsoon and over the equatorial Pacific, the latter due to a decrease in upwelling and an increase in ocean surface temperature (OST). The distribution of changes in precipitation mainly corresponds to that given in [8, Fig. 12.22] for all CMIP5 models.
The change in the Arctic sea ice area in September, when the ocean ice cover is at its annual minimum, is of interest. Figure 5 shows the sea ice area in September 2015–2019 to be 4–6 million km2 in all experiments, which corresponds to the estimate from observations in [11]. The Arctic sea ice does not completely melt in any of the experiments under any scenario. However, according to [8, Figs. 12.28 and 12.31], many models participating in CMIP5 whose Arctic ice area is similar to that observed at the beginning of the 21st century show the complete absence of ice by the end of the 21st century, especially under the RCP8.5 scenario, which is similar to SSP5-8.5.
The reason for these differences is the lower equilibrium sensitivity of the INM-CM5 model.
Note that the scatter of data between experiments under different scenarios in the first half of the 21st century is approximately the same as that between different members of the ensemble under the SSP3-7.0 scenario, and becomes larger only after 2070. Only after 2090 do the sea ice area values sort in accordance with the radiative forcing of the scenarios. This indicates a large contribution of natural climate variability to the Arctic ice area. In the SSP1-2.6 experiment, the Arctic ice area at the end of the 21st century approximately corresponds to its area at the beginning of the experiment.
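[My note: the comparison the authors are making amounts to contrasting two spreads: across scenarios (one run each) versus across ensemble members within one scenario. A hedged sketch with placeholder sea-ice numbers; the 2x threshold is my heuristic, not the paper's.]

```python
import statistics

# Placeholder September sea-ice areas, million km^2 (NOT model output).
ice_by_scenario = [4.8, 4.5, 4.6, 4.2]          # one run each, SSP1-2.6 ... SSP5-8.5
ice_ssp370_members = [4.9, 4.4, 4.7, 4.3, 4.6]  # five SSP3-7.0 runs

s_scen = statistics.stdev(ice_by_scenario)
s_ens = statistics.stdev(ice_ssp370_members)
print(f"spread across scenarios: {s_scen:.2f}; across members: {s_ens:.2f}")
print("forcing signal emerges" if s_scen > 2 * s_ens else "still within natural variability")
```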
Climate changes can also be traced in the ocean circulation. Figure 6 shows the change in the 5-year averaged intensity of the Atlantic meridional circulation, defined as the maximum of the meridional streamfunction at 32° N. All experiments show a decrease in the intensity of the meridional circulation over the 21st century, with natural fluctuations superimposed on this decline. The decrease is about 4.5–5 Sv for the SSP5-8.5 scenario, which is close to the values obtained in the CMIP5 models [8, Fig. 12.35] under the RCP8.5 scenario. Under milder scenarios, the weakening of the meridional circulation is less pronounced. The reason for this weakening of the meridional circulation in the Atlantic is, as far as we know, not yet fully understood.
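[My note: the AMOC index as defined here (the maximum of the meridional streamfunction at 32° N) is easy to compute from a streamfunction field. A minimal numpy sketch; the synthetic field and variable names are mine for illustration, not model output.]

```python
import numpy as np

# psi[depth, lat]: a made-up Atlantic meridional streamfunction (Sv), peaking
# at mid-depth, standing in for real model output.
lat = np.linspace(-30, 70, 101)    # deg N
depth = np.linspace(0, 5000, 50)   # m
psi = 18 * np.sin(np.pi * depth[:, None] / 5000) * np.cos(np.deg2rad(lat[None, :] - 20))

j = np.abs(lat - 32).argmin()      # grid column nearest 32 deg N
amoc_index = psi[:, j].max()       # maximum over depth at that latitude
print(f"AMOC index at 32N: {amoc_index:.1f} Sv")  # ~17.6 Sv for this toy field
```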
Conclusion
Numerical experiments have been carried out to reproduce climate changes in the 21st century according to four scenarios of the CMIP6 program [4, 5], including an ensemble of five experiments under the SSP3-7.0 scenario. The changes in the global mean surface temperature are analyzed. It is shown that the global warming predicted by the INM-CM5 model is the lowest among the currently published CMIP6 model data. The geographical distribution of changes in the temperature and precipitation is considered. According to the model, the temperature in the warmest summer month will increase faster than the summer average temperature in Russia.
None of the experiments show the complete melting of the Arctic ice cover by the end of the 21st century. Some changes in the ocean dynamics, including the flow velocity and the meridional stream function, are analyzed. The changes in the Hadley and Ferrel circulation in the atmosphere are considered.