Intro to Climate Fallacies

First, an example of how classical fallacies of thought derail discussion from any search for meaning. H/T Jim Rose.

Then I took the liberty of changing the discussion topic to climate change by inserting typical claims heard in that context.

Below is a previous post taking a deeper dive into the fallacies that have plagued global warming/climate change discussions for decades.

Background Post: Climatism is a Logic Fail

Two fallacies in particular ensure meaningless public discussion about climate “crisis” or “emergency.” H/T to Terry Oldberg for comments and writings prompting me to post on this topic.

The first corruption is the Fallacy of Equivocation, which appears in numerous climate claims. For instance, “climate change” can mean all observed events in nature, but as defined by the IPCC all such changes are 100% caused by human activities. Similarly, forecasts from climate models are proclaimed to be “predictions” of future disasters, but are renamed “projections” in disclaimers against legal liability. And so on.

A second error in the argument is the Fallacy of Misplaced Concreteness, AKA Reification. This involves mistaking an abstraction for something tangible and real in time and space. We often see this in both spoken and written communications. It can take several forms:

♦ Confusing a word with the thing to which it refers

♦ Confusing an image with the reality it represents

♦ Confusing an idea with something observed to be happening

Examples of Equivocation and Reification from the World of Climate Alarm

“Seeing the wildfires, floods and storms, Mother Nature is not happy with us failing to recognize the challenges facing us.” – Nancy Pelosi

“Mother Nature” is a philosophical construct and has no feelings about people.

“This was the moment when the rise of the oceans began to slow and our planet began to heal …”
– Barack Obama

The ocean and the planet do not respond to someone winning a political party nomination. Nor does a planet experience human sickness and healing.

“If something has never happened before, we are generally safe in assuming it is not going to happen in the future, but the exceptions can kill you, and climate change is one of those exceptions.” – Al Gore

The future is not knowable, and can only be a matter of speculation and opinion.

“The planet is warming because of the growing level of greenhouse gas emissions from human activity. If this trend continues, truly catastrophic consequences are likely to ensue.” – Malcolm Turnbull

Temperature is an intensive property of an object, so the temperature of “the planet” cannot be measured. The likelihood of catastrophic consequences is unknowable. Humans are blamed as guilty by association.

“Anybody who doesn’t see the impact of climate change is really, and I would say, myopic. They don’t see the reality. It’s so evident that we are destroying Mother Earth.” – Juan Manuel Santos

“Climate change” is an abstraction anyone can fill with subjective content. Efforts to safeguard the environment are real, successful and ignored in the rush to alarm.

“Climate change, if unchecked, is an urgent threat to health, food supplies, biodiversity, and livelihoods across the globe.” – John F. Kerry

To the abstraction “Climate Change” are added abstract “threats” and abstract means of “checking” it.

“Climate change is the most severe problem that we are facing today, more serious even than the threat of terrorism.” – David King

Instances of people killed and injured by terrorists are reported daily and are a matter of record, while problems from Climate Change are hypothetical.

 

Corollary: Reality is also that which doesn’t happen, no matter how much we expect it to.

Climate Models Are Built on Fallacies

 

A previous post Chameleon Climate Models described the general issue of whether a model belongs on the bookshelf (theoretically useful) or whether it passes real world filters of relevance, thus qualifying as useful for policy considerations.

Following an interesting discussion on her blog, Dr. Judith Curry has written an important essay on the usefulness and limitations of climate models.

The paper was developed to respond to a request from a group of lawyers wondering how to regard claims based upon climate model outputs. The document is entitled Climate Models and is a great informative read for anyone. Below are excerpts that struck me, shown in italics with my bolds and added images.

Climate model development has followed a pathway mostly driven by scientific curiosity and computational limitations. GCMs were originally designed as a tool to help understand how the climate system works. GCMs are used by researchers to represent aspects of climate that are extremely difficult to observe, experiment with theories in a new way by enabling hitherto infeasible calculations, understand a complex system of equations that would otherwise be impenetrable, and explore the climate system to identify unexpected outcomes. As such, GCMs are an important element of climate research.

Climate models are useful tools for conducting scientific research to understand the climate system. However, the above points support the conclusion that current GCM climate models are not fit for the purpose of attributing the causes of 20th century warming or for predicting global or regional climate change on timescales of decades to centuries, with any high level of confidence. By extension, GCMs are not fit for the purpose of justifying political policies to fundamentally alter world social, economic and energy systems.

It is this application of climate model results that fuels the vociferousness
of the debate surrounding climate models.

Evolution of state-of-the-art Climate Models from the mid 70s to the mid 00s. From IPCC (2007)


The actual equations used in the GCM computer codes are only approximations of
the physical processes that occur in the climate system.

While some of these approximations are highly accurate, others are unavoidably crude. This is because the real processes they represent are either poorly understood or too complex to include in the model given the constraints of the computer system. Of the processes that are most important for climate change, parameterizations related to clouds and precipitation remain the most challenging, and are the greatest source of disagreement among different GCMs.

There are literally thousands of different choices made in the construction of a climate model (e.g. resolution, complexity of the submodels, parameterizations). Each different set of choices produces a different model having different sensitivities. Further, different modeling groups have different focal interests, e.g. long paleoclimate simulations, details of ocean circulations, nuances of the interactions between aerosol particles and clouds, the carbon cycle. These different interests focus their limited computational resources on a particular aspect of simulating the climate system, at the expense of others.


Overview of the structure of a state-of-the-art climate model. See Climate Models Explained by R.G. Brown

Human-caused warming depends not only on how much CO2 is added to the atmosphere, but also on how ‘sensitive’ the climate is to the increased CO2. Climate sensitivity is defined as the global surface warming that occurs when the concentration of carbon dioxide in the atmosphere doubles. If climate sensitivity is high, then we can expect substantial warming in the coming century as emissions continue to increase. If climate sensitivity is low, then future warming will be substantially lower.
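As a rough numerical illustration of what “sensitivity” means, here is a minimal Python sketch. The logarithmic forcing formula (5.35 ln(C/C0), from Myhre et al. 1998) and the sample feedback values are standard textbook inputs I am supplying for illustration; they are not numbers from Curry's essay.

```python
import math

def co2_forcing(c_new, c_ref):
    """Simplified logarithmic CO2 radiative forcing in W/m2 (Myhre et al. 1998)."""
    return 5.35 * math.log(c_new / c_ref)

# Forcing from doubling pre-industrial CO2 (280 -> 560 ppm): about 3.7 W/m2
f2x = co2_forcing(560, 280)

# Equilibrium warming = forcing x sensitivity parameter (K per W/m2).
# Illustrative low vs. high values only:
for sens in (0.5, 1.0):
    print(f"sensitivity {sens} K/(W/m2): warming ~ {f2x * sens:.1f} K")
```

The same doubling of CO2 yields roughly half the warming under the low sensitivity value as under the high one, which is why the sensitivity range matters so much to the policy debate.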

In GCMs, the equilibrium climate sensitivity is an ‘emergent property’
that is not directly calibrated or tuned.

While there has been some narrowing of the range of modeled climate sensitivities over time, models still can be made to yield a wide range of sensitivities by altering model parameterizations. Model versions can be rejected or not, subject to the modelers’ own preconceptions, expectations and biases of the outcome of equilibrium climate sensitivity calculation.

Further, the discrepancy between observational and climate model-based estimates of climate sensitivity is substantial and of significant importance to policymakers. Equilibrium climate sensitivity, and the level of uncertainty in its value, is a key input into the economic models that drive cost-benefit analyses and estimates of the social cost of carbon.

Variations in climate can be caused by external forcing, such as solar variations, volcanic eruptions or changes in atmospheric composition such as an increase in CO2. Climate can also change owing to internal processes within the climate system (internal variability). The best known example of internal climate variability is El Nino/La Nina. Modes of decadal to centennial to millennial internal variability arise from the slow circulations in the oceans. As such, the ocean serves as a ‘flywheel’ on the climate system, storing and releasing heat on long timescales and acting to stabilize the climate. As a result of the time lags and storage of heat in the ocean, the climate system is never in equilibrium.

The combination of uncertainty in the transient climate response (sensitivity) and the uncertainties in the magnitude and phasing of the major modes in natural internal variability preclude an unambiguous separation of externally forced climate variations from natural internal climate variability. If the climate sensitivity is on the low end of the range of estimates, and natural internal variability is on the strong side of the distribution of climate models, different conclusions are drawn about the relative importance of human causes to the 20th century warming.

Figure 5.1. Comparative dynamics of the World Fuel Consumption (WFC) and Global Surface Air Temperature Anomaly (ΔT), 1861-2000. The thin dashed line represents annual ΔT, the bold line—its 13-year smoothing, and the line constructed from rectangles—WFC (in millions of tons of nominal fuel) (Klyashtorin and Lyubushin, 2003). Source: Frolov et al. 2009

Anthropogenic (human-caused) climate change is a theory in which the basic mechanism is well understood, but whose potential magnitude is highly uncertain.

What does the preceding analysis imply for IPCC’s ‘extremely likely’ attribution of anthropogenically caused warming since 1950? Climate models infer that all of the warming since 1950 can be attributed to humans. However, there have been large magnitude variations in global/hemispheric climate on timescales of 30 years, which are the same duration as the late 20th century warming. The IPCC does not have convincing explanations for previous 30 year periods in the 20th century, notably the warming 1910-1945 and the grand hiatus 1945-1975. Further, there is a secular warming trend at least since 1800 (and possibly as long as 400 years) that cannot be explained by CO2, and is only partly explained by volcanic eruptions.

CO2 relation to Temperature is Inconsistent.

Summary

There is growing evidence that climate models are running too hot and that climate sensitivity to CO2 is on the lower end of the range provided by the IPCC. Nevertheless, these lower values of climate sensitivity are not accounted for in IPCC climate model projections of temperature at the end of the 21st century or in estimates of the impact on temperatures of reducing CO2 emissions.

The climate modeling community has been focused on the response of the climate to increased human caused emissions, and the policy community accepts (either explicitly or implicitly) the results of the 21st century GCM simulations as actual predictions. Hence we don’t have a good understanding of the relative climate impacts of the above (natural factors) or their potential impacts on the evolution of the 21st century climate.

Footnote:

There are a series of posts here which apply reality filters to test climate models. The first was Temperatures According to Climate Models, where both hindcasting and forecasting were seen to be flawed.

Others in the Series are:

Sea Level Rise: Just the Facts

Data vs. Models #1: Arctic Warming

Data vs. Models #2: Droughts and Floods

Data vs. Models #3: Disasters

Data vs. Models #4: Climates Changing

Climate Medicine

Climates Don’t Start Wars, People Do


Beware getting sucked into any model, climate or otherwise.

 

Clauser’s Case: GHG Science Wrong, Clouds the Climate Thermostat

This post provides a synopsis of Dr. John Clauser’s Clintel presentation last May. Below are the texts from his slides gathered into an easily readable format. The two principal takeaways are (in my words):

A.  IPCC’s Green House Gas Science is Flawed and Untrustworthy

B.  Clouds are the Thermostat for Earth’s Climate, Not GHGs.

Part I – Climate Change is a Myth

  • The IPCC and its collaborators have been tasked with computer modeling and observationally measuring two very important numbers – the Earth’s so-called power imbalance, and its power-balance feedback-stability strength. They have grossly botched both tasks, in turn, leading them to draw the wrong conclusion.
  • I assert that the IPCC has not proven global warming! On the contrary, observational data are fully consistent with no global warming. Without global warming, there is no climate-change crisis!
  • Their computer modeling (GISS) of the climate is unable to simulate the Earth’s surface temperature history, let alone predict its future.
  • Their computer modeling (GISS) is unable to simulate anywhere near the Earth’s albedo (sunlight reflectivity). The computer simulated sunlight reflected power and associated power imbalance error, are typically about fourteen times bigger than the claimed measured power imbalance, and about twenty five times bigger than the claimed measured power imbalance error range.
  • The IPCC’s observational data are wildly self-inconsistent and/or are fully consistent with no global warming.
  • The IPCC’s observational data claim an albedo for cloudy skies that is inconsistent with direct measurements by a factor of two. Alternatively, their data significantly violate conservation of energy.
  • Scientists performing the power-balance measurements admit that the available methodologies are incapable of measuring a net power imbalance with anywhere near the desired accuracy. This difficulty is due to huge temporal and spatial fluctuations of the imbalance, along with gross under-sampling of the data.
  • The observational data they report are self-inconsistent and are visibly dishonestly fudged to claim warming. The fudged final reported values, herein highlighted and exposed, are an example of the proverbial proliferation of bad pennies.
  • NOAA’s claims that there is an observed increase in extreme weather events are bogus. Their own published data disprove their own arguments. A 100 year history of extreme weather event frequency, plotted frontwards in time is virtually indistinguishable from the same historical data plotted backwards in time.
  • In Part II, I present the cloud-thermostat feedback mechanism. My new mechanism dominantly controls and stabilizes the Earth’s climate and temperature. The IPCC has not previously considered this mechanism. The IPCC ignores cloud-cover variability.

The IPCC’s two sacred tasks – both botched!

  1. The IPCC and its collaborators have been tasked with computer modeling and observationally measuring two very important numbers – the Earth’s so-called power imbalance, and its power-balance feedback-stability strength.
  2. The Earth’s net power imbalance is its sunlight heating power (its power-IN), minus its two components of cooling power – reflected sunlight and reradiated infrared power (its power-OUT).
  3. Based on their claimed power imbalance and global-warming assertion, the IPCC and its collaborators assemble a house of cards argument that forebodes an impending climate change apocalypse/ catastrophe.
  4. Additionally, the IPCC and its contributors calculate the strength of naturally occurring feedback mechanisms that presently stabilize the Earth’s temperature and climate
  5. They claim only marginal effectiveness for these mechanisms, and correspondingly assert that there is a “tipping point”, whereinafter further added greenhouse gasses catastrophically cause what amounts to a thermal-runaway of the Earth’s temperature.
  6. The IPCC scapegoats atmospheric greenhouse gasses as the cause of global warming, and further mandates that trillions of dollars must be spent to stop greenhouse gas release into the environment with a so-called “zero-carbon” policy.
  7. The IPCC also mandates multi-trillion dollar per year geoengineering projects including Solar Radiation Management Systems to stabilize the Earth’s climate and CO2 capture projects to reduce the atmospheric CO2 levels.
  8. I assert that the IPCC and its contributors have not proven global warming, whereupon their house of cards collapses.
  9. My cloud thermostat mechanism’s net feedback “strength” (the IPCC’s 2nd sacred task to estimate) is anywhere from -5.7 to -12.7 W/m2/K (depending on the assumed cloud albedo, 0.36 vs. 0.8), compared to the IPCC’s botched best estimate for their mechanisms of -1.1 W/m2/K. My mechanism’s overwhelmingly dominant strength confirms that it is the dominant feedback mechanism controlling the Earth’s climate.
  10. Correspondingly, I confidently assert that the climate crisis is a colossal trillion-dollar hoax.

The IPCC’s computer modeling uses flawed physics to estimate the Earth’s temperature history

• The above graph is copied from [AR5, (IPCC, 2013) Fig 11.25].
• It shows the IPCC’s CMIP5 computer modeling of the Earth’s temperature “anomaly”. The various computed curves display the earth’s predicted (colored) and historical (gray) “temperature anomaly”.
• The solid black curve is the observed temperature anomaly
• Note that all 40+ models are incapable of simulating the Earth’s past temperature history. The total disarray and total lack of reliability among the CMIP5 predictions was first highlighted by Steve Koonin (former Undersecretary for Science in the Obama administration) in his recent book Unsettled: What Climate Science Tells Us, What It Doesn’t, and Why It Matters.
• Something is obviously very wrong with the physics incorporated within the computer models, and their predictions are totally unreliable.
• Albedo is the fraction of sunlight power that is directly reflected by the Earth back out into space (OSR=100 W/m2 portion of power-OUT)


• The above Figure, copied from Stephens et al. (2015), shows the IPCC’s CMIP5 computer modeling (colored curves) of the Earth’s mean annual albedo temporal variation. The solid black curve is the Earth’s albedo measured by satellite radiometry. (The variation is not sinusoidal.)
• The added scale shows the associated reflected sunlight power. It assumes a constant solar irradiance – 340 W/m2
• Note that the IPCC’s computer modeling is grossly incapable of simulating the observed Earth’s reflected power, and especially incapable of simulating that power’s dramatic temporal fluctuation.
• The power’s actual annual variation is much greater than is shown by this Figure, by about 18 W/m2, due to the ellipticity of the Earth’s orbit and the associated sinusoidal temporal variation of the so-called solar constant.
• Despite more than 10 W/m2 gross errors in the computer simulation’s calculated reflected power, as is shown on the Figure, the IPCC [AR6 (2021)] still claims that it has computer simulated and precisely measured this power, yielding an imbalance that is equal to 0.7 ± 0.2 W/m2 – Huh?

The IPCC’s observational data are consistent with NO global warming

• Power-IN is the sunlight power incident on the Earth. The IPCC and climate scientists call it Short Wavelength (SW) Radiation. It is about 340 Watts per square meter of the Earth’s surface area. (It is not actually constant, but varies ± 9 W/m2.)
• Power-OUT has two components:
• One component is the sunlight energy that is directly reflected by the Earth back out into space, whereinafter it can no longer heat the planet. That component is claimed by the IPCC to be about 100 W/m2.
• The other component is the far-infrared heat radiated into space by a hot planet. It is claimed to be about 240 W/m2. The IPCC calls the far-infrared heat radiation component, Long Wavelength (LW) Radiation.
• Measuring the power imbalance consists of measuring power-IN, measuring power-OUT and subtracting. Simple enough? Not really. The problem is that power-IN and power-OUT are huge numbers, and that the difference between them is minuscule – 0.2% of power-IN. That minuscule difference is the net imbalance that is sought, both experimentally and theoretically.

Unfortunately, it is so small that it is very difficult, if not impossible, to measure to the desired accuracy, 0.1 W/m2, or 0.03% of power-IN. It is much tougher to measure when power-IN and power-OUT are both also hugely varying in a seemingly random irreproducible fashion. Large variations occur both in time and in space over the surface of the Earth. As noted in a previous slide, this grossly under-sampled fluctuation is about 28 W/m2, compared with the IPCC’s claimed imbalance, 0.7 ± 0.2 W/m2.
• A variety of methods has been employed to measure these powers. They include satellite radiometry, (the ERBE, and CERES Terra and Aqua satellites), ocean heat content (OHC) measured using the ARGO buoy chain and XBT water sampling by ships, and finally by ground sunlight observations using the Baseline Surface Radiation Network (BSRN).
• The various measured values are all in wild disagreement with each other. Importantly, none of the reported data actually show a convincing net warming power imbalance. Importantly, much of the reported data are totally fudged in a manner that dishonestly changes them from showing no warming to showing warming!
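To see why the slides call this measurement nearly impossible, here is a back-of-envelope check in Python. It merely rearranges the round numbers quoted above; no new data are introduced.

```python
power_in = 340.0          # W/m2, incoming sunlight (SW)
claimed_imbalance = 0.7   # W/m2, IPCC's claimed net imbalance
target_accuracy = 0.1     # W/m2, desired measurement accuracy
fluctuation = 28.0        # W/m2, under-sampled variability quoted above

print(f"signal is {claimed_imbalance / power_in:.2%} of power-IN")         # ~0.21%
print(f"target accuracy is {target_accuracy / power_in:.2%} of power-IN")  # ~0.03%
print(f"fluctuation is {fluctuation / claimed_imbalance:.0f}x the signal") # 40x
```

The claimed signal is a fifth of a percent of the gross flows, and the sampling noise is roughly forty times the signal, which is the crux of the under-sampling complaint.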

AR6 Power-flow Diagrams

Critiques of Power-Flow Diagrams by Trenberth et al. (2010, 2014)

• Satellites measure the Top of Atmosphere energy balance, while Ocean Heat Content data apply to the surface energy balance. One may legitimately mix power-flux data at the two different altitudes, if and only if one fully understands all of the power-flow processes in the atmosphere that occur between the surface and the Top of Atmosphere. If the latter requirement is not true, then one ends up with an “apples to oranges” comparison.
• Trenberth et al. (2010, 2014) are highly critical of Loeb, Stephens, L’Ecuyer, and Hansen’s claimed “understanding” of the associated connection between the power flows at these two altitudes.
• Trenberth and Fasullo (2010) point to a huge “missing energy” indicated by the difference between the satellite data and the OHC data power-imbalance calculations, and specifically ask “Where exactly does the energy go?”
• Hansen et al. (2011) dismiss Trenberth and Fasullo’s alleged missing energy as being simply due to satellite calibration errors.
• Trenberth Fasullo and Balmesada (2014) further note that despite various considerations of the surface power balance, significant unresolved discrepancies remain, and they are skeptical of the power imbalance claims.
• In effect, Trenberth et al. are the earliest “whistleblowers” on the above-mentioned data fudges.

Part I – The Climate Change Myth – Conclusions

1. The IPCC and its contributors claim the Earth has a net-warming energy imbalance. I show here that those claims are false.
2. The IPCC bases its claims on computer modeling of the Earth’s atmosphere, and on observational data from a variety of observational modalities. Both the computer models and the observational data are grossly flawed, and fudged.
3. The IPCC’s computer modeling and its predictions are totally unreliable. There is something clearly very wrong with the physics incorporated within these computer models. Since the computer models can’t even explain the past, why should anyone trust their prediction for the future?
4. Not one of the observational modalities for measuring the Earth’s power imbalance convincingly shows net global warming.
5. I show where various observers and the IPCC have dishonestly fudged their reported data, and have dishonestly changed it from showing No Warming, to showing Warming. Crucially important data fudges are revealed here and highlighted in red. If you don’t believe me, check my arithmetic.
6. The IPCC and NOAA further claim that the purported power imbalance has already caused an increase in dangerous extreme weather events. NOAA’s own data disprove their own claims.
7. I thus offer Great News. Despite what you may have heard from the IPCC and others, there is no real climate crisis! The planet is NOT in peril!
8. The IPCC’s (and NOAA’s) claims are a hoax. Trillions of dollars are being wasted.

Part II – The cloud thermostat 

1. So what is really happening? Why is the earth’s climate actually as stable as it really is?
2. The cloud thermostat mechanism is clearly the overwhelmingly dominant feedback mechanism that controls and stabilizes the Earth’s climate and temperature. It thereby prevents global warming and climate change.
3. The cloud-thermostat mechanism provides very powerful feedback that stabilizes the Earth’s climate and temperature. Its great strength derives from the observed large fluctuation of the Earth’s power imbalance.
4. The mechanism gains its strength from the Earth’s observed very large cloud-cover variation. The power imbalance is actually observed to be continuously strongly fluctuating by anywhere between 18 to 55 W/m2.
5. Clouds modulate the outgoing Shortwave power and therefore control the Earth’s power imbalance, minimally with an 18 W/m2 available power range (ignoring the added 18 W/m2 solar-constant variation), which is minimally 26 times the IPCC’s 0.7 W/m2 claimed power imbalance, and 45 times the IPCC’s ± 0.2 W/m2 power imbalance error range.
6. The above numbers use the IPCC’s assumed data parameters. With more realistic assumptions, the cloud-thermostat mechanism controls the Earth’s power imbalance with a 73 W/m2 available power range, which is 100 times bigger than the IPCC’s 0.7 W/m2 claimed power imbalance, and 180 times bigger than the IPCC’s ± 0.2 W/m2 power-imbalance total error range.
7. This seemingly random fluctuation of the power imbalance is not random at all, but is actually a crucial part of a thermostat-like feedback mechanism that controls and stabilizes the Earth’s climate and temperature. It is observed by King et al. (2013) and by Stephens et al. (2015) to be quasi-periodic.
8. Just like the thermostat in your home, the power-imbalance is never zero. The furnace or AC is always either ON or OFF. The thermostat simply modulates the heating/cooling duty cycle.
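The ratio claims in points 5 and 6 are easy to verify; Clauser elsewhere invites readers to check his arithmetic. Note one assumption of mine: I interpret the ±0.2 W/m2 band as a 0.4 W/m2 total width, since that is what reproduces his quoted multiples.

```python
imbalance = 0.7   # W/m2, IPCC's claimed power imbalance
err_width = 0.4   # W/m2, the +/-0.2 error band taken as a total width

for power_range, label in ((18.0, "IPCC parameters"), (73.0, "higher cloud albedo")):
    print(f"{label}: {power_range / imbalance:.0f}x imbalance, "
          f"{power_range / err_width:.0f}x error range")
```

The 18 W/m2 case yields 26x and 45x as stated; the 73 W/m2 case yields 104x and 182x, matching the slide's rounded “100 times” and “180 times”.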

Features of the cloud thermostat mechanism

1. In preparation for the introduction of this model, I first describe important, underappreciated, but conspicuous properties of clouds – their variability and their strong reflectivity of sunlight (SW radiation).
2. I show that the cloud-thermostat mechanism involves the dominant (73%) use of sunlight energy by the planet.
3. I show that when the cloud-thermostat mechanism is viewed as a form of climate-stabilizing negative feedback, it is by far the most powerful of any such mechanism heretofore considered.
4. The IPCC estimates that the net stabilizing feedback strength of the Earth’s climate, including the destabilizing feedback strength of greenhouse gases, is about -1 W/m2/ºC.
5. I show that the cloud thermostat feedback increases the net natural stabilizing feedback strength to about anywhere between -7 W/m2/ºC and -14 W/m2/ºC, depending on the assumptions used.

There are 5 important take-home messages to be gleaned from these satellite photographs.

1. Clouds reflect dramatically more sunlight than the rest of the planet does!
2. Clouds of all types appear bright white!
3. The photos (along with a large number of careful measurements) strongly suggest that the average cloud reflectivity (of sunlight) is about 0.8 – 0.9. (For comparison, white paper has a reflectivity of ≈ 0.99.) [Wild et al.(2019) claim that cloud reflectivity is 0.36.]
4. The rest of the planet appears much darker than the clouds. The average reflectivity of land (green and brown areas) and ocean (dark blue areas) is ≈ 0.16.
5. Cloud coverage area is highly variable over the Earth.

What does sunlight mostly do when it reaches the Earth’s surface?

• It is commonly believed that sunlight that is absorbed by the Earth’s surface simply warms the surface. That may be true over land. But land represents only about 30% of the surface.
• Oceans cover 70% of the Earth’s surface. Correspondingly, about 70% of incoming sunlight falls on the oceans. Virtually all of the Earth’s exposed water surface occurs in the oceans.
• Following the AR6 power-flow diagram, 160 W/m2 is absorbed by the whole Earth, meaning that roughly 70% × 160 = 112 W/m2 is absorbed by oceans.
• The AR6 power-flow diagram indicates that 82 W/m2 is used for evaporating water, and not for heating the surface.
• Since clouds are mostly produced over the oceans (because that’s where the exposed water is), then 82/112 = 73% of the input energy absorbed by the Earth’s oceans is used, not for warming the Earth, but instead simply for making clouds.
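The 73% figure follows from three numbers in the AR6 power-flow diagram, and the arithmetic can be checked directly:

```python
absorbed_total = 160.0  # W/m2 absorbed at the surface (AR6 power-flow diagram)
ocean_fraction = 0.70   # oceans cover about 70% of the Earth's surface
evaporation = 82.0      # W/m2 spent evaporating water (AR6)

absorbed_by_ocean = ocean_fraction * absorbed_total
share = evaporation / absorbed_by_ocean
print(f"{absorbed_by_ocean:.0f} W/m2 absorbed by oceans; "
      f"{share:.0%} goes to evaporation")
```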

How does the cloud thermostat work?

1. Recall from the IPCC’s AR6 power-flow map that 73% of the input energy absorbed by the Earth’s oceans is used, not for raising the Earth’s surface temperature, but simply for evaporating seawater and making clouds. Recall also that the Earth has a strongly varying cloud cover and albedo.
2. Temperature control of the Earth’s surface by this mechanism works exactly the same way as does a common home thermostat. A thermostat automatically corrects a structure’s temperature in the presence of varying modest heat leaks. For the earth, the presence of significant CO2 in the earth’s atmosphere, manmade or not, provides, in fact, a very small heat leak (at most, about 2 W/m2).  Note that, just like the Earth, the power imbalance for a thermostatically controlled system is never zero. It is always fully heating or fully cooling.
3. How does the cloud thermostat work? When the Earth’s cloud-cover fraction is too high, then the earth’s surface temperature is too low. Why? Clouds produce shadows. Cloudy days are cooler than sunny days. A high cloud-cover fraction equals a highly shadowed area. With reduced sunlight reaching the ocean’s surface and lower temperature, the evaporation rate of seawater is reduced. The cloud production rate over ocean (70% of the earth) is low because sunlight is needed to evaporate seawater. The earth’s too-high cloud-cover fraction obediently starts to decrease. Very quickly, cloud-cover fraction decreases, the temperature increases. The Earth’s cloud-cover fraction is no longer too high. Equilibrium cloud cover and temperature are restored.
4. When the Earth’s cloud-cover fraction is too low and the surface temperature is therefore too high, the reverse process occurs. With low cloud cover, lots of sunlight reaches the ocean surface. The increased sunlit area then evaporates more seawater. The cloud-production rate obediently increases and the cloud-cover fraction is no longer too low. Equilibrium cloud cover and temperature are again restored.
5. Depending on one’s assumption regarding cloud reflectivity (albedo), the cloud thermostat mechanism has anywhere between 18 and 55 W/m2 of power available from cloud-fraction variability to overcome a wimpy 0.7 W/m2 heat leak (allegedly blamed on greenhouse gasses) and to stabilize the Earth’s temperature, no matter what the greenhouse gas atmospheric concentration is!
6. These two fluctuating, opposing processes, when in equilibrium, provide an equilibrium cloud-cover fraction and an equilibrium average temperature. The Earth thus has a built-in thermostat!
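The negative-feedback loop described above can be sketched as a toy relaxation model. All numbers here (equilibrium temperature, feedback gain) are illustrative assumptions, not measured values; the point is only that a temperature anomaly of either sign is driven back toward equilibrium.

```python
# Toy sketch of the cloud-thermostat feedback loop described above.
# T_eq and gain are invented illustrative values, not physical constants.

def step(T, T_eq=288.0, gain=0.5):
    """One feedback iteration: a warm anomaly raises evaporation and cloud
    cover, whose shading cools the surface back toward equilibrium
    (a cold anomaly reverses the loop)."""
    cloud_anomaly = gain * (T - T_eq)   # warmer -> more evaporation -> more cloud
    cooling = cloud_anomaly             # more cloud -> more shadow -> cooling
    return T - cooling

T = 290.0   # start 2 K too warm
for _ in range(20):
    T = step(T)

print(round(T, 4))  # ≈ 288.0, back at equilibrium
```

Starting 2 K too cold converges to the same equilibrium, which is the sense in which the mechanism acts as a thermostat.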

Feedback strength of the cloud thermostat mechanism

1. The resulting cloud-thermostat mechanism’s feedback parameter is now readily evaluated under the two scenarios associated with two choices for cloud albedo. The details of the calculation are shown in Appendix D.
2. Using the AR6 choice for cloud albedo, αClouds = 0.36, we have λClouds ≈ -5.7 W/m2/K, which is 1.7 times larger than (the misnamed) λPlanck, heretofore the strongest feedback term.
3. Alternatively, using the more reasonable choice for cloud albedo, αClouds = 0.8, we have λClouds ≈ -12.7 W/m2/K, which is 3.8 times larger than (the misnamed) λPlanck.
4. These values are plotted as an extension of the AR6 Figure 7.1, which shows the feedback strength for various mechanisms. The total system strength is shown in the left-hand column.
5. Viewed as a temperature-control feedback mechanism, in either scenario, the cloud thermostat has the strongest negative (stabilizing) feedback of any mechanism heretofore considered.
6. It very powerfully controls and stabilizes the Earth’s climate and temperature.
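The quoted ratios can be checked directly. The value of λPlanck is not stated in the text; the ratios imply roughly -3.3 W/m2/K (close to the AR6 Planck response), which is assumed here:

```python
# Checking the quoted feedback ratios. lam_planck = -3.3 W/m^2/K is an
# assumption implied by the text's own ratios, not a value given in it.
lam_planck = -3.3
lam_clouds_ar6 = -5.7    # alpha_clouds = 0.36 (AR6 choice)
lam_clouds_alt = -12.7   # alpha_clouds = 0.8

print(round(lam_clouds_ar6 / lam_planck, 1))  # 1.7
print(round(lam_clouds_alt / lam_planck, 1))  # 3.8
```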

Part II – Conclusions

1. I have introduced here the cloud-thermostat mechanism. It is clearly the overwhelmingly dominant feedback mechanism that controls and stabilizes the Earth’s climate and temperature. It thereby prevents global warming and climate change.
2. The IPCC’s 2021 AR6 report (p.978) claims that climate stabilizing natural feedback mechanisms have a net (total) stabilizing strength of -1.16 ± 0.6 W/m2/K. My cloud feedback mechanism has a net stabilizing strength of anywhere between -5.7 and -12.7 W/m2/K, depending on one’s assumptions regarding the albedo of clouds.
3. My cloud thermostat mechanism provides nature’s own Solar Radiation Management System. This mechanism already exists. It is built into nature’s own cloud factory. It works very well to stabilize the Earth’s temperature on a long-term basis. And, it is free!

“Recommendations for policy makers”

1. There is no climate crisis! There is, however, a very real problem with providing a decent standard of living to the world’s now enormous population. There is indeed an energy shortage crisis. The latter is being unnecessarily exacerbated by what is, in my opinion, incorrect climate science, and by government’s associated muddled response to it.
2. Government and business are currently needlessly spending trillions of dollars on efforts to limit the greenhouse gasses, CO2 and CH4, in the Earth’s atmosphere.
3. CO2 and CH4 are not pollutants. They must be removed from every list of defined pollutants. They have a negligible effect on the climate. Trillions of dollars can be saved by this one simple measure alone! Additionally, the CO2 Coalition points out that atmospheric CO2 is actually beneficial.
4. I recommend that all efforts to limit environmental carbon should be terminated immediately! Trillions of dollars can be saved by eliminating carbon caps, carbon credits, carbon sequestration, carbon footprints, zero-carbon targets, carbon taxes, anti-carbon policies and fossil-fuel limits, in energy policy and elsewhere.

Climatists Deny Natural Warming Factors

After a recent contretemps at Climate Etc. with CO2 warmists, I was again reminded how insistently zero-carbon zealots deny multiple natural climate factors, in order to attribute all modern warming to humans burning hydrocarbons. A large part of this blindness comes from constraints dictated by the IPCC to climate model builders. Simply put, natural causes of warming (and cooling) are systematically excluded from CMIP models for the sake of the narrative blaming humans for all climate activity: “Climate Change is real, dangerous and man-made.” A previous post, reproduced later on, analyzes how models deceive by excluding natural forcings.

Let’s start with a paper that seeks objectively to consider both internal and external climate forcings, including human and natural processes.  The paper by Bokuchava & Semenov was published last October and is behind a paywall at Springer.  An open access copy is here:  Factors of natural climate variability contributing to the Early 20th Century Warming in the Arctic.  Excerpt in italics with my bolds and added images.

Abstract

The warming in the first half of the 20th century in the Northern Hemisphere (NH) (early 20th century warming (ETCW)) was comparable in magnitude to the current warming, but occurred at a time when the growth rate of the greenhouse gas (GG) concentration in the atmosphere was 4–5 times slower than in recent decades. The mechanisms of the early warming are still a subject of discussion. The ETCW was most pronounced in the high latitudes of the NH, and the recent reconstructions consistently indicate a significant negative anomaly of the Arctic sea ice area during early warming period linked with enhanced Atlantic water inflow to the Arctic and amplified warming in high latitudes of the NH.

Assessing the contributions of internal variability and external natural and anthropogenic factors to this climatic anomaly is key for understanding historical and modern climate dynamics. This paper considers mechanisms of ETCW associated with various internal variability and external anthropogenic and natural factors. An analysis of the findings on the topic of long-term studies of climate variations in the NH during the period of instrumental observations does not allow one to attribute the ETCW to one particular mechanism of internal climate variability or external forcing of the climate.

Most likely, this event was caused by a combined effect of long-term climatic fluctuations in the North Atlantic and the North Pacific with a noticeable contribution of external radiative forcing associated with a decrease in volcanic activity, changes in solar activity, and an increase in GG concentration in the atmosphere due to anthropogenic emissions. Furthermore, this climate variation in high latitudes of the NH has been enhanced by a number of positive feedbacks. An overview of existing research is given, as are the main mechanisms of internal and external climate variability in the NH in the early 20th century. Despite the fact that the internal variability of the climate system is apparently the main mechanism that explains the ETCW, the quantitative assessment of the contribution of each factor remains uncertain, since it significantly depends on the initial conditions in the models and the lack of instrumental data in the early 20th century, especially in polar latitudes.

Figure 1. 30-year moving trends in global surface air temperature
(°C / 30 years) according to Berkeley dataset [4]

The main cause of the recent warming is considered to be anthropogenic forcing, primarily the growth of carbon dioxide (CO2) concentration causing a greenhouse effect [5]. But the role of CO2 in the ETCW could not be as important, since this period precedes the accelerating growth of radiative forcing by greenhouse gases (GHG). This GHG increase after the 1950s is also inconsistent with the global SAT decline from the 1940s to the 1970s.
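The 30-year moving trends plotted in Figure 1 are produced by fitting a linear slope to each 30-year window of the annual series. A minimal sketch on a synthetic series (the warming rate is an invented illustrative value):

```python
import numpy as np

# Sketch of the moving-trend computation behind Figure 1: fit a linear
# slope to every 30-year window of an annual temperature series.
def moving_trends(years, temps, window=30):
    trends = []
    for i in range(len(years) - window + 1):
        slope = np.polyfit(years[i:i + window], temps[i:i + window], 1)[0]
        trends.append(slope * window)   # degrees per 30 years
    return np.array(trends)

# Synthetic series warming at 0.01 C/yr: every 30-yr trend is 0.3 C/30yr.
yrs = np.arange(1900, 2001)
t = 0.01 * (yrs - 1900)
print(moving_trends(yrs, t)[:3])  # ≈ [0.3 0.3 0.3]
```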

Numerical experiments with different climate model generations [6,7] show that modern warming is well reproduced when averaged over model ensembles (indicating external influence as a major factor). The ETCW amplitude, despite the increasing accuracy of model simulations, still differs significantly among climate models. This may indicate the important role of internal climate variability [2], as well as uncertainty in the results of model experiments due to incorrectly specified forcing.

The majority of studies [8,9] agree that such a strong warming can be explained by a combination of internal climate-system variability, such as a quasi-periodic oscillation or a random climate fluctuation, superimposed on a background of increasing global temperature associated with external anthropogenic and natural forcings (increased GHG emissions and a pause in volcanic eruptions, in particular).

This paper provides an overview of the existing hypotheses that may explain the ETCW and describes the main mechanisms of internal climate variability during the twentieth century, in particular in the Arctic region.

Figure 2. Average annual SAT (°C) anomalies in the period 1900-2015,
according to Berkeley observational dataset (5-year running mean), global (black curve),
Northern Hemisphere (blue curve), Southern Hemisphere (orange curve),
NH high latitudes (60°-90° N) (red curve), and NH high latitudes
without 5-yr running mean smoothing (gray curve)

Internal variability in the Arctic can be enhanced by positive radiation feedbacks [12], including surface albedo – temperature feedback, which can strongly impact the absorption of solar shortwave radiation. This mechanism manifests itself during prolonged warm periods, mainly in autumn, when a growing ice-free ocean surface with low albedo absorbs more solar radiation and warms the upper ocean layer that leads to further sea ice melting [10]. This positive radiation feedback contributes to the faster temperature increase in the Arctic. This phenomenon is now well-known as “Arctic (or Polar) Amplification”.
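The ice-albedo feedback described above amplifies an initial warming: each increment of warming melts ice, lowers albedo, and adds a further fraction of warming. For a constant feedback fraction f this geometric series sums to an amplification of 1/(1 - f). A sketch with an invented illustrative value of f:

```python
# Toy illustration of the ice-albedo positive feedback: each round of
# warming adds a fraction f of extra warming (f = 0.3 is an invented
# illustrative value). The series sums to an amplification of 1/(1 - f).
f = 0.3        # feedback fraction per round (illustrative)
dT0 = 1.0      # initial forced warming, K
dT = sum(dT0 * f**k for k in range(100))
print(round(dT, 2))  # ≈ 1 / (1 - 0.3) ≈ 1.43
```

A positive feedback with f < 1 amplifies but does not run away, which is consistent with the amplified-yet-bounded Arctic warming described in the text.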

However, other positive feedbacks also play major roles in the Arctic Amplification. There are positive feedbacks related to long-wave radiation, for instance, an increase of water vapor content and cloud cover leads to a greenhouse effect, which is more pronounced at high latitudes [13], as well as dynamic feedbacks, which imply strengthened oceanic and atmospheric heat transfer to the Arctic under conditions of shrinking sea ice extent [14,15].

Arctic Amplification may also be a consequence of non-local mechanisms such as enhanced northward latent heat transfer in the warmer atmosphere [16]. Quasi-periodic fluctuations of North Atlantic sea surface temperature (SST) on a 60-80 year time scale [17] suggest a possible role of oceanic heat transfer as a driver of long-term SAT anomalies in the Arctic that can be enhanced by positive feedbacks [18].

Thus, the amplitude of SST oscillations in the NH polar latitudes can be a combination of both regional response to global climate change and the formation of internal oscillations in the ocean atmosphere system.

Natural internal factors – ocean-atmosphere system variability
Atmosphere circulation variability

Figure 3. Winter Arctic (60°-90°N) SAT anomalies according to
Berkeley observations (5-year running mean) (black curve); NAO index (pink curve),
PNA index (blue curve) according to HadSLP2.0 dataset [25]

The North Atlantic Oscillation (NAO), together with the closely related Arctic Oscillation (AO), is the dominant mode of large-scale winter atmospheric variability in the North Atlantic, characterized by a sea level pressure dipole with one center over Greenland (Icelandic minimum) and another of the opposite sign in the North Atlantic mid-latitudes (Azores maximum). The NAO controls the strength and direction of westerly winds and the position of storm tracks in the North Atlantic sector, thus crucially impacting the European climate [23].

During the first two decades of the 20th century, the positive phase of the NAO was expressed in a stronger-than-usual zonal circulation over the North Atlantic (Fig. 3). The long-term dominance of this atmospheric circulation pattern led to advection of heat to the northeastern part of the North Atlantic. However, the NAO transition to the negative phase after the 1920s, and the general inconsistency between NAO and Arctic SAT variations in the first half of the 20th century, do not support a hypothesis of NAO contribution to the ETCW [24].
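The NAO index described above, in its station-based form, is commonly computed as the standardized sea level pressure difference between the Azores high and the Icelandic low. A sketch on synthetic data (the station means and variances below are invented):

```python
import numpy as np

# Illustrative station-based NAO index: standardized Azores-high SLP
# minus standardized Icelandic-low SLP. Synthetic monthly values (hPa);
# a real index would use observed station records.
rng = np.random.default_rng(0)
slp_azores = 1021 + rng.normal(0, 3, 120)   # invented mean/spread
slp_iceland = 1005 + rng.normal(0, 5, 120)  # invented mean/spread

def standardize(x):
    return (x - x.mean()) / x.std()

nao = standardize(slp_azores) - standardize(slp_iceland)
print(round(nao.mean(), 6))  # ~0 by construction
```

A positive index (strong Azores high, deep Icelandic low) corresponds to the strengthened zonal circulation the excerpt describes for the early 20th century.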

The Pacific North American Oscillation index (PNA) characterizes the pressure gradient between the North Pacific (Aleutian minimum) and the east of North America (Canadian maximum) and is related to fluctuations of the North Pacific zonal flow. An important feature of the PNA in the context of the ETCW is that both (positive and negative) PNA phases may contribute to atmospheric heat advection to the Arctic. In the 1930s and 1950s, the negative phase (Fig. 3) led to the transfer of warm air masses to the pole across the northwestern Pacific Ocean, while the positive phase of the 1940s forced increased zonal transfer to the western coast of Canada and Alaska [8]. The PNA is strongly influenced by the El Niño–Southern Oscillation (ENSO): the positive index phase is associated with El Niño events, and the negative with La Niña events.

Atmospheric circulation in the mid-latitudes of the Pacific Ocean may also depend on fluctuations of the Pacific trade winds [28]. A weakening of the trade winds is manifested in SAT growth in the Pacific mid-latitudes, which coincides on the time scale with the 1910-1940s warming in the high Arctic latitudes, and in the lowering of temperatures during the cooling period between the 1940s and 1970s, when the strength of the trade winds had been increasing.

Ocean circulation variability

Figure 4. Winter Arctic (60°-90°N) SAT anomalies according to
Berkeley dataset (5-year running mean, black curve); AMO index (pink curve),
PDO index (blue curve) according to HadISST2.0 dataset [37]

Arctic Amplification in the 20th century, including the ETCW period, can be associated not only with an increase of atmospheric heat transport, but also with an enhancement of North Atlantic ocean heat inflow from the equatorial latitudes to the extratropical latitudes of the NH [30].

Instrumental data show that SST variability in the North Atlantic during the 20th century was dominated by cyclic fluctuations on time scales of 50-80 years, showing two warm periods in the 1930s-1940s and at the end of the 20th century and two cold periods in the beginning of the century and in the 1960s-1970s. SST oscillations in the North Atlantic are called Atlantic Multidecadal Oscillation (AMO). The observational data also indicate AMO-like cycles in the Arctic SAT (Fig. 4).
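An AMO-style index is commonly computed as area-mean North Atlantic SST anomalies with the linear trend removed, so that the multidecadal oscillation stands out from the background warming. A sketch on synthetic data (the trend rate, oscillation amplitude, and 65-year period are invented illustrative values):

```python
import numpy as np

# Sketch of an AMO-style index: detrend an area-mean SST series so the
# multidecadal cycle survives while the secular trend is removed.
# The series below is synthetic (invented trend and oscillation).
yrs = np.arange(1900, 2001)
sst = 0.008 * (yrs - 1900) + 0.2 * np.sin(2 * np.pi * (yrs - 1900) / 65)

trend = np.polyval(np.polyfit(yrs, sst, 1), yrs)
amo = sst - trend          # detrended anomalies: the oscillation remains
print(round(amo.max(), 2)) # the cycle's amplitude survives detrending
```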

Paleo-reconstructions of AMO [33] demonstrate that strong, low-frequency (60-100 years) SST variability is a robust feature of the North Atlantic climate over the past five centuries. There are also indications of a significant correlation between Arctic sea ice area and the AMO index, including a sharp change during the ETCW period [34].

There is another pronounced mode of internal climate variability that may act synchronously with the AMO: the Pacific Decadal Oscillation (PDO), which reflects variability of Pacific SSTs north of 20° N with a 20-40 year periodicity [35]. The PDO might have played an equally important role in heat advection to the Arctic in the middle of the century. Several current studies [36,29] suggest that the synchronous phase shift of the AMO and PDO largely contributed to the accelerated Arctic warming, both the ongoing warming and the ETCW.

Conclusions

Understanding the mechanisms of the ETCW and the subsequent cooling is key to determining the relative contribution of internal natural variability to global climate change on multi-decadal time scales. Studies of climate changes in high latitudes in the mid-twentieth century allow us to identify a number of possible mechanisms, involving natural variability and positive feedbacks in the Arctic climate system, that may partially explain the ETCW.

Based on the recent literature, it can be concluded that internal oceanic variability, together with the additional impact of natural atmospheric circulation variations, is an important factor for the ETCW. Recently, the number of results indicating the Pacific Ocean as a source of multidecadal fluctuations, both on a global scale and in high latitudes, has increased. However, the assessment of the relative contributions to the ETCW of the Atlantic and Pacific sectors remains uncertain.

Climate model simulations [9,43,44] argue that the internal variability of the ocean-atmosphere system cannot explain the entire amplitude of temperature fluctuations in the first half of the 20th century as a single factor, and must act in combination with external forcings (solar and volcanic activity), positive feedbacks in the Arctic climate system, and anthropogenic factors. Quantifying the contribution of each factor still remains a matter of debate.

Climate Deception:  Models Hide the Paleo Incline

Figure 1. Anthropogenic and natural contributions. (a) Locked scaling factors, weak Pre-Industrial Climate Anomalies (PCA). (b) Free scaling, strong PCA

In 2009, the iconic email from the Climategate leak included a comment by Phil Jones about the “trick” used by Michael Mann to “hide the decline” in his Hockey Stick graph, referring to tree-proxy temperatures cooling rather than warming in modern times. Now we have an important paper demonstrating that climate models insist on man-made global warming only by hiding the incline of natural warming in pre-industrial times. The paper is From Behavioral Climate Models and Millennial Data to AGW Reassessment by Philippe de Larminat. H/T No Tricks Zone. Excerpts in italics with my bolds.

Abstract

Context. The so-called AGW (Anthropogenic Global Warming) is based on thousands of climate simulations indicating that human activity is virtually solely responsible for the recent global warming. The climate models used are derived from the meteorological models used for short-term predictions. They are based on the fundamental and empirical physical laws that govern the myriad of atmospheric and oceanic cells integrated by the finite element technique. Numerical approximations, empiricism and the chaos inherent in fluid circulations make these models questionable for validating the anthropogenic principle, given the accuracy required (better than one per thousand) in determining the Earth’s energy balance.

Aims and methods. The purpose is to quantify and simulate behavioral models of weak complexity, without referring to predefined parameters of the underlying physical laws, but relying exclusively on generally accepted historical and paleoclimate series.

Results. These models perform global temperature simulations that are consistent with those from the more complex physical models. However, the repartition of contributions to the present warming depends strongly on the retained temperature reconstructions, in particular the magnitudes of the Medieval Warm Period and the Little Ice Age. It also depends on the level of the solar activity series. It follows from these observations and climate reconstructions that the anthropogenic principle only holds for climate profiles assuming almost no PCA and no significant variations in solar activity. Otherwise, it reduces to a weak principle whereby global warming is not only the result of human activity, but is largely due to solar activity.

Discussion

GCMs (short for AOGCM: Atmosphere-Ocean General Circulation Model, or for Global Climate Model) are fed by series related to climate drivers. Some are of human origin: fossil fuel combustion, industrial aerosols, changes in land use, condensation trails, etc. Others are of natural origin: solar and volcanic activities, Earth’s orbital parameters, geomagnetism, internal variability generated by atmospheric and oceanic chaos. These drivers, or forcing factors, are expressed in their own units: total solar irradiance (W m–2), atmospheric concentrations of GHG (ppm), optical depth of industrial or volcanic aerosols (dimensionless), oceanic indexes (ENSO, AMO…), or annual growth rates (%). Climate scientists have introduced a metric in order to characterize the relative impact of the different climate drivers on climate change. This metric is that of radiative forcings (RF), designed to quantify climate drivers through their effects on the terrestrial radiation budget at the top of the atmosphere (TOA).
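As a concrete example of converting a driver into a radiative forcing, the widely used simplified expression of Myhre et al. (1998) for CO2 is ΔF = 5.35 ln(C/C0) W/m2. This formula is not from the paper excerpted here; it is cited as the standard illustration of the RF metric:

```python
import math

# Simplified CO2 radiative forcing (Myhre et al. 1998):
# dF = 5.35 * ln(C / C0), in W/m^2, relative to a reference C0.
def co2_forcing(c_ppm, c0_ppm=280.0):
    return 5.35 * math.log(c_ppm / c0_ppm)

print(round(co2_forcing(560), 2))  # doubling CO2: ~3.71 W/m^2
```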

However, independently of the physical units and associated energy properties of the RFs, one can recognize their signatures in the output and deduce their contributions. For example, volcanic eruptions are identifiable events whose contributions can be quantified without reference to either their assumed radiative forcings, or to physical modeling of aerosol diffusion in the atmosphere. Similarly, the Preindustrial Climate Anomalies (PCA) gathering the Medieval Warm Period (MWP) and the Little Ice Age (LIA), shows a profile similar to that of the solar forcing reconstructions. Per the methodology proposed in this paper, the respective contributions of the RF inputs are quantified through behavior models, or black-box models.

Now, Figures 1-a and 1-b present simulations obtained from the models identified under two different sets of assumptions, detailed in sections 6 and 7 respectively.

Figure 1. Anthropogenic and natural contributions. (a) Locked scaling factors, weak Pre-Industrial Climate Anomalies (PCA). (b) Free scaling, strong PCA

In both cases, the overall result for the global temperature simulation (red) fits fairly well with the observations (black).  Curves also show the forcing contributions to modern warming (since 1850). From this perspective, the natural (green) and anthropogenic (blue) contributions are in strong contradiction between panels (a) and (b). This incompatibility is at the heart of our work.

Simulations in panel (a) are calculated per section 6, where the scaling multipliers planned in the model are locked to unity, so that the radiative forcing inputs are constrained to strictly comply with the IPCC quantification. The remaining parameters of the black-box model are adjusted in order to minimize the deviation between the observations (black curve) and the simulated outputs (red). Per these assumptions, the resulting contributions (blue vs. green) comply with the AGW principle. Also, the conformity of the results with those of the CMIP supports the validity of the type of behavioral model adopted for our simulations.
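The black-box fitting idea described here (locked versus free scaling of forcing inputs) can be sketched as a least-squares regression of observed temperature onto forcing series. The series, coefficients, and noise level below are entirely synthetic stand-ins, not the paper's data:

```python
import numpy as np

# Minimal sketch of the 'behavioral model' idea: temperature modeled as
# a scaled sum of forcing series, with scale factors fitted freely (the
# free-scaling case of panel b). All series here are synthetic.
rng = np.random.default_rng(1)
n = 150
f_anthro = np.linspace(0, 2.5, n)        # rising forcing (invented)
f_solar = np.sin(np.linspace(0, 6, n))   # quasi-cyclic forcing (invented)
temps = 0.4 * f_anthro + 0.6 * f_solar + rng.normal(0, 0.02, n)

X = np.column_stack([f_anthro, f_solar])
scales, *_ = np.linalg.lstsq(X, temps, rcond=None)
print(np.round(scales, 1))  # recovers the contributions, ~[0.4 0.6]
```

Locking the scale factors to unity instead (panel a) would correspond to skipping the fit and taking the forcings at their prescribed IPCC magnitudes.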

Paleoclimate Temperatures

Although historically documented, the Medieval Warm Period (MWP) and the Little Ice Age (LIA) command no consensus about their amplitudes and geographic extents [2, 3]. In Fig. 7.1-c of the First Assessment Report of the IPCC, a reconstruction showed a peak PCA amplitude of about 1.2 °C [4]. Later, the reconstruction known as the ‘hockey stick graph’ was reproduced five times in the IPCC Third Assessment Report (2001), wherein there was no longer any significant MWP [5].

After the 2003 controversies, reference to this reconstruction disappeared from subsequent IPCC reports: it is not included among the fifteen paleoclimate reconstructions covering the millennium period listed in the fifth report (AR5, 2013) [6]. Nevertheless, AR6 (2021) revived a hockey-stick reconstruction from a consortium initiated by the network “PAst climate chanGES” (PAGES) [7,8]. The IPCC assures (AR6, 2.3.1.1.2): “this synthesis is generally in agreement with the AR5 assessment”.

Figure 2 below puts this claim into perspective. It shows the fifteen reconstructions covering the preindustrial period accredited by the IPCC in AR5 (2013, Fig. 5.7 to 5.9, and table 5.A.6), compiled (Pangaea database) by [7]. Visibly, the claimed agreement of the PAGES2k reconstruction (blue) with the AR5 green lines does not hold.

Figure 2. Weak and strong preindustrial climate anomalies, respectively from AR5 (2013) in green and AR6 (2021) in blue.

Conclusion

In section 8 above, a set of consistent climate series is explored, from which solar activity appears to be the main driver of climate change. To eradicate this hypothesis, the anthropogenic principle requires four simultaneous assessments:

♦  A strong anthropogenic forcing, able to account for all of the current warming.
♦  A low solar forcing.
♦  A low internal variability.
♦  The nonexistence of significant pre-industrial climate anomalies, which could indeed be explained by strong solar forcing or high internal variability.

None of these conditions is firmly established, either by theoretical knowledge or by historical and paleoclimatic observations. On the contrary, our analysis challenges them through a weak-complexity model, fed by accepted forcing profiles, which are recalibrated owing to climate observations. The simulations show that solar activity contributes to current climate warming in proportions depending on the assessed pre-industrial climate anomalies.

Therefore, adherence to the anthropogenic principle requires that when reconstructing climate data, the Medieval Warming Period and the Little Ice Age be reduced to nothing, and that any series of strongly varying solar forcing be discarded. 

Background on Disappearing Paleo Global Warming

The first graph appeared in the IPCC 1990 First Assessment Report (FAR), credited to H.H. Lamb, first director of CRU-UEA. The second graph, the famous hockey stick credited to M. Mann, was featured in the 2001 IPCC Third Assessment Report (TAR).

Rise and Fall of the Modern Warming Spike


Good and Bad Climate Models Simply Put

Thanks to John Shewchuk of ClimateCraze for explaining simply, in the above video, how climate models are evaluated and why most are untrustworthy. He also explains why the worst-performing model was prized rather than the one closest to the truth. Below is a synopsis of a discussion by Patrick Michaels on the same point.

Background:  Nobel Prize for Worst Climate Model

Patrick J. Michaels reports at Real Clear Policy Nobel Prize Awarded for the Worst Climate Model. Excerpts in italics with my bolds and added images.

Given the persistent headlines about climate change over the years, it’s surprising how long it took the Nobel Committee to award the Physics prize to a climate modeler, which finally occurred earlier this month.

Indeed, Syukuro Manabe has been a pioneer in the development of so-called general circulation climate models (GCMs) and more comprehensive Earth System Models (ESMs). According to the Committee, Manabe was awarded the prize “For the physical modelling of the earth’s climate, quantifying variability, and reliably predicting global warming.”

What Manabe did was to modify early global weather forecasting models, adapting them to long-term increases in human emissions of carbon dioxide that alter the atmosphere’s internal energy balance, resulting in a general warming of surface temperatures, along with a much larger warming of temperatures above the surface over the earth’s vast tropics.

Unlike some climate modelers, such as NASA’s James Hansen (who lit the bonfire of the greenhouse vanities in 1988), Manabe is hardly a publicity hound. And while politics clearly influences it (see Al Gore’s 2007 Prize), the Nobel Committee also respects primacy, as Manabe’s model was the first comprehensive GCM. He produced it at the National Oceanic and Atmospheric Administration’s Geophysical Fluid Dynamics Laboratory (GFDL) in Princeton, NJ. The seminal papers were published in 1975 and 1980.

And, after many modifications and renditions, it is also the most incorrect of all the world’s GCMs at altitude over the vast tropics of the planet.

Getting the tropical temperatures right is critical. The vast majority of the life-giving moisture that falls over the world’s productive midlatitude agrosystems originates as evaporation from the tropical oceans.

The major determinant of how much moisture is wafted into our region is the vertical distribution of tropical temperature. When the contrast is great, with cold temperatures aloft compared to the normally hot surface, that surface air is buoyant and ascends, ultimately transferring moisture to the temperate zones. When the contrast is less, the opposite occurs, and less moisture enters the atmosphere.

Every GCM or ESM predicts that several miles above the tropical surface should be a “hot spot,” where there is much more warming caused by carbon dioxide emissions than at the surface. If this is improperly forecast, then subsequent forecasts of rainfall over the world’s major agricultural regions will be unreliable.

That in turn will affect forecasts of surface temperature. Everyone knows a wet surface heats up (and cools down) slower than a dry one (see: deserts), so getting the moisture input right is critical.

Following Manabe, vast numbers of modelling centers popped up, mushrooms fertilized by public — and only public — money.

Every six years or so, the U.S. Department of Energy collects all of these models, aggregating them into what they call Coupled Model Intercomparison Projects (CMIPs). These serve as the bases for the various “scientific assessments” of climate change produced by the U.N.’s Intergovernmental Panel on Climate Change (IPCC) or the U.S. “National Assessments” of climate.

Figure 8: Warming in the tropical troposphere according to the CMIP6 models. Trends 1979–2014 (except the rightmost model, which is to 2007), for 20°N–20°S, 300–200 hPa. John Christy (2019)

In 2017, University of Alabama’s John Christy, along with Richard McNider, published a paper that, among other things, examined the 25 applicable families of CMIP-5 models, comparing their performance to what’s been observed in the three-dimensional global tropics. Take a close look at Figure 3 from the paper, in the Asia-Pacific Journal of Atmospheric Sciences, and you’ll see that the model GFDL-CM3 is so bad that it is literally off the scale of the graph. [See Climate Models: Good, Bad and Ugly]

At its worst, the GFDL model is predicting approximately five times as much warming as has been observed since the upper-atmospheric data became comprehensive in 1979. This is the most evolved version of the model that won Manabe the Nobel.
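The "five times as much warming" comparison is a ratio of fitted linear trends over the satellite era. A sketch with synthetic stand-in series (the trend rates below are invented for illustration, not the Christy data):

```python
import numpy as np

# Sketch of the model-vs-observation comparison: fit linear trends to a
# modeled and an observed tropical-troposphere series, then take their
# ratio. Both series here are synthetic illustrations.
yrs = np.arange(1979, 2015)
obs = 0.01 * (yrs - 1979)    # illustrative observed warming rate
model = 0.05 * (yrs - 1979)  # a model running 5x too warm (illustrative)

def trend(series):
    return np.polyfit(yrs, series, 1)[0]

print(round(trend(model) / trend(obs), 1))  # ratio ~5.0
```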

In the CMIP-5 model suite, there is one, and only one, that works. It is the model INM-CM4 from the Russian Institute for Numerical Modelling, and the lead author is Evgeny Volodin. It seems that Volodin would be much more deserving of the Nobel for, in the words of the committee “reliably predicting global warming.”

Might this have something to do with the fact that INM-CM4 and its successor models have less predicted warming than all of the other models?

Patrick J. Michaels is a senior fellow working on energy and environment issues at the Competitive Enterprise Institute and author of “Scientocracy: The Tangled Web of Public Science and Public Policy.”

Top Climate Model Improved to Show ENSO Skill

Previous posts (linked at the end) discuss how the climate model from RAS (Russian Academy of Science) has evolved through several versions. The interest arose because of its greater ability to replicate the past temperature history. The model is part of the CMIP program, which is now taking the next step to CMIP7, and it is one of the first models to be tested with a new climate simulation. Improvements in the latest version, INMCM60, show an enhanced ability to replicate ENSO oscillations in the Pacific Ocean, which have significant climate impacts worldwide.

This news comes by way of a new paper published in the Russian Journal of Numerical Analysis and Mathematical Modelling in February 2024. The title is ENSO phase locking, asymmetry and predictability in the INMCM Earth system model, by Seleznev et al. (2024). Excerpts in italics with my bolds and images from the article.

Abstract:

Advanced numerical climate models are known to exhibit biases in simulating some features of the El Niño–Southern Oscillation (ENSO), which is a key mode of inter-annual climate variability. In this study we analyze how two fundamental features of observed ENSO – asymmetry between hot and cold states and phase-locking to the annual cycle – are reflected in two different versions of the INMCM Earth system model (a state-of-the-art Earth system model participating in the Coupled Model Intercomparison Project).

We identify the above ENSO features using conventional empirical orthogonal function (EOF) analysis, which is applied to both observed and simulated upper ocean heat content (OHC) data in the tropical Pacific. We find that the observed tropical Pacific OHC variability is described well by the two leading EOF modes, which roughly reflect the fundamental recharge-discharge mechanism of ENSO. These modes exhibit strong seasonal cycles associated with ENSO phase locking, while the revealed nonlinear dependencies between the amplitudes of these cycles reflect ENSO asymmetry.
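EOF analysis, as used in the excerpt, is principal component analysis of the anomaly field; a minimal version can be written with an SVD of the (time × space) anomaly matrix. The data below are random placeholders, not OHC:

```python
import numpy as np

# Minimal EOF analysis via SVD: columns of vt are the EOF spatial
# patterns, u * s gives the principal-component time series, and the
# squared singular values give the variance explained by each mode.
rng = np.random.default_rng(2)
t, x = 200, 50                       # time steps, spatial points (synthetic)
field = rng.normal(size=(t, x))
anom = field - field.mean(axis=0)    # remove the time mean at each point

u, s, vt = np.linalg.svd(anom, full_matrices=False)
explained = s**2 / (s**2).sum()      # fraction of variance per EOF mode
print(explained[:2].sum() < 1.0)     # leading modes explain part of variance
```

For real OHC data, the two leading modes the authors describe would be the first two rows of `vt`, with `explained[:2]` giving their variance share.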

We also assess and compare predictability of observed and simulated ENSO based on linear inverse modeling. We find that the improved INMCM6 model has significant benefits in simulating the described features of observed ENSO as compared with the previous INMCM5 model. The improvements of the INMCM6 model providing such benefits are discussed. We argue that a proper cloud parametrization scheme is crucial for accurate simulation of ENSO dynamics with numerical climate models.

Introduction

El Niño–Southern Oscillation (ENSO) is the most prominent mode of inter-annual climate variability; it originates in the tropical Pacific but has a global impact [41]. Accurately simulating ENSO is still a challenging task for global climate modelers [3,5,15,25]. In the comprehensive study [35], large-ensemble climate model simulations provided by the Coupled Model Intercomparison Project phases 5 (CMIP5) and 6 (CMIP6) were analyzed. It was found that the CMIP6 models significantly outperform those from CMIP5 for 8 out of 24 ENSO-relevant metrics, especially regarding the simulation of ENSO spatial patterns, diversity and teleconnections. Nevertheless, some important aspects of the observed ENSO are still not satisfactorily simulated by most state-of-the-art models [7,38,49]. In this study we examine how two such aspects – ENSO asymmetry and ENSO phase-locking to the annual cycle – are reflected in the INMCM Earth system model [44, 45].

The asymmetry between hot (El Niño) and cold (La Niña) states is a fundamental feature of observed ENSO [39]. El Niño events are often stronger than La Niña events, while the latter tend to be more persistent [10]. Such an asymmetry is generally attributed to nonlinear feedbacks between sea surface temperatures (SSTs), the thermocline and winds in the tropical Pacific [2,19,28]. Alternative conceptions highlight the role of tropical instability waves [1] and fast atmospheric processes associated with irregular zonal wind anomalies [24]. ENSO phase-locking is identified as the tendency of ENSO events to peak in boreal winter.

Several studies [11,17,34] argue that the phase-locking is associated with seasonal changes in thermocline depth, ocean upwelling velocity, and cloud feedback processes. These processes collectively contribute to the coupling strength modulation between ocean and atmosphere, which, in the context of conceptual ENSO models [4,18], provides seasonal modulation of stability (in the sense of decay rate) of the “ENSO oscillator”. Another theory [20,42] supposes the phase-locking results from nonlinear interactions between the seasonal forcing and the inherent ENSO cycle. Both the asymmetry and phase-locking effects are typically captured by low-dimensional data-driven ENSO models [14, 21, 26, 29, 37].
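The seasonal modulation of “ENSO oscillator” stability described above can be illustrated with a toy model (not from the paper; every parameter below is invented): a noise-driven index whose damping weakens in boreal winter develops its largest variance in winter, i.e., it phase-locks.

```python
import numpy as np

# Toy illustration of phase locking via seasonally modulated stability:
# an AR(1) "ENSO index" whose damping is weakest in boreal winter months,
# so its variance (event amplitude) peaks there. Parameters are invented.
rng = np.random.default_rng(3)
n = 120_000                                        # months of simulation
noise = rng.standard_normal(n)
x = np.zeros(n)
for i in range(1, n):
    month = i % 12
    a = 0.97 if month in (10, 11, 0, 1) else 0.80  # weak damping in winter
    x[i] = a * x[i - 1] + noise[i]

# Variance of the index grouped by calendar month
by_month = [np.var(x[m::12]) for m in range(12)]
winter = np.mean([by_month[m] for m in (11, 0, 1)])
summer = np.mean([by_month[m] for m in (5, 6, 7)])
print(f"winter variance {winter:.1f} vs summer variance {summer:.1f}")
```

The contrast in monthly variance is the toy analogue of events “peaking in boreal winter”.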

In this work we identify the ENSO features discussed above via the analysis of upper ocean heat content (OHC) variability in the tropical Pacific. The recent study [37] analyzed a high-resolution reanalysis dataset of tropical Pacific (10°N–10°S, 120°E–80°W) OHC anomalies in the 0–300 m depth layer using the standard empirical orthogonal function (EOF) decomposition [16]. It was found that observed OHC variability is effectively captured by two leading EOFs, which roughly describe the fundamental recharge-discharge mechanism of ENSO [18]. The time series of the corresponding principal components (PCs) demonstrate strong seasonal cycles, reflecting ENSO phase-locking, while the revealed inter-annual nonlinear dependencies between these cycles can be associated with ENSO asymmetry [37].
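For readers unfamiliar with the method: an EOF decomposition is a principal component analysis of the space-time anomaly field, conveniently computed via SVD. A minimal sketch on synthetic data (the patterns and cycles below are stand-ins, not the reanalysis used in [37]):

```python
import numpy as np

# Illustrative EOF decomposition via SVD. Rows = time samples,
# columns = grid points. The two synthetic modes are stand-ins for the
# recharge-discharge pair described in the text.
rng = np.random.default_rng(0)
t = np.arange(240)                        # 20 years of monthly steps
grid = np.linspace(0, np.pi, 50)
mode1 = np.sin(grid)                      # eastern-Pacific-like pattern
mode2 = np.cos(grid)                      # western-Pacific-like pattern
pc1 = np.sin(2 * np.pi * t / 48)          # ~4-year ENSO-like cycle
pc2 = np.cos(2 * np.pi * t / 48)          # quarter-cycle phase shift
field = (np.outer(pc1, mode1) + np.outer(pc2, mode2)
         + 0.1 * rng.standard_normal((240, 50)))

anom = field - field.mean(axis=0)         # remove time mean at each point
u, s, vt = np.linalg.svd(anom, full_matrices=False)
explained = s**2 / np.sum(s**2)           # fraction of variance per EOF
pcs = u * s                               # principal-component time series
eofs = vt                                 # spatial patterns

print(f"Variance captured by two leading EOFs: {explained[:2].sum():.2f}")
```

With a dominant pair of coherent modes, the two leading EOFs absorb nearly all the variance, mirroring the paper’s finding for observed OHC.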

Here we apply a similar analysis to the OHC data simulated by two different versions of the INMCM Earth system model. The first is the INMCM5 model [45] from CMIP6, and the second is the prospective INMCM6 model [44] with improved parameterization of clouds, large-scale condensation and aerosols. Along with the traditional EOF decomposition we invoke linear inverse modeling to assess and compare the predictability of ENSO from observed and simulated data.
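Linear inverse modeling, the predictability tool invoked here, fits a linear propagator G mapping the state at time t to time t+τ, estimated from lag covariances as G = C(τ)C(0)⁻¹. A hedged sketch on a synthetic two-component state (illustrative only, not the paper’s code):

```python
import numpy as np

# Linear inverse modeling (LIM) sketch. The synthetic state is a damped
# rotation, a stand-in for the two leading ENSO PCs; the true dynamics
# matrix A is invented for illustration.
rng = np.random.default_rng(1)
A = np.array([[0.95, -0.15],
              [0.15,  0.95]])            # stable rotation (damped oscillation)
n = 2000
x = np.zeros((n, 2))
for i in range(1, n):
    x[i] = A @ x[i - 1] + 0.1 * rng.standard_normal(2)

# LIM propagator at lag tau: G = C(tau) @ inv(C(0))
tau = 1
c0 = x[:-tau].T @ x[:-tau] / (n - tau)   # lag-0 covariance
ctau = x[tau:].T @ x[:-tau] / (n - tau)  # lag-tau covariance
G = ctau @ np.linalg.inv(c0)

print("Estimated propagator:\n", np.round(G, 2))
```

The recovered G approximates A; applied to real or simulated PCs, repeated application of G gives the linear forecast whose skill the authors compare across model versions.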

The paper is organized as follows. Sect. 2 describes the datasets we analyze: the OHC reanalysis dataset and OHC data obtained from the ensemble simulations of global climate with two versions of the INMCM model. Data preparation, including separation of the forced and internal variability, is also discussed. The ensemble EOF analysis is presented, which is used for identifying the meaningful processes contributing to observed and simulated ENSO dynamics. Sect. 3 presents the results we obtain in analyzing both observed and simulated OHC data. In Sect. 4 we summarize and discuss the obtained results, particularly regarding the significant benefits of the new version of the INMCM model (INMCM6) in simulating key features of observed ENSO.

Fig. 1: Two leading EOFs of the observed tropical Pacific upper ocean heat content (OHC) variability

Fig. 2: Two leading EOFs of the INMCM5 ensemble of tropical Pacific upper ocean heat content simulations

Fig. 3: The same as in Fig. 2 but for INMCM6 model simulations

The corresponding spatial patterns in Fig. 1 have a clear interpretation. The first contributes to the central and eastern tropical Pacific, where the most significant variations of sea surface temperature (SST) during El Niño/La Niña events occur [9]. The second predominates mainly in the western tropical Pacific and can be associated with the OHC accumulation and discharge before and during El Niño events [48].

What we can see from Fig. 2 is that the two leading EOFs of OHC variability simulated by the INMCM5 model do not correspond to the observed ones. The corresponding time series and spatial patterns exhibit smaller-scale features, as compared to those we obtain from the reanalysis data, indicating their noisier spatio-temporal nature.

The two leading EOFs of the improved INMCM6 model (Fig. 3), by contrast, capture well both the spatial and temporal features of the observed EOFs. In the next section we focus on further analysis of these EOFs, assuming that they contain the most meaningful information about ENSO dynamics.

Discussion

In this study we have analyzed how two different versions of the INMCM model [44,45] (a state-of-the-art Earth system model participating in the Coupled Model Intercomparison Project, CMIP) simulate some features of El Niño–Southern Oscillation (ENSO), a key mode of the global climate. We identified the ENSO features via EOF analysis applied to both observed and simulated upper ocean heat content (OHC) variability in the tropical Pacific. It was found that the observed tropical Pacific OHC variability is captured well by two leading modes (EOFs) which reflect the fundamental recharge-discharge mechanism of ENSO, involving a recharge and discharge of OHC along the equator caused by a disequilibrium between zonal winds and zonal mean thermocline depth. These modes are phase-shifted and exhibit the strong seasonal cycles associated with ENSO phase locking. The inter-annual dependencies between amplitudes of the revealed ENSO seasonal cycles are strongly nonlinear, which reflects the asymmetry between hot (El Niño) and cold (La Niña) states of observed ENSO. We found that the INMCM5 model (the previous version of the INMCM model from CMIP6) poorly reproduces the leading modes of observed ENSO and reflects neither the observed ENSO phase locking nor the asymmetry. At the same time, the prospective INMCM6 model demonstrates significant improvement in simulating these key features of observed ENSO. The analysis of ENSO predictability based on linear inverse modeling indicates that the improved INMCM6 model reflects well the ENSO spring predictability barrier and therefore could potentially have an advantage in long-range weather prediction as compared with the INMCM5.

Such benefits of the new version of the INMCM model (INMCM6) in simulating observed ENSO dynamics can be attributed to a more relevant parametrization of sub-grid scale processes. In particular, the difference between INMCM5 and INMCM6 in the amplitude of the OHC anomaly associated with ENSO, shown in Figs. 2–3, can be explained mainly by the difference in cloud parameterization between these models. In short, in INMCM5 an El Niño event leads to an increase in middle and low clouds over the central and eastern Pacific, which produces cooling through a decrease in incoming surface shortwave radiation.

In INMCM6, by contrast, a decrease in low clouds and an increase in high clouds over the El Niño region during the positive phase of ENSO lead to further upper-ocean warming [43]. This is consistent with the recent study [36], which argued that erroneous cloud feedback arising from a dominant contribution of low-level clouds may bias the heat flux feedback in the tropical Pacific, which plays a key role in ENSO dynamics. The fast decrease in OHC in the central Pacific after the El Niño maximum in INMCM6 probably occurs because the model's equatorial Pacific mixed layer is too shallow, leading to rapid surface cooling once upwelling resumes and the trade winds strengthen. Summarizing the above, we conclude that a proper cloud parameterization scheme is crucial for accurate simulation of observed ENSO with numerical climate models.

Background on INMCM6

New 2023 INMCM RAS Climate Model First Results

The INMCM60 model, like the previous INMCM48 [1], consists of three major components: atmospheric dynamics, aerosol evolution, and ocean dynamics. The atmospheric component incorporates a land model including surface, vegetation, and soil. The oceanic component also encompasses a sea-ice evolution model. In the atmosphere, both versions have a 2° × 1° longitude-by-latitude resolution and 21 vertical levels up to 10 hPa. In the ocean, the resolution is 1° × 0.5° with 40 levels.

The following changes have been introduced into the model compared to INMCM48.

Parameterization of clouds and large-scale condensation is identical to that described in [4], except that the tuning parameters of this parameterization differ from any of the versions outlined in [3], being, however, closest to version 4. The main difference from it is that the cloud water flux forming boundary-layer clouds is estimated not only from considerations of boundary-layer turbulence development, but also from the condition of moist instability, which, under deep convection, results in fewer clouds in the boundary layer and more in the upper troposphere. The equilibrium sensitivity of this version to a doubling of atmospheric CO2 is about 3.3 K.
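The quoted equilibrium sensitivity of about 3.3 K per CO2 doubling can be combined with the standard logarithmic scaling of CO2 forcing to estimate equilibrium warming for an arbitrary concentration ratio. A back-of-envelope sketch (illustrative only, not part of the model description):

```python
import math

# The text quotes an equilibrium sensitivity of ~3.3 K per CO2 doubling
# for this model version. Under the standard logarithmic scaling, the
# equilibrium warming for any concentration ratio follows directly.
def equilibrium_warming(c_ratio, sensitivity_per_doubling=3.3):
    """Equilibrium warming (K) for a CO2 concentration ratio c_ratio."""
    return sensitivity_per_doubling * math.log(c_ratio, 2)

print(equilibrium_warming(2.0))   # doubling: 3.3 K by construction
```

For example, a 50 percent increase in concentration (ratio 1.5) would yield roughly 3.3 × log2(1.5) ≈ 1.9 K at equilibrium under these assumptions.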

The aerosol scheme has also been updated by including a change in the calculation of natural emissions of sulfate aerosol [5] and wet scavenging, as well as the influence of aerosol concentration on the cloud droplet radius, i.e., the first indirect effect [6]. Numerical values of the constants, however, were taken to be a little different from those used in [5]. Additionally, an improved scheme of snow evolution, taking into account refreezing and the calculation of snow albedo [7], was introduced into the model. The calculation of universal functions in the atmospheric boundary layer in stable stratification has also been changed: in the latest model version, such functions allow turbulence even at large gradient Richardson numbers [8].

 

Hot Climate Models Not Fit For Policymaking

Roy Spencer has published a study at Heritage Global Warming: Observations vs. Climate Models.  Excerpts in italics with my bolds.

Summary

Warming of the global climate system over the past half-century has averaged 43 percent less than that produced by computerized climate models used to promote changes in energy policy. In the United States during summer, the observed warming is much weaker than that produced by all 36 climate models surveyed here. While the cause of this relatively benign warming could theoretically be entirely due to humanity’s production of carbon dioxide from fossil-fuel burning, this claim cannot be demonstrated through science. At least some of the measured warming could be natural. Contrary to media reports and environmental organizations’ press releases, global warming offers no justification for carbon-based regulation.

KEY TAKEAWAYS
  1. The observed rate of global warming over the past 50 years has been weaker than that predicted by almost all computerized climate models.
  2. Climate models that guide energy policy do not even conserve energy, a necessary condition for any physically based model of the climate system.
  3. Public policy should be based on climate observations—which are rather unremarkable—rather than climate models that exaggerate climate impacts.

For the purposes of guiding public policy and for adaptation to any climate change that occurs, it is necessary to understand the claims of global warming science as promoted by the United Nations Intergovernmental Panel on Climate Change (IPCC).  When it comes to increases in global average temperature since the 1970s, three questions are pertinent:

  1. Is recent warming of the climate system materially attributable to anthropogenic greenhouse gas emissions, as is usually claimed?
  2. Is the rate of observed warming close to what computer climate models—used to guide public policy—show?
  3. Has the observed rate of warming been sufficient to justify alarm and extensive regulation of CO2 emissions?

While the climate system has warmed somewhat over the past five decades,
the popular perception of a “climate crisis” and resulting calls for economically
significant regulation of CO2 emissions is not supported by science.

Discussion Points

Temperature Change Is Caused by an Imbalance Between Energy Gain and Energy Loss.

Recent Warming of the Climate System Corresponds to a Tiny Energy Imbalance.

Climate Models Assume Energy Balance, but Have Difficulty Achieving It.

Global Warming Theory Says Direct Warming from a Doubling of CO2 Is Only 1.2°C.

Climate Models Produce Too Much Warming.

Climate models are not only used to predict future changes (forecasting), but also to explain past changes (hindcasting). Depending on where temperatures are measured (at the Earth’s surface, in the deep atmosphere, or in the deep ocean), it is generally true that climate models have a history of producing more warming than has been observed in recent decades.

This disparity is not true of all the models, as two models (both Russian) produce warming rates close to what has been observed, but those models are not the ones used to promote the climate crisis narrative. Instead, those producing the greatest amount of climate change usually make their way into, for example, the U.S. National Climate Assessment,  the congressionally mandated evaluation of what global climate models project for climate in the United States.

The best demonstration of the tendency of climate models to overpredict warming is a direct comparison between models and observations for global average surface air temperature, shown in Chart 1.

In this plot, the average of five different observation-based datasets (blue) is compared to the average of 36 climate models taking part in the sixth IPCC Climate Model Intercomparison Project (CMIP6). The models have produced, on average, 43 percent faster warming than has been observed from 1979 to 2022. This is the period of the most rapid increase in global temperatures and anthropogenic greenhouse gas emissions, and also corresponds to the period for which satellite observations exist (described below). This discrepancy between models and observations is seldom mentioned despite the fact that it is, roughly speaking, the average of the models (or even the most extreme models) that is used to promote policy changes in the U.S. and abroad.
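The comparison described here amounts to fitting least-squares warming trends to two series and reporting the excess of the model trend over the observed one. A sketch with synthetic stand-in series (the trend and noise values below are invented, chosen only to mimic a discrepancy of roughly the size discussed; these are not the actual datasets):

```python
import numpy as np

# Synthetic stand-ins: an "observed-like" series with trend ~0.018 K/yr
# and a "model-mean-like" series with trend ~0.026 K/yr, plus small
# interannual noise. All numbers are invented for illustration.
rng = np.random.default_rng(2)
years = np.arange(1979, 2023)
obs = 0.018 * (years - 1979) + 0.05 * rng.standard_normal(years.size)
model_mean = 0.026 * (years - 1979) + 0.05 * rng.standard_normal(years.size)

obs_trend = np.polyfit(years, obs, 1)[0]          # K per year
model_trend = np.polyfit(years, model_mean, 1)[0]
excess = 100 * (model_trend - obs_trend) / obs_trend
print(f"Model trend exceeds observed trend by {excess:.0f}%")
```

With the invented trends above, the excess comes out in the neighborhood of the 43 percent figure quoted in the text.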

Summertime Warming in the United States

While global averages produce the most robust indicator of “global” warming, regional effects are often of more concern to national and regional governments and their citizens. For example, in the United States large increases in summertime heat could affect human health and agricultural crop productivity. But as Chart 2 shows, surface air temperatures during the growing season (June, July, and August) over the 12-state Corn Belt for the past 50 years reveal a large discrepancy between climate models and observations, with all 36 models producing warming rates well above what has been observed and the most extreme model producing seven times too much warming.  

The fact that global food production has increased faster than population growth in the past 60 years suggests that any negative impacts due to climate change have been small. In fact, “global greening” has been documented to be occurring in response to more atmospheric CO2, which enhances both natural plant growth and agricultural productivity, leading to significant agricultural benefits.

These discrepancies between models and observations are never mentioned when climate researchers promote climate models for energy policy decision-making. Instead, they exploit exaggerated model forecasts of climate change to concoct exaggerated claims of a climate crisis.

Global Warming of the Lower Atmosphere

While near-surface air temperatures are clearly important to human activity, the warming experienced over the low atmosphere (approximately the lowest 10 kilometers of the “troposphere,” where the Earth’s weather occurs) is also of interest, especially given the satellite observations of this layer extending back to 1979.

Satellites provide the only source of geographically complete coverage of the Earth, except very close to the North and South Poles.

Chart 3 shows a comparison of the temperature of this layer as produced by 38 climate models (red) and how the same layer has been observed to warm in three radiosonde (weather balloon) datasets (green), three global reanalysis datasets (which use satellites, weather balloons, and aircraft data; black), and three satellite datasets (blue).

Conclusion

Climate models produce too much warming when compared to observations over the past fifty years or so, which is the period of most rapid warming and increases in carbon dioxide in the atmosphere. The discrepancy ranges from over 40 percent for global surface air temperature to about 50 percent for global lower atmospheric temperatures, and up to a factor of two to three for the United States in the summertime. This discrepancy is never mentioned when those same models are used as the basis for policy decisions.

Also not mentioned when discussing climate models is their reliance on the assumption that there are no natural sources of long-term climate change. The models must be “tuned” to produce no climate change, and then a human influence is added in the form of a very small, roughly 1 percent change in the global energy balance. While the resulting model warming is claimed to prove that humans are responsible, clearly this is circular reasoning. It does not necessarily mean that the claim is wrong—only that it is based on faith in assumptions about the natural climate system that cannot be shown to be true from observations.

Finally, possible chaotic internal variations will always lead to uncertainty in both global warming projections and explanation of past changes. Given these uncertainties, policymakers should proceed cautiously and not allow themselves to be influenced by exaggerated claims based on demonstrably faulty climate models.

Roy W. Spencer, PhD, is Principal Research Scientist at the University of Alabama in Huntsville.

 

 

Climate Models Hide the Paleo Incline


In 2009, the iconic email from the Climategate leak included a comment by Phil Jones about the “trick” used by Michael Mann to “hide the decline” in his Hockey Stick graph, referring to tree proxy temperatures cooling rather than warming in modern times. Now we have an important paper demonstrating that climate models insist on man-made global warming only by hiding the incline of natural warming in pre-industrial times. The paper is From Behavioral Climate Models and Millennial Data to AGW Reassessment by Philippe de Larminat. H/T No Tricks Zone. Excerpts in italics with my bolds.

Abstract

Context. The so-called AGW (Anthropogenic Global Warming) is based on thousands of climate simulations indicating that human activity is virtually solely responsible for the recent global warming. The climate models used are derived from the meteorological models used for short-term predictions. They are based on the fundamental and empirical physical laws that govern the myriad of atmospheric and oceanic cells integrated by the finite element technique. Numerical approximations, empiricism and the inherent chaos in fluid circulations make these models questionable for validating the anthropogenic principle, given the accuracy required (better than one per thousand) in determining the Earth's energy balance.

Aims and methods. The purpose is to quantify and simulate behavioral models of weak complexity, without referring to predefined parameters of the underlying physical laws, but relying exclusively on generally accepted historical and paleoclimate series.

Results. These models perform global temperature simulations that are consistent with those from the more complex physical models. However, the repartition of contributions to the present warming depends strongly on the retained temperature reconstructions, in particular the magnitudes of the Medieval Warm Period and the Little Ice Age. It also depends on the level of the solar activity series. It follows from these observations and climate reconstructions that the anthropogenic principle only holds for climate profiles assuming almost no PCA and no significant variations in solar activity. Otherwise, it reduces to a weak principle where global warming is not only the result of human activity, but is largely due to solar activity.

Discussion

GCMs (short for AOGCM: Atmosphere–Ocean General Circulation Model, or Global Climate Model) are fed by series related to climate drivers. Some are of human origin: fossil fuel combustion, industrial aerosols, changes in land use, condensation trails, etc. Others are of natural origin: solar and volcanic activities, the Earth's orbital parameters, geomagnetism, and internal variability generated by atmospheric and oceanic chaos. These drivers, or forcing factors, are expressed in their own units: total solar irradiance (W m–2), atmospheric concentrations of GHG (ppm), optical depth of industrial or volcanic aerosols (dimensionless), oceanic indexes (ENSO, AMO…), or annual growth rates (%). Climate scientists have introduced a metric to characterize the relative impact of the different climate drivers on climate change. This metric is that of radiative forcings (RF), designed to quantify climate drivers through their effects on the terrestrial radiation budget at the top of the atmosphere (TOA).
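As a concrete instance of the RF metric, the widely used simplified expression for CO2 radiative forcing is RF = 5.35 ln(C/C0) W m⁻² (Myhre et al. 1998), which gives about 3.7 W m⁻² for a doubling of concentration:

```python
import math

# Simplified CO2 radiative forcing: RF = 5.35 * ln(C/C0) W m^-2
# (Myhre et al. 1998), shown as a concrete example of the RF metric.
def co2_forcing(c_ppm, c0_ppm=278.0):
    """Radiative forcing (W m^-2) for CO2 concentration c_ppm,
    relative to a pre-industrial baseline c0_ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(round(co2_forcing(2 * 278.0), 2))   # doubling: ~3.71 W m^-2
```

The logarithmic form is why each successive increment of CO2 contributes less forcing than the previous one.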

However, independently of the physical units and associated energy properties of the RFs, one can recognize their signatures in the output and deduce their contributions. For example, volcanic eruptions are identifiable events whose contributions can be quantified without reference either to their assumed radiative forcings or to physical modeling of aerosol diffusion in the atmosphere. Similarly, the Preindustrial Climate Anomalies (PCA), comprising the Medieval Warm Period (MWP) and the Little Ice Age (LIA), show a profile similar to that of the solar forcing reconstructions. Per the methodology proposed in this paper, the respective contributions of the RF inputs are quantified through behavior models, or black-box models.

Now, Figures 1-a and 1-b present simulations obtained from the models identified under two different sets of assumptions, detailed in sections 6 and 7 respectively.

Figure 1. Anthropogenic and natural contributions. (a) Locked scaling factors, weak Pre-Industrial Climate Anomalies (PCA). (b) Free scaling, strong PCA

In both cases, the overall result for the global temperature simulation (red) fits fairly well with the observations (black).  Curves also show the forcing contributions to modern warming (since 1850). From this perspective, the natural (green) and anthropogenic (blue) contributions are in strong contradiction between panels (a) and (b). This incompatibility is at the heart of our work.

Simulations in panel (a) are calculated per section 6, where the scaling multipliers planned in the model are locked to unity, so that the radiative forcing inputs are constrained to strictly comply with the IPCC quantification. The remaining parameters of the black-box model are adjusted in order to minimize the deviation between the observations (black curve) and the simulated outputs (red). Per these assumptions, the resulting contributions (blue vs. green) comply with the AGW principle. Also, the conformity of the results with those of the CMIP supports the validity of the type of behavioral model adopted for our simulations.

Paleoclimate Temperatures

Although historically documented, the Medieval Warm Period (MWP) and the Little Ice Age (LIA) command no consensus about their amplitudes and geographic extent [2, 3]. In Fig. 7.1-c of the First Assessment Report of the IPCC, a reconstruction showed a peak PCA amplitude of about 1.2 °C [4]. Later, the reconstruction known as the ‘hockey stick graph’ was reproduced five times in the IPCC Third Assessment Report (2001), wherein there was no longer any significant MWP [5].

After the 2003 controversies, reference to this reconstruction disappeared from subsequent IPCC reports: it is not included among the fifteen paleoclimate reconstructions covering the millennium period listed in the fifth report (AR5, 2013) [6]. Nevertheless, AR6 (2021) revived a hockey stick reconstruction from a consortium initiated by the network “PAst climate chanGES” (PAGES) [7,8]. The IPCC asserts (AR6, 2.3.1.1.2): “this synthesis is generally in agreement with the AR5 assessment”.

Figure 2 below puts this claim into perspective. It shows the fifteen reconstructions covering the preindustrial period accredited by the IPCC in AR5 (2013, Fig. 5.7 to 5.9, and table 5.A.6), compiled (Pangaea database) by [7]. Visibly, the claimed agreement of the PAGES2k reconstruction (blue) with the AR5 green lines does not hold.

Figure 2. Weak and strong preindustrial climate anomalies, respectively from AR5 (2013) in green and AR6 (2021) in blue.

Conclusion

In section 8 above, a set of consistent climate series is explored, from which solar activity appears to be the main driver of climate change. To exclude this hypothesis, the anthropogenic principle requires four simultaneous assessments:

♦  A strong anthropogenic forcing, able to account for all of the current warming.
♦  A low solar forcing.
♦  A low internal variability.
♦  The nonexistence of significant pre-industrial climate anomalies, which could indeed be explained by strong solar forcing or high internal variability.

None of these conditions is strongly established, neither by theoretical knowledge nor by historical and paleoclimatic observations. On the contrary, our analysis challenges them through a weak-complexity model, fed by accepted forcing profiles, which are recalibrated against climate observations. The simulations show that solar activity contributes to current climate warming in proportions depending on the assessed pre-industrial climate anomalies.

Therefore, adherence to the anthropogenic principle requires that when reconstructing climate data, the Medieval Warming Period and the Little Ice Age be reduced to nothing, and that any series of strongly varying solar forcing be discarded. 

Background on Disappearing Paleo Global Warming

The first graph appeared in the IPCC 1990 First Assessment Report (FAR) credited to H.H.Lamb, first director of CRU-UEA. The second graph was featured in 2001 IPCC Third Assessment Report (TAR) the famous hockey stick credited to M. Mann.

Rise and Fall of the Modern Warming Spike

 

Self Imposed Energy Poverty Coming to Canada

Jock Finlayson describes how climate change policies are depleting Canadians’ financial means in his article Millions of Canadians May Face ‘Energy Poverty’.  Excerpts in italics with my bolds and added images.

The term “energy poverty” is not yet part of day-to-day political debate in Canada, but that’s likely to change in the next few years. In Europe, the high and rising cost of energy has become a political lightning rod in several countries including Britain and France. Something similar may be in store for Canada.

The Trudeau government and some of the provinces are
aggressively pursuing the holy grail of decarbonization.

To achieve this, they’re engineering dramatic increases in carbon and other taxes on fossil fuels and promising to pour vast sums of money into building new electricity generation and transmission infrastructure to help reduce reliance on oil, refined petroleum products, natural gas and coal. Both strategies point to higher energy costs.

Tax advocates say it is a small percentage of GDP. But it is still $10 billion extracted from Canadian households

The Trudeau government has legislated a national minimum carbon tax set to reach $170 per tonne of emissions by 2030, up from $50 in 2022 and $65 currently. Ottawa has also imposed a “clean fuel standard” that will further raise the cost of fuel. These policies are driven by concerns over climate change, which is a risk, to be sure, but so is the prospect of rapidly escalating energy prices for Canadian households and businesses.

Energy poverty arises when households and families must devote a significant fraction of their after-tax income to cover the cost of energy used for transportation, home heating and cooking, and the provision of electricity. In 2022, the United Kingdom government estimated that 13.4 percent of households were in energy poverty, which it defined as needing to spend more than 10 percent of income to cover the cost of directly consumed energy.
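The UK-style definition quoted above reduces to a simple ratio test. A minimal sketch with made-up household figures (not survey data):

```python
# A household is "energy poor" under the UK-style definition quoted in
# the text if direct energy spending exceeds 10% of income. The income
# and spending figures below are invented examples, not survey data.
def energy_poor(income, energy_spend, threshold=0.10):
    return energy_spend / income > threshold

households = [(52_000, 3_100), (38_000, 4_600), (70_000, 5_500)]
flags = [energy_poor(inc, spend) for inc, spend in households]
share = sum(flags) / len(flags)
print(flags, f"-> {share:.0%} of sample in energy poverty")
```

Note the sensitivity of the headline number to the threshold choice; this is one reason the article stresses that there is no single agreed methodology.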

There’s no single agreed methodology for assessing the prevalence of energy poverty. A recent Canadian study reports that in 2017, between 6 percent and 19 percent of Canadian households experienced some form of energy poverty, with an above-average incidence in rural areas, Atlantic Canada and among people living in older single-family homes. If accurate, this finding suggests that many more Canadians will soon become acquainted with the term as taxes on fossil fuels climb and governments impose new regulations affecting the energy efficiency of buildings, vehicles, industrial equipment, appliances and agricultural operations.

Canada is blessed with plentiful and diverse supplies of energy. Over time, we have become an important global producer and exporter of energy, with oil, natural gas and electricity together expected to account for one-quarter of Canada’s merchandise exports in 2023. Canada is also an intensive consumer of energy, in part because of our cold climate, dispersed population and relatively high living standards.

80% of the Other Renewables is solid biomass (wood), which leaves at most 1% of Canadian total energy supply coming from wind and solar.

End-use energy demand in Canada is around 13,000 petajoules. Of this, industry is responsible for about half, followed by transportation, residential buildings, commercial buildings and agriculture. Refined petroleum products—all based on oil—are the largest fuel type consumed in Canada (around 40 percent of the total), followed by natural gas (36 percent) and electricity (16 percent). Biofuels and other smaller sources comprise the rest. These data underscore Canadians’ overwhelming dependence on fossil fuels to meet their energy needs.

Politicians in a hurry to slash greenhouse gas emissions via higher taxes
and more regulations must be alert to the risk that millions of Canadians
could find themselves in energy poverty by the end of the decade.

Jock Finlayson is a Senior Fellow at the Fraser Institute.

See Also Canada Budget Officer Quashes Climate Alarm

 

IPCC Guilty of “Prosecutor’s Fallacy”

IPCC made an illogical argument in a previous report as explained in a new GWPF paper The Prosecutor’s Fallacy and the IPCC Report.  Excerpts in italics with my bolds and added images.

London, 13 September – A new paper from the Global Warming Policy Foundation reveals that the IPCC’s 2013 report contained a remarkable logical fallacy.

The author, Professor Norman Fenton, shows that the authors of the Summary for Policymakers claimed, with 95% certainty, that more than half of the warming observed since 1950 had been caused by man. But as Professor Fenton explains, their logic in reaching this conclusion was fatally flawed.

“Given the observed temperature increase, and the output from their computer simulations of the climate system, the IPCC rejected the idea that less than half the warming was man-made. They said there was less than a 5% chance that this was true.”

“But they then turned this around and concluded that there was a 95% chance
that more than half of observed warming was man-made.”

This is an example of what is known as the Prosecutor’s Fallacy, in which the probability of a hypothesis given certain evidence is mistakenly taken to be the same as the probability of the evidence given the hypothesis.

As Professor Fenton explains:

“If an animal is a cat, there is a very high probability that it has four legs.
However, if an animal has four legs, we cannot conclude that it is a cat.
It’s a classic error, and is precisely what the IPCC has done.”

Professor Fenton’s paper is entitled The Prosecutor’s Fallacy and the IPCC Report.

What the number does and does not mean

Recall that the particular ‘climate change number’ that I was asked to explain was the number 95: specifically, relating to the assertion made in the IPCC 2013 Report of ‘at least 95% degree of certainty that more than half the recent warming is man-made’.  The ‘recent warming’ related to the period 1950–2010. So, the assertion is about the probability of humans causing most of this warming.

Before explaining the problem with this assertion, we need to make clear that (although superficially similar) it is very different to another more widely known assertion (still promoted by NASA) that ‘97% of climate scientists agree that humans are causing global warming and climate change’. That assertion was simply based on a flawed survey of authors of published papers and has been thoroughly debunked.

The 95% degree of certainty is a more serious claim.
But the  case made for it in the IPCC report is also flawed.

[Comment: In the short video above, Norman Fenton explains the fallacy the IPCC committed. Synopsis of his example: A man dies at a very rowdy gathering of young men. A size 13 footprint is found on the body. Fred is picked up by the police. He admits to being there but not to killing anyone, despite wearing size 13 shoes. Since statistics show that only 1% of young men have size 13 feet, the prosecutor claims a 99% chance Fred is guilty. But the crowd was reported to be on the order of 1,000, so roughly 10 men with size 13 shoes were likely present. In fact there is only about a 10% chance Fred is guilty.]
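The arithmetic of the footprint example can be sketched in a few lines of Python (the crowd size and shoe-size frequency are the illustrative numbers from the synopsis above, not real statistics):

```python
# Assumed numbers from the synopsis: a crowd of ~1000 young men,
# 1% of whom wear size 13 shoes.
crowd = 1000
p_size13 = 0.01

# Expected number of size-13 wearers present, any of whom could be the killer.
wearers = crowd * p_size13          # ~10 men

# The prosecutor's (fallacious) claim: P(guilty | size 13) = 1 - P(size 13).
prosecutor_claim = 1 - p_size13     # 0.99

# The correct reasoning: Fred is just one of ~10 equally plausible wearers.
p_guilty = 1 / wearers              # 0.10

print(f"Prosecutor's claim: {prosecutor_claim:.0%}")  # 99%
print(f"Actual probability: {p_guilty:.0%}")          # 10%
```

The gap between 99% and 10% is entirely due to ignoring how many other size-13 wearers were in the crowd, which is exactly the base-rate information the fallacy discards.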

The flaw in the IPCC summary report

It turns out that the assertion that ‘at least 95% degree of certainty that more than half the recent warming is man-made’ is  based on the same fallacy. In my article about the programme, I highlighted this concern as follows:

The 95% figure in fact comes from a classical hypothesis test, in which observed data are used to test the credibility of the ‘null hypothesis’. The null hypothesis is the ‘opposite’ of the statement believed to be true, i.e. ‘Less than half the warming in the last 60 years is man-made’. If, as in this case, there is only a 5% probability of observing the data when the null hypothesis is true, statisticians equate this figure (called a p-value) to a 95% confidence that we can reject the null hypothesis.

But the probability here is a statement about the data given the hypothesis. It is not generally the same as the probability of the hypothesis given the data (in fact, equating the two is often referred to as the ‘prosecutor’s fallacy’, since it is an error often made by lawyers when interpreting statistical evidence).

IPCC defined ‘extremely likely’ as at least 95% probability. The basis for the claim is found in Chapter 10 of the detailed Technical Summary, which describes various climate simulation models that reject the null hypothesis (that more than half the warming was not man-made) at the 5% significance level. Specifically, if you assumed in the simulation models that there was little man-made impact, then there was less than a 5% chance of observing the warming that has been measured. In other words, the models do not support the null hypothesis of little man-made climate change. The problem is that, even if the models were accurate (and it is unlikely that they are), we cannot conclude that there is at least a 95% chance that more than half the warming was man-made, because doing so is the fallacy of the transposed conditional.

The illusion of confidence in the coin example comes from ignoring the ‘prior probability’ – how rare double-headed coins are. Similarly, in the case of climate change no allowance is made for the prior probability of man-made climate change, i.e. how likely it is that humans, rather than other factors such as solar activity, cause most of the warming. After all, previous periods of warming certainly could not have been caused by increased greenhouse gases from humans, so it seems reasonable to assume – before we have considered any of the evidence – that the probability humans caused most of the recent increase in temperature is very low.
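Bayes’ theorem makes the role of the prior explicit. The sketch below is purely illustrative (the priors and likelihoods are assumptions chosen to make the point, not estimates from any climate dataset): even if the evidence is 20 times more likely under the “mostly man-made” hypothesis than under the null, the posterior probability only reaches ~95% when the prior is an even 50/50.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

# The p-value logic only says P(data | "mostly natural") < 5%.
# The headline claim is P("mostly man-made" | data) -- a different quantity
# that depends on the prior.  Illustrative priors:
for prior in (0.5, 0.1, 0.01):
    print(f"prior {prior:>5}: posterior {posterior(prior, 1.0, 0.05):.3f}")
```

With a 50% prior the posterior is about 0.95; with a 1% prior it is only about 0.17. The 95% headline number implicitly assumes the hypothesis was as likely as not before looking at any evidence.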

Only the assumptions of the simulation models are allowed,
and other explanations are absent.

In both of these circumstances, classical statistics can then be used to deceive you by presenting an illusion of confidence when it is not justified.

See Also 

Beliefs and Uncertainty: A Bayesian Primer

 

You pick one unopened door. Monty opens one of the other doors, revealing a goat. Do you stay with your choice or switch?

Monty Hall Problem Simulator
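For readers without access to the simulator, a minimal Monte Carlo sketch of the game (standard rules: the car is behind one of three doors, Monty always opens a non-chosen door hiding a goat) confirms the counterintuitive answer:

```python
import random

def play(switch, trials=100_000, seed=42):
    """Simulate the Monty Hall game; return the fraction of games won."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        car = rng.choice(doors)
        pick = rng.choice(doors)
        # Monty opens a door that is neither the player's pick nor the car.
        monty = rng.choice([d for d in doors if d != pick and d != car])
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in doors if d != pick and d != monty)
        wins += (pick == car)
    return wins / trials

print("stay:  ", play(switch=False))   # ~1/3
print("switch:", play(switch=True))    # ~2/3
```

Staying wins about one time in three; switching wins about two times in three, because Monty’s forced reveal concentrates the remaining probability on the other unopened door.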

 

Koonin’s Climate Honesty

Steven Koonin shared his honest and wise perspective on global warming/climate change in the interview above.  For those who prefer reading, an excerpted transcript from the closed captions provides the highlights in italics with my bolds and added images.

PR: Welcome to Uncommon Knowledge; I’m Peter Robinson. Now a professor at New York University and a fellow at the Hoover Institution, Steven Koonin received a Bachelor of Science degree at Caltech and a doctorate in physics at MIT, during a career in which he published more than 200 peer-reviewed scientific papers and a textbook on computational physics. Dr. Koonin rose to become Provost of Caltech. In 2009 President Obama appointed him Under Secretary for Science at the Department of Energy, a position Dr. Koonin held for some two and a half years. During that time he found himself shocked by the misuse of climate science in politics and the press. In 2021 Dr. Koonin published Unsettled: What Climate Science Tells Us, What It Doesn’t, and Why It Matters.

In Unsettled you write of a 2014 workshop for the American Physical Society, which means it was you and a bunch of other people whom I cannot even begin to follow. Serious professional scientists such as you and several colleagues were asked to subject current climate science to a stress test: to push it, to prod it, to test it, to see how good it was. From Unsettled, I’m quoting you now, Steve:

“I’m a scientist; I work to understand the world through measurements and observations. I came away from the workshop not only surprised but shaken by the realization that climate science was far less mature than I had supposed.”

Let’s start with the end of that. What had you supposed?

SK: Well I had supposed that humans were warming the globe; carbon dioxide was accumulating in the atmosphere causing all kinds of trouble, melting ice caps, warming oceans and so on. And the data didn’t support a lot of that. And the projections of what would happen in the future relied on models that were, let’s say, shaky at best.

PR: All right. Former Senator John Kerry is now President Biden’s special Envoy for climate. Let me quote from John Kerry in a 2021 address to the UN Security Council:

“Net zero emissions by 2050 or earlier is the only way that science tells us we can limit this planet’s warming to 1.5 degrees Celsius. Why is that so crucial? Because overwhelming evidence tells us that anything more will have catastrophic implications. We are marching forward in what is tantamount to a mutual suicide pact.”

“Overwhelming evidence”; “science tells us.” What’s wrong with that?

SK: Well, you should look at the actual science, which I suspect Ambassador Kerry has not done. The U.N. puts out assessment reports every five or six years. Those are by the IPCC, the Intergovernmental Panel on Climate Change, and are meant to survey, assess and summarize the state of our knowledge about the climate. The most recent one came out about a year ago, in 2022; the previous one in 2014 or so.

Those reports are massive; the latest one is three thousand pages, and it took 300 scientists a couple of years to write. And you really need to be a scientist to understand them. I have a background in theoretical physics, so I can understand this stuff, but still it took me a couple of years to really understand what goes on. Ambassador Kerry and other politicians certainly have not done that.

Likely he’s getting his information from the Summary for Policymakers, or more likely from an even further boiled-down version. And as you boil the assessment down into the summary, and then into more condensed versions, there’s plenty of room for mischief. That mischief is evident when you compare what comes out the end of that game of telephone with what the actual science really is.

PR: All right: what we know and what we don’t. Let’s start with what we know. I’m quoting you again, Steve, from Unsettled: “Not everything you’ve heard about climate science is wrong.” In particular, you grant in this book two of the central premises or conclusions of climate science that the press is always telling us about. Here’s one, and again I’m going to quote you:

“Surely we can all agree that the globe has gotten warmer
over the last several decades.”

SK: No debunking. In fact it’s gotten warmer over the last four centuries. Now that’s a different assertion, but it’s equally supported by the assessment reports. We’ll have to come back to that, because the time scale is important. It’s one thing to say this about the climate of the surface of this planet in my own lifetime, and it’s an entirely different thing to say that temperatures began to rise 150 years before this nation was founded.

PR: Yes, it’s a different statement, but it’s equally true and has some bearing on the warming that we’ve seen over the last century. Here’s the premise that you do grant; again I’m going to quote Unsettled:

“There is no question that our emission of greenhouse gases, in particular CO2, is exerting a warming influence on the planet.” We’re pumping CO2 into the atmosphere; CO2 is a greenhouse gas; it must be having some effect, of course.

Absolutely, but that’s as far as you’re willing to go. So those are two pretty benign premises that you grant: the Earth has been warming, and it’s been warming for a long time; CO2 is a greenhouse gas, it must be having some effect, and it’s coming from human activities, mostly fossil fuels. Now on to what we don’t know. Again from Unsettled:

“Even though human influences could have serious consequences for the climate, they are small in relation to the climate system as a whole. That sets a very high bar for projecting the consequences of human influences.”

That is so counter to the general understanding that informs the headlines, particularly this hot summer we’ve had. So explain that.

SK: Human influences, as described by the IPCC, are a one percent effect on the radiation flow – the flow of heat radiation and sunlight in the atmosphere. That means your understanding had better be at the one percent level or better if you’re going to predict how the climate system is going to respond. And the one percent makes sense, because the changes in temperature we’re talking about are three kelvin, whereas the average temperature of the Earth is about 300 kelvin.

PR: So human influences are a one percent effect on a complicated, chaotic, multi-scale system for which we have poor observations. Yet you seem quite relaxed about the underlying science.

SK: The underlying science is expressed in the data and in the research literature: the journals, the research papers that people produce, the conference proceedings and so on. The IPCC takes those and assesses and summarizes them, and in general it does a pretty good job at that level. There’s not going to be much politics in that, although they might quibble among themselves about adjectives and adverbs – this is extremely certain, or this is unlikely or highly unlikely, and so on. But by and large it’s pretty good; this is done by fellow professionals in a professional manner.

Now things begin to go wrong. The next step – because nobody who isn’t deeply in the field is going to read all that material – is a formal process to create a Summary for Policymakers, which is initially drafted by the governments, not by the scientists. Well, not all of them, of course; there’s some subcommittee to do the Summary for Policymakers, and that gets drafted and passed to the scientists for comment. In the end it’s the governments who have approved the Summary for Policymakers line by line, and that’s where the disconnect happens.

For the disconnect I’ll give you an example. Look at the most recent report: the Summary for Policymakers talks about incremental deaths from extreme heat, and it says that extreme heat or heat waves have contributed to mortality. That’s a true statement. But they forgot to tell you that the warming of the planet decreases the incidence of extreme cold events. And since nine times as many people around the globe die from extreme cold as from extreme heat, the warming of the planet has actually cut the number of deaths from extreme temperatures by a lot. That’s not in there at all.

So the statement was completely factual, but factually incomplete
in a way meant to alarm, not to inform. 

And then John Kerry stands up and gives a speech. Maybe he read the SPM, I don’t know, or his staff read it and prepared some talking points. And so you get Kerry saying that, and you get the Secretary General of the U.N., Guterres, saying we’re on a highway to climate hell with our foot on the accelerator. But those claims are preposterous, of course – even by the IPCC reports they’re preposterous. The climate scientists are negligent for not speaking up and saying that’s not okay.

PR: Another one of the things going wrong you write about in a way that I have never seen anyone write about: computer models. I have never seen anybody make computer models interesting. So congratulations, Steve, you did something special, as far as I know, in the entire corpus of the English language.

Here I’m going to quote from a piece you published in the Wall Street Journal not long ago:

“Projections of future climate and weather events rely on models
demonstrably unfit for the purpose.”

SK: Well, to make a projection of future climate you need to build this big, complicated computer model, which is really one of the grand computational challenges of all time.

This is not something foreign to me: I wrote a textbook in the 1980s, when the first PCs came out, about how to do physics modeling on computers. I do know what I’m talking about. Then you have to feed into the model what you think future emissions are going to be, and the IPCC has five or six different scenarios: high emissions, low emissions. So you take a particular scenario and feed it into the roughly 50 different models that exist, developed by groups around the world.

So Caltech has a model, Harvard has a model, Oxford too; the Chinese have several models, the Russians, and so on. When you feed the same scenario into those different models you get a range of answers, and the range is as big as the change you’re trying to describe. We can go into the reasons why there is that uncertainty. And in the latest generation of models, about 40 percent were deemed too sensitive to be of much use.

Too sensitive meaning that when you add the carbon dioxide, the temperature goes up too fast compared to what we’ve seen already. So that’s really disheartening: the world’s best modelers are trying as hard as they can, and they get it very wrong at least 40 percent of the time.

This is not only my assessment. You can look at papers published by Tim Palmer and Bjorn Stevens, who are serious modelers within the consensus, and their own phrase is that these models are not fit for purpose, at least at the regional or more detailed global level.

PR: Quoting Unsettled again – and this is one of the most astonishing passages in the book – writing about the effects of the increases in computing power over the years:

“Having better tools and information to work with should make the models
more accurate and more in line with each other. This has not happened.
The spread in results among different computer models is increasing.”

This one you’re going to have to explain to me. As our modeling power, our processing power, increases, we should be closing in on reliable conclusions, and yet they seem to be receding faster than we can approach them. If I’ve got that correct, how can that be?

SK: Because as the models become more sophisticated, either you make the grid boxes in the model a little bit smaller, so there are more of them, or you make your description more sophisticated.

The whole globe is divided into about 10 million slabs. The average size of a grid box in the current generation is 100 kilometers – 60 miles. Within that 60 miles there’s a lot going on that we can’t describe explicitly in the computer, because clouds are maybe five kilometers big, and rain happens here and not there within the grid box. We can’t describe all that.

One day we’ll be able to, but not very soon, and let me explain why. The current grid boxes are 100 kilometers, so you might say, why not make them 10? Well, suddenly the number of boxes has gone up by a factor of a hundred, so you need a hundred-times-more-powerful computer. But it’s worse than that, because the time steps have to be smaller too – things shouldn’t move more than a grid box in one time step – so the processing power actually goes up as the cube of the refinement factor. If you want to go from 100 kilometers to 10 kilometers, that’s a factor of 10, and the processing power required goes up by a factor of a thousand. It’s going to be a long time before we have a computer a thousand times more powerful than what we have.
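Koonin’s back-of-envelope scaling argument can be written out directly (using his numbers: 100 km boxes refined to 10 km, with the time step shrinking in proportion to the box size):

```python
# Refining the horizontal grid by a factor r multiplies the number of grid
# boxes by r**2, and the time step must shrink by a factor r so that nothing
# moves more than one grid box per step.  Total cost therefore grows ~ r**3.
def cost_factor(old_km, new_km):
    r = old_km / new_km        # refinement factor
    boxes = r ** 2             # more grid boxes
    steps = r                  # more (shorter) time steps
    return boxes * steps       # = r ** 3

print(cost_factor(100, 10))    # 1000.0 -- a thousand-fold increase
```

So a tenfold refinement in resolution demands roughly a thousandfold increase in computing power, which is why grid boxes have shrunk so slowly despite decades of hardware progress.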

PR: You and I are speaking in the middle of August. I’ve been collecting headlines, thinking I’ll just read them to Steve and see what he says.

CBS News this past May “Scientists say climate change is making hurricanes worse.”

Koonin in Unsettled:  “Hurricanes and tornadoes show no changes attributable to human influences.” 

[The graph above shows exhibit 2a from Truchelut and Staehling overlaid with the record of atmospheric CO2 concentrations.  From NOAA combining Mauna Loa with earlier datasets.]
To determine Integrated Storm Activity Annually over the Continental U.S. (ISAAC) from 1900 through 2017, we summed this landfall ACE spatially over the entire continental U.S. and temporally over each hour of each hurricane season. We used the same methodology to calculate integrated annual landfall ACE for five additional geographic subsets of the continental U.S.

Well what do you think you’re doing taking on CBS?

SK: Well, what science does CBS know? The media get their information from reporters who have little or no scientific training. (PR: You mean you didn’t graduate people from Caltech who went to work there?) Probably one or so, and they do a good job. But they have reporters on a climate beat who have to produce stories, the more dramatic the better: if it bleeds, it leads. And so you get that kind of stuff.

When I say something about hurricanes, I quote right from the IPCC reports, and they don’t say that at all. Actually, the most recent report did say it, based on a paper which was subsequently corrected.

PR: Floods. Here’s a 2020 headline, from an article or press release published by the UN Environment Programme – not the IPCC, but still a U.N. agency:

UNEP: “Climate change is making record-breaking floods.”

Steve Koonin in Unsettled:  “We don’t know whether floods globally are increasing, decreasing or doing nothing at all.”

SK: I would say the U.N. needs to be consistent, and they should check their press releases against the IPCC reports before they say anything.

When I wrote Unsettled, I tried very hard to stick with the gold standard, which was the IPCC report at the time, plus the subsequent research literature. All I had available to me when I wrote the book was the fifth assessment report, which came out in 2014, as we’ve discussed.

The sixth assessment report came out about a year ago, and I’m proud to say there’s essentially nothing in it that needs to be changed in the paperback edition. I will do an update, of course, but the paperback edition is not going to be totally rewritten.

PR: All right, agriculture. Here’s a 2019 headline:

New York Times: “Climate change threatens world’s food supply, United Nations warns.”

Steve Koonin in Unsettled: “Agricultural yields have surged during the past century even as the globe has warmed. And projected price impacts from future human-induced climate changes through 2050 should hardly be noticeable among ordinary market dynamics.”

SK: It’s not what I said, but what the IPCC said. Take current media and almost any climate story: I can write a very effective counter – it’s like shooting fish in a barrel. I’ve actually gotten to the point where I say, oh no, not another one; do I have to do that too? This is endemic to a media that is ill-informed and has an agenda to set.

The agenda is to promote alarm and induce governments to decarbonize.

I think the primary agenda is probably to get clicks and eyeballs. But there are organizations – there’s one called Covering Climate Now, a non-profit membership organization that includes the Guardian, NPR I believe, and various other media – whose mission is to promote the narrative. They will not allow anything to be broadcast or written that is counter to the narrative. The narrative is: we’ve already broken the climate.

PR: These are headlines in July of 2023. This is last month here as you and I tape this.

New York Times on July 6th: “Heat records are broken around the globe as Earth warms fast from north to south. Temperatures are surging as greenhouse gases combine with the effects of El Nino.”

New York Times on July 18: “Heat waves grip three continents as climate change warms Earth.  Across North America, Europe and Asia hundreds of millions endured blistering conditions.  A U.S official called it a threat to all humankind.”

Wall Street Journal on July 25th:  “July heat waves nearly impossible without climate change study says.  Record temperatures have been fueled by decades of fossil fuel emissions.” 

New York Times on July 27th: “This looks like Earth’s warmest month; hotter ones appear to be in store. July is on track to break all records for any month, scientists say, as the planet enters an extended period of exceptional warmth.”

Unsettled came out in April 2021, so we will forgive you for not knowing then what would happen last month, in July of 2023. But now July 2023 is in the record books. Doesn’t it prove that climate science is settled?

SK: That statement, together with all those headlines, confuses weather and climate. Weather is what happens every day, or maybe every season; climate, by the official definition, is a multi-decade average of weather properties. That’s what the IPCC and another U.N. agency, the World Meteorological Organization (WMO), say.

We have satellites that continually monitor the temperature of the atmosphere, and they report every month what the monthly temperature is – or more precisely, the monthly temperature anomaly, namely how much warmer or colder it is than what would have been expected for that month. The data go back to about 1979, so we have good monthly measures of the global temperature of the lower atmosphere for forty-some years.

You see month-to-month variations, of course, but there is a long-term trend going up, no question about it. I won’t get the number exactly right, but it’s rising at about 0.13 degrees per decade. That’s some combination of natural variability and greenhouse gases – human influences more generally. And then every couple of years you see a sharp spike going up; that’s El Nino. It’s weather, and so it goes up and then comes back down.

So there’s a long-term trend, which is greenhouse gases and natural variability, and then there’s a natural spike every once in a while: an eruption goes off, you see something; El Ninos happen, you see something. And in the last month, July, there was another spike in the anomaly – about as large as we’ve ever seen, but not unprecedented.
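The trend-plus-spike picture Koonin describes can be illustrated with a toy calculation (the series below is synthetic, built to resemble his numbers, not real satellite data): a steady 0.13 °C/decade trend with noise and a recent El Nino-like spike, from which an ordinary least-squares fit still recovers the underlying trend.

```python
import random

rng = random.Random(0)
years = [1979 + i / 12 for i in range(12 * 44)]      # monthly, 1979-2022
trend = [0.013 * (y - 1979) for y in years]          # 0.13 C per decade
anomaly = [t + rng.gauss(0, 0.1) for t in trend]     # monthly weather noise
anomaly[-6:] = [a + 0.3 for a in anomaly[-6:]]       # a short spike at the end

# Ordinary least-squares slope recovers the trend despite noise and the spike.
n = len(years)
mx = sum(years) / n
my = sum(anomaly) / n
sxx = sum((x - mx) ** 2 for x in years)
sxy = sum((x - mx) * (y - my) for x, y in zip(years, anomaly))
slope = sxy / sxx

print(f"fitted trend: {slope * 10:.3f} C/decade")    # close to 0.13
```

The fitted slope barely moves when the spike is added, which is the point: a single hot month says little about the multi-decade trend.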

The real question is why did it Spike so much right?
Nothing to do with CO2

CO2 – human influences generally – is the base on which this phenomenon occurs. Even if you stipulate that CO2 is causing some large proportion of this warming, it’s a slow, steady process: you would not expect to see spikes or sudden step functions. There are various hypotheses, but we don’t yet know why we’ve seen the spike in the last month.

PR: You’d better take just a moment to explain: what is El Nino?

SK: El Nino is a phenomenon in the climate system that happens once every four or five years. Heat builds up in the equatorial Pacific, to the west of Indonesia, and when enough of it builds up, it surges across the Pacific toward South America, changing the currents and the winds. It was discovered in the 19th century and is fairly well understood at this point. Discovery in the 19th century means the phenomenon has nothing to do with CO2.

Now, people talk about changes in that phenomenon as a result of CO2, but it’s there in the climate system already, and when it happens it influences weather all over the world. We feel it: it gets rainier in Southern California, for example. We have been in the opposite of an El Nino – a La Nina – for the last three years, part of the reason people think the West Coast has been in drought. And it is shifting.

It has now shifted in the last months to an El Nino condition, which warms the globe and is thought to contribute to the spike we have seen. But there are other contributions as well. One of the most surprising is that back in January 2022 an enormous underwater volcano went off in Tonga and put a lot of water vapor into the upper atmosphere – it increased upper-atmosphere water vapor by about 10 percent. That’s a warming effect, and it may be contributing to why the spike is so high.

PR: Back to New York, since you spent July there. I happened to visit in July, and we had Canadian wildfires, with the press telling us that the wildfires are because of climate change. For the first time anybody I know could remember, the smoke was so heavy, and it got blown into New York, that the sky felt as though a solar eclipse were taking place; for three days it was that dark in New York.

Meanwhile New York is hot, really hot, and we’re reading reports that Europe is sweltering, even in Madrid, a culture built around heat, where they take siestas in the midday. Even in Madrid they don’t quite know how to handle this heat. It’s perfectly normal for people to say, wait a minute, this is getting scary. It feels for the first time as though the Earth is threatening, that it’s unsafe in New York – of all places, where you didn’t have to worry about earthquakes. The other thing you didn’t have to worry about was breathing the air, but suddenly you can’t breathe the air; it feels uncomfortable, it’s scary. Your response to that is what?

SK: We have two responses. First, we have a very short memory for weather. Go back in the archives of the newspapers and you can read, even from the 19th century on the East Coast, descriptions of so-called yellow days, when the atmosphere was clouded by smoke from Canadian fires. So look at the historical record first; if it happened before human influences were significant, you have a much higher bar to clear to say it’s CO2.

Secondly, there’s a lot of variability. Here in California we had two decades of drought, and the governor was screaming New Normal, New Normal. And then what happened last year? Record torrential rains, because people forgot about the 1860s event when the Central Valley was under many feet of water.

PR: So climate is not weather, and the weather can really fool you. All right, Steve, some last questions. From Unsettled:

“Humans have been successfully adapting to changes in climate for millennia.
Today’s society can adapt to climate changes whether they are
natural phenomena or the result of human influences.”

So you draw the distinction between adapting to climate change on the one hand, and the John Kerry approach on the other, which is trying to stop climate change. Explain that distinction and why you favor one over the other.

SK: Okay, although I would take issue with your description of Kerry’s approach. It’s not trying to stop climate change; it’s trying to reduce human influences on the climate, because the climate will keep changing even if we reduce emissions. (PR: All right, go ahead.)

Let me talk about adaptation a little and give you some examples that are probably not well known – at least they weren’t known to me until I looked into it. If you go back to 1900 and look from 1900 until today, the globe warmed by about 1.3 degrees Celsius. That’s the global temperature record that everybody more or less agrees upon. And before we get to the consequences, the other statement is that the IPCC projects about the same amount of warming over the next hundred years. You might ask what’s going to happen over the next hundred years as that warming occurs.

We can look at the past to get some sense of how we might fare,
okay not perfect, but a good indication.

Since 1900 until now:

♦  The global population has gone up by a factor of five; we’re now 8 billion people.
♦  Average life expectancy went from 32 years to 73 years.
♦  GDP per capita in constant dollars went up by a factor of seven.
♦  The literacy rate went up by a factor of four.
♦  Nutrition improved, and so on.

The greatest flourishing of human well-being ever, as the globe warmed by 1.3 degrees. And the kicker, of course, is that the death rate from extreme weather events fell by a factor of 50, due to better prediction, better resilience of infrastructure, and so on. So to think that another 1.3 or 1.4 degrees over the next century is going to significantly derail that beggars belief.

Okay, so not an existential threat; perhaps some drag on the economy, a little bit; the IPCC says not very much at all. So the notion that the world is going to end unless we stop greenhouse gas emissions is just nonsense. This is not a mutual suicide pact, not at all.

PR: On August 16th of last year, a year ago, President Biden signed legislation that included some $360 billion of climate spending over the next decade, or at least the Biden Administration claimed it was climate spending. President Biden:

“The American people won and the climate deniers lost, and the Inflation Reduction Act takes the most aggressive action to combat climate change ever.”

Curiously enough, they called it the Inflation Reduction Act, while it seems to have prompted inflation rather than reduced it. Good legislation or not?

SK: It would be if it focused on useful adaptation, but it’s aimed at mitigation by and large, namely reducing emissions. I think there are parts of it that are good, in particular the spur to innovate. New technologies are the only way we’re going to reduce emissions, if that is the goal. We need to develop energy technologies that are no more expensive than fossil fuel technologies.

PR: But what about low-emission or zero-emission goals? Let’s take that one. Because here I have the Provost of Caltech, let me ask what we can reasonably hope for and what we cannot. You and I are talking ten days after the internet went crazy over some claim of, no, not cold fusion, but room-temperature superconductivity. Is this a problem we can crack?

SK: So I think it’s going to be really difficult. There is one existing solution, and that’s nuclear power: fission. We know about fusion; that’s a separate matter. Fission exists, it can be done, but it’s more expensive than other methods, partly because of the regulatory burden and the large lead time, but also because, at least in the U.S., we build every plant to a custom design. So one of the things I helped catalyze when I was in the Department of Energy was small modular reactors. These are about a tenth the size of the big ones; you can build them in a factory and put them on a flatbed truck, and this is not a crazy dream. Venture money is flowing in, and there are companies on the verge of a test deployment of commercially constructed power plants.

So why isn’t John Kerry going to one of these hot new startups and doing a photo shoot? I don’t follow the Ambassador, but you know, the nuclear word is a political hot potato in some quarters. Not to get too much into politics, but I think there is a faction of the left wing that just sees it as anathema and not a solution at all. Meanwhile, the Chinese are doing it.

So I like the technology parts of the IRA; I do not like the subsidies for wind and solar. One of the things you didn’t mention was that I was chief scientist for BP, the oil company, for five years, so I learned the energy industry. I never had to make any money in it, but I helped to strategize and systematize thinking for them. So I know from the inside about subsidies to solar and wind. Everybody thinks that’s a solution, but of course wind and solar are intermittent sources of electricity: solar obviously doesn’t produce at night or when it’s cloudy, and wind does not produce when the wind doesn’t blow. If you’re going to build a grid that’s entirely wind and solar, you’d better have some way of filling in the times when they’re not producing.

Now if it’s only eight or twelve hours you’re trying to fill in, that’s not so hard; you can build batteries and so on. But sometimes you need to fill in a couple of weeks, such as the times in Europe, Texas and California when the wind became still and the solar was clouded out. So you need something else, and that might be batteries, although I think that’s unlikely; more likely gas with carbon capture or nuclear. That backup has to be at least as capable as the wind and solar, and since the wind and solar feeds are the cheapest, the backup system is going to be more expensive. So you wind up running two parallel systems, making electricity at least twice as expensive.

So I say that wind and solar can be an ornament on the real electrical system
but they can never be the backbone of the system.

Let me explain. The biggest problem in trying to reduce emissions is not the one and a half billion people in the developed world; it’s the six and a half billion people who don’t have enough energy. And you’re telling them that because of some vague, distant threat that we in the developed world are worried about, they’re going to have to pay more for energy or get less reliable sources. They should be able to make their own choices about whether they’re willing to tolerate whatever threat there might be from the climate versus having round-the-clock lighting, adequate refrigeration, transportation and so on. Millions of people in India, six and a half billion people worldwide, are energy starved.

Three billion people on the planet, of the 8 billion, use less electricity every year than the average U.S. refrigerator. So first fix that problem, which is existential and immediate and solvable, and then we can talk about some vague climate thing that might happen 50 years from now.

But scientists must tell the truth, absolutely completely lay it all out,
and we’re not getting that out of the scientific establishment.

PR: Unsettled has been out for more than two years now. How have your colleagues responded?

SK: Many colleagues who are not climate scientists say thanks for writing the book; it gives me a framework to think about these things and points me to some of the problems that we’re seeing in the popular discussion. I got some rather awful reviews from mainstream climate scientists, which disappointed me. Not because they found anything wrong in the book; they didn’t. But the quality of the discussion, the ad hominem attacks, the putting of words in my mouth and so on, that wasn’t so good. Their argument was: Steve Koonin, you’re one of us; you shouldn’t be saying this. It may be true, but you shouldn’t be saying it. Steve, how could you?

First of all, I’ve been involved in science advice in other aspects of public policy, particularly national defense, together with some former Stanford colleagues, now passed on. And I was taught that you tell the whole truth and you let the politicians make the value judgments and the cost-effectiveness trade-offs. My sense of that balance is no better than anybody else’s, but I can bring to the table the scientific facts. If you trust democracy, you trust the people to elect politicians; they’ll make a mistake here, they’ll make a mistake there.

But over time you trust them. Now there are colleagues who say: no, don’t tell them the truth, we can’t trust them to make the right decision. That’s fundamentally what’s going on. These are scientists who think they know better than everybody else, and it’s even worse because they are scientists in the developed world. If you ask scientists in Nigeria or India and so on, you get a very different values calculus: the primary concern is getting enough energy for folks.


PR: According to a Harris poll in January 2022, about a year and a half ago now, 84% of teenagers in the United States agreed with both of the following statements:

♦  Climate change will impact everyone in my generation through global political instability.
♦  If we don’t address climate change today, it will be too late for future generations, making some parts of the planet unlivable.

John Kerry, Al Gore, Greta Thunberg, and on and on, countless voices warning that climate change represents a genuine danger to life on the planet. And now millions of young Americans are really scared. Surely this has some role to play in what we see: the suicidal ideation and the increasing unhappiness.

SK: I’m sure there are all kinds of social factors, but surely this is part of what’s going on. There are two immoralities here. One is the immoral treatment of the developing world, which we talked about. The other immorality is scaring the bejesus out of the younger generation. And it’s doubly dangerous because it’s mostly in the West and not in China or India. I’ve tried. I go out and talk at universities, and of course the audiences I talk to tend to be quantitative and factually driven. So the minds get opened up if the eyes get opened up.

I think in the U.S. the problem will eventually solve itself, because the route we are headed down is starting to impact people’s daily lives. Electricity is getting more expensive; you won’t be able to buy an internal combustion car in 10 or 15 years. If you’re here in California, people are going to say, wait a second, as they already are in Europe: in the UK, Germany, France. And I think there will be a falling back down to Earth of all of this at some point, and we will get more sensible.

PR: Let’s say your audience now is not a colleague of yours but an 18-to-24-year-old American, pretty bright, maybe in college, maybe not, but bright. Reads newspapers, or at least reads them online. Speaking to that person, to an American kid or young adult: do they need to be scared?

SK: No, absolutely not. I would cite the flourishing from 1900 to now as an example. And I would say: you probably believe that hurricanes are getting worse, and then point them to what the IPCC actually says. And say, you know, you were misinformed about that by the media; don’t you think there are other things about which you’ve been misinformed? You can read the book and find out many of them, and then go ask your climate friends how come the IPCC report says one thing but you’re telling me something else.