Koonin: Reckless Claim of Climate Emergency

Transcript

Hubris is a Greek word for dangerous overconfidence. Based on my research, hubris fairly describes our current response to the issue of climate change.

Here’s what many people believe:

One: The planet is warming catastrophically because of certain human behaviors.
Two: Thanks to powerful computers, we can project what the climate will be like 20, 40, or even 100 years from now.
Three: If we eliminate just one behavior, the burning of fossil fuels, we can prevent the climate from changing for as long as we like.

Each of these presumptions—together, the basis of our hubris regarding the changing climate—is either untrue or so far off the mark as to be useless.

Yes, it’s true that the globe is warming, and that humans are exerting a warming influence upon it. But beyond that, to paraphrase a line from the classic movie The Princess Bride, “I do not think ‘The Science’ says what you think it says.”

For example, government reports state clearly that heat waves in the US are now no more common than they were in 1900.

Hurricane activity is no different than it was a century ago.

Floods have not increased across the globe over more than seventy years.


Greenland’s ice sheet isn’t shrinking any more rapidly today than it was 80 years ago.

Why aren’t these reassuring facts better known?

Because the public gets its climate information almost exclusively from the media.

And from a media perspective, fear sells.

“Things aren’t that bad” doesn’t sell.

Very few people, and that includes journalists who report on climate news, read the actual science. I have. And what the data—the hard science—from the US government and UN Climate reports say is that… “things aren’t that bad.”

Nor does the public understand the questionable basis of all catastrophic climate change projections: computer modeling.

Projecting future climate is excruciatingly difficult. Yes, there are human influences, but the climate is complex. Anyone who says that climate models are “just physics” either doesn’t understand them or is being deliberately misleading. I should know: I wrote one of the first textbooks on computer modeling.

While modelers base their assumptions upon both fundamental physical laws and observations of the climate, there is still considerable judgment involved. And since different modelers will make different assumptions, results vary widely among different models.

Let’s just take one simple, but significant assumption modelers must make: the impact of clouds on the climate.

Natural fluctuations in the height and coverage of clouds have at least as much of an impact on the flows of sunlight and heat as do human influences. But how can we possibly know global cloud coverage say 10, let alone 50 years from now? Obviously, we can’t. But to create a climate model, we have to make assumptions. That’s a pretty shaky foundation on which to transform the world’s economy.

By the way, creating more accurate models isn’t getting any easier. In fact, the more we learn about the climate system, the more we realize how complex it is.

Rather than admit this complexity, the media, the politicians, and a good portion of the climate science community attribute every terrible storm, every flood, every major fire to “climate change.” Yes, we’ve always had these weather events in the past, the narrative goes, but somehow “climate change” is making everything “worse.”

Even if that were true, isn’t the relevant question, how much worse? Not to mention that “worse” is not exactly a scientific term.  And how would we make it better?  For the alarmists, that’s easy: we get rid of fossil fuels.

Not only is this impractical—we get over 80% of the world’s energy from fossil fuels—it’s not scientifically possible. That’s because CO2 doesn’t disappear from the atmosphere in a few days like, say, smog. It hangs around for a really long time.

About 60 percent of any CO2 that we emit today will remain in the atmosphere 20 years from now, between 30 and 55 percent will still be there after a century, and between 15 and 30 percent will remain after one thousand years.

In other words, it takes centuries for the excess carbon dioxide to vanish from the atmosphere. So, any partial reductions in CO2 emissions would only slow the increase in human influences—not prevent it, let alone reverse it.
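
Those persistence figures can be illustrated with a simple impulse-response approximation. The sketch below is my own illustration, not part of Koonin’s talk; it uses the multi-exponential fit adopted for the IPCC AR4 greenhouse-gas-metric calculations (the Bern-style parameters are an assumption here) to estimate the airborne fraction of a CO2 pulse over time.

```python
import math

# Bern-style impulse-response fit used for IPCC AR4 GWP calculations (assumed here):
# fraction remaining = a0 + sum(a_i * exp(-t / tau_i))
A0 = 0.217
TERMS = [(0.259, 172.9), (0.338, 18.51), (0.186, 1.186)]  # (amplitude, timescale in years)

def fraction_remaining(years):
    """Approximate fraction of an emitted CO2 pulse still airborne after `years`."""
    return A0 + sum(a * math.exp(-years / tau) for a, tau in TERMS)

for years in (20, 100, 1000):
    print(f"after {years:>4} years: about {fraction_remaining(years):.0%} still in the atmosphere")
# Prints roughly 56%, 36%, and 22%, broadly consistent with the ranges quoted above.
```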

CO2 is not a knob that we can just turn down to fix everything. We don’t have that ability. To think that we do is… hubris.

Hubris leads to bad decisions.  A little humility and
a little knowledge would lead to better ones.

I’m Steve Koonin, former Undersecretary for Science in the Obama Administration, and author of Unsettled: What Climate Science Tells Us, What It Doesn’t, and Why It Matters, for Prager University.

Addendum: Fossil Fuels and Greenhouse Gases (GHGs) Climate Science

Professors Lindzen, Happer and Koonin, CO2 Coalition Paper, April 2024

Table of Contents

I. THERE WILL BE DISASTROUS CONSEQUENCES FOR THE POOR, PEOPLE WORLDWIDE, FUTURE GENERATIONS AND THE WEST IF FOSSIL FUELS, CO2 AND OTHER GHG EMISSIONS ARE REDUCED TO “NET ZERO”

A. CO2 is Essential to Our Food, and Thus to Life on Earth
B. More CO2, Including CO2 from Fossil Fuels, Produces More Food.
C. More CO2 Increases Food in Drought-Stricken Areas.
D. Greenhouse Gases Prevent Us from Freezing to Death
E. Enormous Social Benefits of Fossil Fuels
F. “Net Zeroing” Fossil Fuels Will Cause Massive Human Starvation by Eliminating Nitrogen Fertilizer

II. THE IPCC IS GOVERNMENT CONTROLLED AND THUS ONLY ISSUES GOVERNMENT OPINIONS, NOT SCIENCE

III. SCIENCE DEMONSTRATES FOSSIL FUELS, CO2 AND OTHER GHGs WILL NOT CAUSE CATASTROPHIC GLOBAL WARMING AND EXTREME WEATHER

A. Reliable Science is Based on Validating Theoretical Predictions With Observations, Not Consensus, Peer Review, Government Opinion or Cherry-Picked or Falsified Data
B. The Models Predicting Catastrophic Warming and Extreme Weather Fail the Key Scientific Test: They Do Not Work, and Would Never Be Used in Science.
C. 600 Million Years of CO2 and Temperature Data Contradict the Theory That High Levels of CO2 Will Cause Catastrophic Global Warming.
D. Atmospheric CO2 Is Now “Heavily Saturated,” Which in Physics Means More CO2 Will Have Little Warming Effect.
E. The Theory Extreme Weather is Caused by Fossil Fuels, CO2 and Other GHGs is Contradicted by the Scientific Method and Thus is Scientifically Invalid

Good Reasons to Distrust Climatists

The most recent case of climatists’ bad behavior is the retraction of a peer-reviewed paper analyzing the properties of CO2 as an IR active gas, concluding that additional levels of atmospheric CO2 will have negligible effect on temperatures.  From the Daily Sceptic:

Another important paper taking issue with the ‘settled’ climate narrative has been cancelled following a report in the Daily Sceptic and subsequent reposts that went viral across social media. The paper discussed the atmospheric ‘saturation’ of greenhouse gases such as carbon dioxide and argued that higher levels will not cause temperatures to rise. The work was led by the widely-published Polish scientist Dr. Jan Kubicki and appeared on Elsevier’s ScienceDirect website in December 2023. The paper has been widely discussed on social media since April 2024 when the Daily Sceptic reported on the findings. Interest is growing in the saturation hypothesis not least because it provides a coherent explanation for why life and the biosphere grew and often thrived for 600 million years despite much higher atmospheric levels of greenhouse gases. Alas for control freaks, it also destroys the science backing for the Net Zero fantasy.

Below are some comments responding to a Quora question, text in italics with my bolds and added images:

What are some reasons why some people do not believe in climate change or global warming despite scientific evidence? Is there any additional information that could help us understand their perspective?

Answer from Mike Jonas, M.A. in Mathematics, Oxford University, UK

Good scientists do not lie and cheat to protect their science, they are happy to discuss their evidence and their findings, and they always understand that everything needs to be replicable and verifiable.

When Climategate erupted on the scene, and the climate scientists behind the man-made global warming narrative were found to have lied and cheated, all honest scientists thought that would be the end of it. Instead, what happened was that those climate scientists closed ranks and carried on, supported by a massive amount of government (ie, the public’s) money. One of the first things they did was to deflect Climategate by saying the emails involved had been hacked so should be ignored, but some of the people involved confirmed that all of the emails really were genuine.

It has been about 15 years since Climategate, and study after study has shown virtually all of the components of the man-made global warming narrative to be incorrect, even that none of the computer models used by the IPCC are fit for purpose,

And yet they maintained their closed ranks,
and the government money kept pouring in.

Did you know that the IPCC does not do any research? (Please do check that; on their web page About – IPCC they state, “The IPCC does not conduct its own research.”) It is, as its name says, an inter-governmental organisation, and it is run by and for governments. They say lots of persuasive sciency things, but the simple fact is that they cherry-pick and corrupt the science to achieve their ends. Regrettably, almost all the scientific societies are on the gravy train too. This is part of what the highly respected physicist Professor Hal Lewis said in his resignation letter to the American Physical Society (APS):

It is of course, the global warming scam, with the (literally) trillions of dollars driving it, that has corrupted so many scientists, and has carried APS before it like a rogue wave. It is the greatest and most successful pseudoscientific fraud I have seen in my long life as a physicist. Anyone who has the faintest doubt that this is so should force himself to read the ClimateGate documents, which lay it bare.

I don’t believe that any real physicist, nay scientist, can read that stuff without revulsion. I would almost make that revulsion a definition of the word scientist.

So what has the APS, as an organization, done in the face of this challenge?
It has accepted the corruption as the norm, and gone along with it.

If you want to find out more about this “greatest and most successful pseudoscientific fraud”, the website Watts Up With That? is a good place to start (the fraudsters absolutely hate it), and it links to many other good websites. It has the full text of Hal Lewis’ resignation letter at:

Hal Lewis: My Resignation From The American Physical Society – an important moment in science history

Answer from Susannah Moyer

It’s curious that climate science is the rare scientific field where dissenting scientists, those with contrarian views, are unwelcome and even ostracized.

There are some well-known climate scientists who have doubts about the role of CO2 and man-made global warming as it pertains to global temperature. They have raised the issue that computer-generated prediction models have been inaccurate in predicting temperature patterns because the modeling requires assumptions that have not been shown to be accurate.

Here is a contrarian view from climate scientists who have published climate research results in Nature, which is no small feat:

McNider and Christy are professors of atmospheric science at the University of Alabama in Huntsville and fellows of the American Meteorological Society. Mr. Christy was a member of the Intergovernmental Panel on Climate Change that shared the 2007 Nobel Peace Prize with former Vice President Al Gore.

It is not known by how much the Earth’s atmosphere will warm in response to this added carbon dioxide. The warming numbers most commonly advanced are created by climate computer models built almost entirely by scientists who believe in catastrophic global warming. The rate of warming forecast by these models depends on many assumptions and much engineering to replicate a complex world in tractable terms, such as how water vapor and clouds will react to the direct heat added by carbon dioxide, or the rate of heat uptake, or absorption, by the oceans.

We might forgive these modelers if their forecasts had not been so consistently and spectacularly wrong. From the beginning of climate modeling in the 1980s, these forecasts have, on average, always overstated the degree to which the Earth is warming compared with what we see in the real climate.

For instance, in 1994 we published an article in the journal Nature showing that the actual global temperature trend was “one-quarter of the magnitude of climate model results.” As the nearby graph shows, the disparity between the predicted temperature increases and real-world evidence has only grown in the past 20 years.

“Consensus” science that ignores reality can have tragic consequences if cures are ignored or promising research is abandoned. The climate-change consensus is not endangering lives, but the way it imperils economic growth and warps government policy making has made the future considerably bleaker. The recent Obama administration announcement that it would not provide aid for fossil-fuel energy in developing countries, thereby consigning millions of people to energy poverty, is all too reminiscent of the Sick and Health Board denying fresh fruit to dying British sailors.

Another questioner of the consensus is Dr. Koonin, who was undersecretary for science in the Energy Department during President Barack Obama’s first term and is currently director of the Center for Urban Science and Progress at New York University. His previous positions include professor of theoretical physics and provost at Caltech, as well as chief scientist of BP, where his work focused on renewable and low-carbon energy technologies.

But—here’s the catch—those questions are the hardest ones to answer. They challenge, in a fundamental way, what science can tell us about future climates.

Firstly, even though human influences could have serious consequences for the climate, they are physically small in relation to the climate system as a whole. For example, human additions to carbon dioxide in the atmosphere by the middle of the 21st century are expected to directly shift the atmosphere’s natural greenhouse effect by only 1% to 2%. Since the climate system is highly variable on its own, that smallness sets a very high bar for confidently projecting the consequences of human influences.

A second challenge to “knowing” future climate is today’s poor understanding of the oceans. The oceans, which change over decades and centuries, hold most of the climate’s heat and strongly influence the atmosphere. Unfortunately, precise, comprehensive observations of the oceans are available only for the past few decades; the reliable record is still far too short to adequately understand how the oceans will change and how that will affect climate.

A third fundamental challenge arises from feedbacks that can dramatically amplify or mute the climate’s response to human and natural influences. One important feedback, which is thought to approximately double the direct heating effect of carbon dioxide, involves water vapor, clouds and temperature.

Climate Science Is Not Settled

Another group questioning what some consider “settled science”:

  • Claude Allegre, former director of the Institute for the Study of the Earth, University of Paris;
  • J. Scott Armstrong, cofounder of the Journal of Forecasting and the International Journal of Forecasting;
  • Jan Breslow, head of the Laboratory of Biochemical Genetics and Metabolism, Rockefeller University;
  • Roger Cohen, fellow, American Physical Society;
  • Edward David, member, National Academy of Engineering and National Academy of Sciences;
  • William Happer, professor of physics, Princeton;
  • Michael Kelly, professor of technology, University of Cambridge, U.K.;
  • William Kininmonth, former head of climate research at the Australian Bureau of Meteorology;
  • Richard Lindzen, professor of atmospheric sciences, MIT;
  • James McGrath, professor of chemistry, Virginia Technical University;
  • Rodney Nichols, former president and CEO of the New York Academy of Sciences;
  • Burt Rutan, aerospace engineer, designer of Voyager and SpaceShipOne;
  • Harrison H. Schmitt, Apollo 17 astronaut and former U.S. senator;
  • Nir Shaviv, professor of astrophysics, Hebrew University, Jerusalem;
  • Henk Tennekes, former director, Royal Dutch Meteorological Service;
  • Antonio Zichichi, president of the World Federation of Scientists, Geneva.

Although the number of publicly dissenting scientists is growing, many young scientists furtively say that while they also have serious doubts about the global-warming message, they are afraid to speak up for fear of not being promoted—or worse. They have good reason to worry. In 2003, Dr. Chris de Freitas, the editor of the journal Climate Research, dared to publish a peer-reviewed article with the politically incorrect (but factually correct) conclusion that the recent warming is not unusual in the context of climate changes over the past thousand years. The international warming establishment quickly mounted a determined campaign to have Dr. de Freitas removed from his editorial job and fired from his university position. Fortunately, Dr. de Freitas was able to keep his university job.

This is not the way science is supposed to work, but we have seen it before—for example, in the frightening period when Trofim Lysenko hijacked biology in the Soviet Union. Soviet biologists who revealed that they believed in genes, which Lysenko maintained were a bourgeois fiction, were fired from their jobs. Many were sent to the gulag and some were condemned to death.

Why is there so much passion about global warming, and why has the issue become so vexing that the American Physical Society (APS), from which Dr. Giaever resigned a few months ago, refused the seemingly reasonable request by many of its members to remove the word “incontrovertible” from its description of a scientific issue?

There are several reasons, but a good place to start is the old question
“cui bono?” Or the modern update, “Follow the money.”

Happer: Cloud Radiation Matters, CO2 Not So Much (2025)

This month van Wijngaarden and Happer published a new paper Radiation Transport in Clouds.

Last year William Happer spoke on Radiation Transfer in Clouds at the EIKE conference, and the video is above.  For those preferring to read, below is a transcript from the closed captions along with some key exhibits.  I left out the most technical section in the latter part of the presentation. Text in italics with my bolds.

William Happer: Radiation Transfer in Clouds

People have been looking at clouds for a very long time in a quantitative way. This is one of the first quantitative studies, done around 1800, and this is John Leslie, a Scottish physicist who built this gadget. He called it an aethrioscope, but basically it was designed to figure out how effective the sky was in causing frost. If you live in Scotland you worry about frost. It consisted of two glass bulbs with a very thin capillary attachment between them, and there was a little column of alcohol here.

The bulbs were full of air, so if one bulb got a little bit warmer it would force the alcohol up through the capillary, and if it got colder it would suck the alcohol back. He set this device out under the clear sky, and he described the sensibility of the instrument as very striking: “the liquor incessantly falls and rises in the stem with every passing cloud. In fine weather the aethrioscope will seldom indicate a frigorific impression of less than 30 or more than 80 millesimal degrees.” He’s talking about how high this column of alcohol would go up and down if the sky became overclouded. The reading “may be reduced to as low as 15, or even five degrees, when the congregated vapours hover over the hilly tracts,” which refers to how much less the sky cools. We don’t speak English that way anymore, but I love it.

The point was that even in 1800, Leslie and his colleagues knew very well that clouds have an enormous effect on the cooling of the earth. And of course anyone who has a garden knows that on a clear, calm night you’re likely to get frost and lose your crops. So this was a quantitative study of that.

Now it’s important to remember that if you go out today, the atmosphere is full of two types of radiation. There’s sunlight, which you can see, and then there is the thermal radiation generated by greenhouse gases, by clouds, and by the surface of the Earth. You can’t see thermal radiation, but you can feel it, if it’s intense enough, by its warming effect. And these curves practically don’t overlap, so we’re really dealing with two completely different types of radiation.

Sunlight scatters very nicely, not only off clouds but also off molecules; that’s the blue sky, Rayleigh scattering. Thermal radiation, by contrast, doesn’t scatter off molecules at all: greenhouse gases are very good at absorbing thermal radiation, but they don’t scatter it. Clouds, however, do scatter thermal radiation. Plotted here is the probability of finding a photon of sunlight within a given interval on a logarithmic wavelength scale.
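
To make the point about non-overlapping spectra concrete, here is a minimal sketch (my own illustration, not part of the talk) that evaluates the Planck spectral radiance for a solar-temperature source and a terrestrial one and reports where each peaks; the 5,800 K and 288 K temperatures are assumed round figures.

```python
import numpy as np

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
KB = 1.381e-23  # Boltzmann constant (J/K)

def planck(wavelength_m, temp_k):
    """Planck spectral radiance B_lambda (W per steradian per m^3)."""
    with np.errstate(over="ignore"):  # exp overflows harmlessly to inf far from the peak
        return (2.0 * H * C**2 / wavelength_m**5) / np.expm1(H * C / (wavelength_m * KB * temp_k))

wavelengths = np.logspace(-7, -4, 2000)   # 0.1 to 100 micrometers

for label, temp in [("solar (~5800 K)", 5800.0), ("terrestrial (~288 K)", 288.0)]:
    peak_um = wavelengths[np.argmax(planck(wavelengths, temp))] * 1e6
    print(f"{label}: spectrum peaks near {peak_um:.1f} micrometers")

# Wien's law check: the solar curve peaks near 0.5 micrometers (visible light), the
# terrestrial curve near 10 micrometers (thermal infrared) -- two essentially separate bands.
```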

Since Leslie’s day, two types of instruments have been developed to do what he did more precisely. One of them is called a pyranometer, and it is designed to measure sunlight coming down onto the Earth on a day like this. You put this instrument out there and it reads the flux of sunlight coming down. It’s designed to see sunlight coming in from every direction, so it doesn’t matter at which angle the sun is shining; it’s calibrated to see them all.

Let me show you a measurement by a pyranometer. This is actually a curve from a sales brochure of a company that will sell you one of these devices. It’s comparing two types of detectors, and as you can see they’re very good; you can hardly tell the difference. The point is that if you look on a clear day with no clouds, you see sunlight beginning to increase at dawn; it peaks at noon and then goes down to zero, and there’s no sunlight at night. So for half of the day, over most of the Earth, there’s no sunlight in the atmosphere.

Here’s a day with clouds, just a few days later, with the days of the year shown going across. You can see that every time a cloud goes by, the intensity hitting the ground goes down. With a little clear sky it goes up, then down, then up, and so on. On average, on this particular day you get a lot less sunlight than you did on the clear day.

But you know, nature is surprising. Einstein had this wonderful quote: God is subtle, but he is not malicious. He meant that nature does all sorts of things you don’t expect, so let me show you what happens on a partly cloudy day. This is data taken near Munich. The blue curve is the measurement and the red curve is the intensity on the ground if there were no clouds. This is a partly cloudy day, and you can see there are brief periods when the sunlight on the detector is much brighter than it would be on a clear day. That’s because, coming through clouds, you get focusing from the edges of the cloud pointing down toward your detector; it means somewhere else there’s less radiation reaching the ground. This is rather surprising to most people. I was very surprised to learn about it, but it just shows that the actual details of climate are a lot more subtle than you might think.

We know that visible light only happens during the daytime and stops at night. There’s a second important type of radiation, the thermal radiation, which is measured by a similar device. You have a silicon window that passes infrared, which is below the band gap of silicon, so it goes through as though the window were transparent. Then there are some interference filters to give further discrimination against sunlight. So sunlight practically doesn’t go through this at all; they call it solar blind since it doesn’t see the Sun.

But it sees thermal radiation very clearly, and there is a big difference between this device and the sunlight-sensing device I showed you: most of the time this one is radiating up, not down. Out in the open air this detector normally gets colder than the body of the instrument, and so it’s carefully calibrated to let you compare the balance of downwelling radiation with upwelling radiation. The upwelling is normally greater than the downwelling.

I’ll show you some measurements of the downwelling flux here; these were actually taken in Greenland, at Thule, and the vertical axis is in watts per square meter. The first thing to notice is that the radiation continues day and night: if you look at the output of the pyrgeometer, you can’t tell whether it’s day or night, because the atmosphere is just as bright at night as it is during the day. However, the big difference is clouds: on a cloudy day you get a lot more downwelling radiation than you do on a clear day. Here’s nearly a full day of clear weather, and there are several more days of clear weather. Then suddenly it gets cloudy, and the radiation rises, because the bottoms of the clouds are relatively warm, at least compared to the clear sky. I think if you put the numbers in, this cloud bottom is around 5° Centigrade, so it was a fairly low cloud; it was summertime in Greenland, and this compares to about minus 5° for the clear sky.
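
As a rough check on why a low, warm cloud base raises the downwelling flux, here is a minimal sketch (my own illustration, not from the talk) that treats the cloud base and the clear-sky effective emitter as blackbodies at the 5°C and minus 5°C figures just quoted; real emissivities and clear-sky spectra are more complicated, so treat it only as an order-of-magnitude illustration.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant (W m^-2 K^-4)

def blackbody_flux(temp_c):
    """Hemispheric blackbody emission at the given Celsius temperature."""
    temp_k = temp_c + 273.15
    return SIGMA * temp_k**4

cloudy = blackbody_flux(5.0)    # warm, low cloud base
clear = blackbody_flux(-5.0)    # colder effective clear-sky emitter

print(f"cloud base at +5 C : {cloudy:.0f} W/m^2")
print(f"clear sky at -5 C  : {clear:.0f} W/m^2")
print(f"extra downwelling under cloud: {cloudy - clear:.0f} W/m^2")
# Roughly 340 versus 290 W/m^2: several tens of watts per square meter more
# downwelling radiation under the cloud, consistent with the pyrgeometer trace.
```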

So there’s a lot of data out there, and there really is downwelling radiation; there is no question about that, you measure it routinely. And now you can do the same thing looking down from satellites. This is a picture that I downloaded from Princeton a few weeks ago to get ready for this talk; it was taken at 6 PM, so it was already dark in Europe. It is a picture of the Earth from a geosynchronous satellite parked over Ecuador. You are looking down on the Western Hemisphere, and this is a filtered image of the Earth in blue light at 0.47 micrometers. So it’s a nice blue color, not so different from the sky, and it’s dark where the sun has set. There’s still a fair amount of sunlight over the United States and further west.

Here, at exactly the same time and from the same satellite, is the infrared radiation coming up at 10.3 micrometers, which is right in the middle of the infrared window where there’s not much greenhouse gas absorption; there’s a little bit from water vapor, but very little, and only a trivial amount from CO2.

As you can see, you can’t tell which side is night and which side is day. So even though the sun has set over here, the Earth is still glowing nice and bright. There is one pesky difference: in the visible image you are looking at reflected sunlight over the Intertropical Convergence Zone, where lots of high clouds have been pushed up by the convection in the tropics, so that means more visible light there. In the infrared image you are looking at emission from those cloud tops, so the same clouds mean less thermal light. White means less light in one image and more light in the other, so you have to calibrate your thinking.

But the striking thing about all of this is that the Earth is covered with clouds; you have to look hard to find a clear spot. Maybe roughly half of the Earth is clear at any given time, but most of it is covered with clouds. So if anything governs the climate, it is clouds, and that’s one of the reasons I so admire the work that Svensmark and Nir Shaviv have done: they’re focusing on the most important mechanism of the Earth. It’s not greenhouse gases, it’s clouds. You can see that here.

Now, this is a single frequency; let me show you what happens if you look down from a satellite and look at the whole spectrum. This is the spectrum of light coming up over the Sahara Desert, measured from a satellite. Here is the infrared window; there is the 10.3 microns I mentioned in the previous slide; it’s a clear region. So radiation in this region can get from the surface of the Sahara right up to outer space.

Notice that the units on these scales are very different: over the Sahara the top unit is 200, it’s 150 over the Mediterranean, and it’s only 60 over the South Pole. But at least the Mediterranean and the Sahara are roughly similar. The three curves on the right are observations from satellites, and the three curves on the left are modeling calculations that we have done. The point here is that you can hardly tell the difference between a model calculation and the observed radiation.

So it’s really straightforward to calculate radiation transfer. If someone quotes you a number in watts per square meter, you should take it seriously; that’s probably a good number. If they tell you a temperature, you don’t know what to make of it, because there’s a big step in going from watts per square meter to a temperature change. All the mischief in the whole climate business is in going from watts per square meter to degrees Centigrade or Kelvin.

Now I will say just a few words about clear sky, because that is the simplest case; then we’ll get on to clouds, the topic of this talk. This is a calculation with the same codes I showed you in the previous slide, which as you saw work very well. It’s worth spending a little time here, because this is the famous Planck curve that was the birth of quantum mechanics. There is Max Planck, who figured out the formula for that curve and why it is that way. This is what the Earth would radiate at 15° Centigrade if there were no greenhouse gases: you would get this beautiful smooth curve, the Planck curve. If you actually look at the Earth from satellites, you get the jagged black curve. We like to call that the Schwarzschild curve, because Karl Schwarzschild was the person who showed how to do that calculation. Tragically he died during World War I, a great loss to science.

There are two colored curves to which I want to draw your attention. The green curve is what the Earth would radiate to space if you took away all the CO2, so it differs from the black curve only in the CO2 band; this is the bending band of CO2, which is the main greenhouse effect of CO2. There is a little additional effect here from the asymmetric stretch, but it doesn’t contribute very much. Then there is a red curve, and that is what happens if you double CO2.

So notice the huge asymmetry. Taking all 400 parts per million of CO2 away from the atmosphere causes an enormous change of 30 watts per square meter: the difference between the green value of 307 and the black value of 277. But if you double CO2, you make practically no change. This is the famous saturation of CO2. At the levels we have now, doubling CO2, a 100% increase, only changes the radiation to space by 3 watts per square meter: the difference between 274 for the red curve and 277 for the curve for today. So it’s a tiny amount: for a 100% increase in CO2, about a 1% decrease of radiation to space.

That allows you to estimate the feedback-free climate sensitivity in your head. Doubling CO2 is a 1% decrease of radiation to space. If that happens, the Earth will start to warm up. But it radiates as the fourth power of the temperature, so the absolute temperature only has to rise by one-quarter of a percent to restore the balance. A 1% forcing in watts per square meter translates into one-quarter of a percent of temperature in Kelvin. Since the ambient temperature is about 300 Kelvin (actually a little less), a quarter of one percent of that is about 0.75 Kelvin. So the feedback-free equilibrium climate sensitivity is less than one degree; it’s about 0.75° Centigrade. It’s a number you can do in your head.
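
That back-of-the-envelope arithmetic can be written out explicitly. Here is a minimal sketch (my own illustration, not part of the talk) using the figures quoted above, 3 W/m² of forcing against roughly 277 W/m² outgoing, with a surface temperature of 288 K assumed as a typical global-mean value just below the 300 K round figure used in the talk, and ignoring all feedbacks as the speaker does.

```python
# Feedback-free sensitivity from Stefan-Boltzmann: F is proportional to T^4,
# so dT / T = (1/4) * dF / F for a small change in outgoing flux.

outgoing_flux = 277.0   # W/m^2, clear-sky radiation to space quoted in the talk
forcing = 3.0           # W/m^2 change from doubling CO2, quoted in the talk
surface_temp = 288.0    # K, typical global mean surface temperature (assumption)

fractional_forcing = forcing / outgoing_flux          # about one percent
delta_t = surface_temp * fractional_forcing / 4.0     # one quarter of that, in Kelvin

print(f"fractional forcing: {fractional_forcing:.2%}")
print(f"feedback-free warming estimate: {delta_t:.2f} K")
# Prints roughly 1.08% and 0.78 K, matching the "less than one degree" figure.
```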

So when you hear about 3° Centigrade instead of 0.75° C, that’s a factor of four, all of which is positive feedback. Is there really that much positive feedback? Most feedbacks in nature are negative; the famous Le Chatelier principle says that if you perturb a system, it reacts in a way that dampens the perturbation, not increases it. There are a few positive-feedback systems we’re familiar with; high explosives, for example, have positive feedback. If the Earth’s climate were like other positive-feedback systems, which are all highly explosive, it would have exploded a long time ago. But the climate has never done that, so the empirical observational evidence from geology is that the climate behaves like most other feedback systems: the feedback is probably negative. I leave that thought with you, and let me stress again:

This is for clear skies, no clouds; if you add clouds, all that does is
suppress the effects of changes in the greenhouse gases.

So now let’s talk about clouds and the theory of clouds, since we’ve already seen that clouds are very important. Here is the formidable equation of transfer, which has been around since Schwarzschild’s day. Some of the symbols here relate to the intensity; another term represents scattering. If thermal radiation hits a greenhouse gas molecule, it comes in and is immediately absorbed; there’s no scattering at all. If it hits a cloud particle, it will scatter this way or that way, or maybe even backwards.

All of that is described by this integral: you have incoming light in one direction and outgoing light in a second direction. At the same time you have thermal emission: the warm particles of the cloud are emitting radiation, creating photons which come out and add to the Earth’s glow. This is represented by two parameters. Even a single cloud particle has an albedo, the fraction of the radiation hitting it that is scattered as opposed to absorbed and converted to heat. It is a very important parameter: for visible light and white clouds, typically 99% of the encounters are scattering. For thermal radiation it is much less; water scatters thermal radiation only about half as efficiently as it does the shorter wavelengths.
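
The slide itself is not reproduced here. For readers who want to see it written out, one standard plane-parallel form of the transfer equation with both scattering and thermal emission (a textbook form, not necessarily the exact notation on the slide) is

$$
\mu\,\frac{dI_\nu(\tau,\mu)}{d\tau} \;=\; I_\nu(\tau,\mu)\;-\;(1-\omega_\nu)\,B_\nu(T)\;-\;\frac{\omega_\nu}{2}\int_{-1}^{1} p(\mu,\mu')\,I_\nu(\tau,\mu')\,d\mu',
$$

where $I_\nu$ is the spectral intensity, $\tau$ the optical depth, $\mu$ the direction cosine, $B_\nu(T)$ the Planck function, $\omega_\nu$ the single-scattering albedo just described, and $p$ the scattering phase function.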

The big problem is that, in spite of all the billions of dollars we have spent, these things are still not well determined; they would have been known if there hadn’t been this crazy fixation on carbon dioxide and greenhouse gases. We have neglected working on the areas that are really important, as opposed to the trivial effects of greenhouse gases. Attenuation in a cloud is both scattering and absorption. And of course you have to solve these equations for every different frequency of the light because, especially for molecules, there is a strong frequency dependence.

In summary, let me show you this photo, which was taken by Harrison Schmitt, a friend of mine, on the Apollo 17 moonshot. It was taken in December, and looking at it you can see that they were south of Madagascar when the photograph was taken. You can see it was winter, because the Intertropical Convergence Zone is quite a bit south of the Equator; it has moved well south of India and Saudi Arabia. By good luck they had the sun behind them, so they had the whole Earth illuminated.

There is a lot of information there, and again let me draw your attention to how much of the Earth is covered with clouds. Only part of the Earth, on the order of half, can actually be directly affected by greenhouse gases. The takeaway message is that clouds and water vapor are much more important than greenhouse gases for the Earth’s climate. The second point is the reason they are much more important: doubling CO2, as I indicated in the middle of the talk, only causes a 1% difference in radiation to space. It is a very tiny effect because of saturation. People like to say that is not so, but you can’t really argue that one; even the IPCC gets the same numbers that we do.

And you also know that covering half of the sky with clouds will decrease solar heating by 50%. So for clouds it’s one to one; for greenhouse gases it’s a hundred to one. If you really want to affect the climate, you want to do something to the clouds. If you are alarmed about the warming that has happened, you will have a very hard time making any difference with Net Zero for CO2.

So one would hope that, with all the money we have spent trying to turn CO2 into a demon, some good science has come out of it. From my point of view, this scattering theory is a small part of that; I think it will still be here long after the craze over greenhouse gases has gone away. I hope there will be other things too. You can point to the better instrumentation that we have, satellite instrumentation as well as ground instrumentation; that has been a good investment of money. But the money we have spent on supercomputers and modeling has been completely wasted, in my view.

Lacking Data, Climate Models Rely on Guesses

A recent question was posed on  Quora: Say there are merely 15 variables involved in predicting global climate change. Assume climatologists have mastered each variable to a near perfect accuracy of 95%. How accurate would a climate model built on this simplified system be?  Keith Minor has a PhD in organic chemistry, PhD in Geology, and PhD in Geology & Paleontology from The University of Texas at Austin.  He responded with the text posted below in italics with my bolds and added images.

I like the answers to this question, and Matthew stole my thunder on the climate models not being statistical models. If we take the question and its assumptions at face value, one unsolvable overriding problem, and a limit to developing an accurate climate model that is rarely ever addressed, is the sampling issue. Knowing 15 parameters to 99+% accuracy won’t solve this problem.

The modeling of the atmosphere is a boundary condition problem. No, I’m not talking about frontal boundaries. Thermodynamic systems are boundary condition problems, meaning that the evolution of a thermodynamic system is dependent not only on the conditions at t > 0 (is the system under adiabatic conditions, isothermal conditions, do these conditions change during the process, etc.?), but also on the initial conditions at t = 0 (sec, whatever). Knowing almost nothing about what even a fraction of a fraction of the molecules in the atmosphere are doing at t = 0 or at t > 0 is a huge problem to accurately predicting what the atmosphere will do in the near or far future. [See footnote at end on this issue.]

Edward Lorenz attempted to model the thermodynamic behavior of the atmosphere using models that took into account twelve variables (instead of fifteen, as posed by the questioner), and found (not surprisingly) that there was a large variability in the models. Seemingly inconsequential perturbations would lead to drastically different results, which diverged (a euphemism for “got even worse”) the longer the models were run out in time (they still do). This presumably is the origin of Lorenz’s phrase “the butterfly effect.” He probably meant it to be taken more as an instructive hypothetical than as a literal effect, as it is too often taken today. He was merely illustrating the sensitivity of the system to the values of the parameters, not equating it to the probability of outcomes, chaos theory, etc., which is how the term has come to be understood. This divergence over time is bad for climate models, which try to predict the climate decades from now. Just look at the divergence of hurricane “spaghetti” models, which operate on a multiple-week scale.
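
That sensitivity is easy to see numerically. Below is a minimal sketch (my own illustration, not part of Dr. Minor’s answer) using Lorenz’s later three-variable 1963 system rather than the twelve-variable model mentioned above; it integrates two trajectories whose starting points differ by one part in a million and reports how far apart they drift.

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the three-variable Lorenz (1963) system."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(state, dt=0.01, steps=3000):
    """Advance the state with a simple fourth-order Runge-Kutta scheme."""
    for _ in range(steps):
        k1 = lorenz63(state)
        k2 = lorenz63(state + 0.5 * dt * k1)
        k3 = lorenz63(state + 0.5 * dt * k2)
        k4 = lorenz63(state + dt * k3)
        state = state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return state

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])   # perturb one variable by one part in a million

for steps in (500, 1500, 3000):      # 5, 15, and 30 model time units
    gap = np.linalg.norm(integrate(a, steps=steps) - integrate(b, steps=steps))
    print(f"after {steps} steps the two trajectories differ by {gap:.3g}")
# The separation grows from microscopic to the size of the attractor itself,
# which is the point about sensitivity to initial conditions.
```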

The sources of variability include:

♦  the inability of the models to handle water (the most important greenhouse gas in the atmosphere, not CO2) and processes related to it;
♦  e.g., models still can’t handle the formation and non-formation of clouds;
♦  the non-linearity of thermodynamic properties of matter (which seem to be an afterthought, especially in popular discussions regarding the roles that CO2 plays in the atmosphere and biosphere), and
♦  the always-present sampling problem.

While in theory it is possible to know what a statistically significant number of the air and water molecules are doing at any point in time (that would be a lot of atoms and molecules!), a statistically significant sample of air molecules is certainly not obtained by releasing balloons twice a day from ninety-some-odd weather stations in the US and territories, plus the data from commercial aircraft, plus all of the weather data from around the world. Doubling this number wouldn’t help, i.e., it wouldn’t make any difference. There are some blind spots, though, such as northeast Texas, that might benefit from having a radar in the area. So you have to weigh the cost of sampling more of the atmosphere against the 0% increase in forecasting accuracy (within experimental error) that you would get by doing so.

I’ll go out on a limb and say that the NWS (National Weather Service) is actually doing a pretty good job in its 5-day forecasts with the current data and technologies that it has (e.g., S-band radar), and the local meteorologists use their years of experience and judgment to refine the forecasts for their viewing areas. The old joke is that a meteorologist’s job is the one job where you can be wrong more than half the time and still keep your job, but everyone knows that they go to work most, if not all, days with one hand tied behind their back, and sometimes two! The forecasts are not that far off on average, and so meteorologists get my unconditional respect.

In spite of these daunting challenges, there are certainly a number of areas in weather forecasting that can be improved by increased sampling, especially on a local scale. For example, for severe weather outbreaks, the CASA project is being implemented using multiple, shorter range radars that can get multiple scan directions on nearby severe-warned cells simultaneously. This resolves the problem caused by the curvature of the Earth as well as other problems associated with detecting storm-scale features tens or hundreds of miles away from the radar. So high winds, hail, and tornadoes are weather events where increasing the local sampling density/rate might help improve both the models and forecasts.

Prof. Wurman at OU has been doing this for decades with his pioneering work with mobile radar (the so-called “DOWs”). Let’s not leave out the other researchers who have also been doing this for decades. The strategy of collecting data on a storm from multiple directions at short distances, coupled with supercomputer capabilities, has been paying off for a number of years. As a recent example, Prof. Orf at UW Madison, with his simulation of the May 24th, 2011 El Reno, OK tornado (you’ve probably seen it on the Internet), has shed light on some of the “black box” aspects of how tornadoes form. [Video below is Leigh Orf’s 1.5-minute segment for the 2018 Blue Waters Symposium plenary session. It summarizes, in 90 seconds, some of the team’s accomplishments on the Blue Waters supercomputer over the past five years.]

Prof. Orf’s simulation is just that, a simulation, and the resolution is around ~10 m (~33 feet), but it illustrates how increased targeted sampling can be effective, at least in understanding the complex thermodynamic processes occurring within a storm. Colleagues have argued that the small improvements in warning times over the last couple of decades are really due more to the heavy spotter presence these days than to case studies of severe storms. That may be true. However, in test cases of the CASA system, it picked out subtle boundaries along which the storms fired that went unnoticed with the current network of radars. So I’m optimistic about increased targeted sampling for use in an early-warning system.

These two examples bring up a related problem: too much data! As a local meteorologist commented at a TESSA meeting, one of the issues with CASA that will have to be resolved is how to handle and process the tremendous amounts of data that will be generated during a severe weather outbreak. This is different from a research project where you can take your data back to the “lab.” In a real-time system such as CASA, you need to be able to process the volumes of data rapidly so a meteorologist can quickly make a decision and get that life-saving info to the public. This data-volume issue may be less of a problem for those using the data to develop climate models.

So, back to the Quora question. With regard to a cost-effective (“cost-effective” being the operative term) climate model or models (say an ensemble model) that would “verify” say 50 years from now, the sampling issue is ever present, and likely cost-prohibitive at the level needed to make the sampling statistically significant. And will the climatologist be around in 50 years to be “hoisted with their own petard” when the climate model is proven wrong? The absence of accountability is the other problem with these long-range models in which many put so much faith.

But don’t stop using or trying to develop better climate models. Just be aware of what variables they include, how well they handle the parameters, and what their limitations are. How accurate would a climate model built on this simplified system [edit: of 15 well-defined variables (to 95% confidence level)] be? Not very!

My Comment

As Dr. Minor explains, powerful modern computers can process detailed observational data to simulate and forecast storm activity. There are more such tools for preparing for and adapting to extreme weather events, which are normal in our climate system and beyond our control. He also explains why long-range global climate models presently have major limitations for use by policymakers.

Footnote Regarding Initial Conditions Problem

What About the Double Pendulum?

Trajectories of a double pendulum

A comment by tom0mason alerted me to the science demonstrated by the double compound pendulum, that is, a second pendulum attached to the ball of the first one. It consists entirely of two simple objects functioning as pendulums, only now each is influenced by the behavior of the other.

Lo and behold, you observe that a double pendulum in motion produces chaotic behavior. In a remarkable achievement, complex equations have been developed that can and do predict the positions of the two balls over time, so in fact the movements, although chaotic, are not utterly unpredictable: with considerable effort they can be determined. The equations and descriptions are at Wikipedia: Double Pendulum.

Long exposure of double pendulum exhibiting chaotic motion (tracked with an LED)

But here is the kicker, as described in tom0mason’s comment:

If you arrive to observe the double pendulum at an arbitrary time after the motion has started from an unknown condition (unknown height, initial force, etc) you will be very taxed mathematically to predict where in space the pendulum will move to next, on a second to second basis. Indeed it would take considerable time and many iterative calculations (preferably on a super-computer) to be able to perform this feat. And all this on a very basic system of known elementary mechanics.
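
That point can be demonstrated directly. Below is a minimal sketch (my own illustration, not from the comment or the Wikipedia article) that integrates the standard point-mass double-pendulum equations for two releases differing by a hundredth of a degree; the masses, lengths, and release angles are arbitrary assumptions.

```python
import numpy as np

G = 9.81          # gravitational acceleration (m/s^2)
M1 = M2 = 1.0     # point masses (kg)
L1 = L2 = 1.0     # rod lengths (m)

def derivs(state):
    """Equations of motion for a point-mass double pendulum (angles measured from vertical)."""
    th1, w1, th2, w2 = state
    delta = th1 - th2
    den = 2 * M1 + M2 - M2 * np.cos(2 * th1 - 2 * th2)
    a1 = (-G * (2 * M1 + M2) * np.sin(th1)
          - M2 * G * np.sin(th1 - 2 * th2)
          - 2 * np.sin(delta) * M2 * (w2**2 * L2 + w1**2 * L1 * np.cos(delta))) / (L1 * den)
    a2 = (2 * np.sin(delta)
          * (w1**2 * L1 * (M1 + M2) + G * (M1 + M2) * np.cos(th1)
             + w2**2 * L2 * M2 * np.cos(delta))) / (L2 * den)
    return np.array([w1, a1, w2, a2])

def run(th1_deg, seconds=20.0, dt=0.001):
    """Integrate with fourth-order Runge-Kutta from rest and return the final state."""
    state = np.array([np.radians(th1_deg), 0.0, np.radians(120.0), 0.0])
    for _ in range(int(seconds / dt)):
        k1 = derivs(state)
        k2 = derivs(state + 0.5 * dt * k1)
        k3 = derivs(state + 0.5 * dt * k2)
        k4 = derivs(state + dt * k3)
        state = state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return state

a = run(90.00)   # released with the upper arm at 90 degrees
b = run(90.01)   # released a hundredth of a degree away

print("final angles, run a (deg):", np.degrees(a[[0, 2]]).round(1))
print("final angles, run b (deg):", np.degrees(b[[0, 2]]).round(1))
# After 20 simulated seconds the two runs are nowhere near each other, even though
# the equations are exact and the initial difference was only 0.01 degrees.
```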

Our Chaotic Climate System

IPCC Crusade Built on Science Mistakes

“Mistake” definition (American Heritage Dictionary)

Noun

  1. An error or fault resulting from defective judgment, deficient knowledge, or carelessness.
  2. A misconception or misunderstanding.

Five Major IPCC Science Mistakes

♦  Surface station records have warmed mostly from urban heat sources, not IR-active gases.

♦  Solar climate forcing varies more than IPCC admits.

♦  Experiments show more CO2 does not make air warmer.

♦  On all time scales temperature changes lead and CO2 changes follow.

♦  IPCC climate models exclude natural climate factors to blame all warming on GHGs.

Mistakes on Temperature Records and Solar Forcing

The first two misconceptions are described in a recent paper by CERES (Center for Environmental Research and Earth Sciences).  My post below provides the details.

Overview of CERES Study

Our review suggests that the IPCC reports have inadequately accounted for two major scientific concerns when they were evaluating the causes of global warming since the 1850s:

1. The global temperature estimates used in the IPCC reports are contaminated by urban warming biases.
2.  The estimates of solar activity changes since the 1850s considered by the IPCC substantially downplayed a possible large role for the Sun.

We conclude that it is not scientifically valid for the IPCC to rule out the possibility that global warming might be mostly natural.

Fatal Flaw Discredits IPCC Science

By way of John Ray comes this Spectator Australia article A basic flaw in IPCC science.  Excerpts in italics with my bolds and added images.

Detailed research is underway that threatens to undermine the foundations of the climate science promoted by the IPCC since its First Assessment Report in 1992. The research is re-examining the rural and urban temperature records in the Northern Hemisphere that are the foundation for the IPCC’s estimates of global warming since 1850. The research team has been led by Dr Willie Soon (a Malaysian solar astrophysicist associated with the Smithsonian Institute for many years) and two highly qualified Irish academics – Dr Michael Connolly and his son Dr Ronan Connolly. They have formed a climate research group CERES-SCIENCE. Their detailed research will be a challenge for the IPCC 7th Assessment Report due to be released in 2029 as their research results challenge the very foundations of IPCC science.

The climate warming trend published by the IPCC is a continually updated graph based on the temperature records of Northern Hemisphere land surface temperature stations dating from the mid-19th century. The latest IPCC 2021 report uses data for the period 1850-2018. The IPCC’s selection of Northern Hemisphere land surface temperature records is not in question and is justifiable: the Northern Hemisphere records provide the best database for this period, while the Southern Hemisphere land temperature records are not as extensive and are sparse for the 19th and early 20th centuries. It is generally agreed that the urban temperature data are significantly warmer than the rural data in the same region because of an urban warming bias. This bias is due to night-time re-radiation of the daytime solar energy absorbed by concrete and bitumen, which leads to higher urban night-time temperatures than in the nearby countryside. The IPCC acknowledges such a warming bias but alleges the effect is only 10 per cent and therefore does not significantly distort its published global warming trend lines.


Since 2018, Dr Soon and his partners have analysed the data from rural and urban temperature recording stations in China, the USA, the Arctic, and Ireland. The number of stations with reliable temperature records in these areas increased from very few in the mid-19th Century to around 4,000 in the 1970s before decreasing to around 2,000 by the 1990s. The rural temperature recording stations with good records peaked at 400 and are presently around 200.

Their analysis of individual stations needs to account for any variation in a station’s exposure to the Sun due to changes in its location, or shadowing due to the construction of nearby buildings, or nearby vegetation growth. The analysis of rural temperature stations is further complicated because, over time, many are encroached upon by nearby cities. Consequently, the data from such stations needs to be shifted at certain dates from the rural temperature database to either an intermediate database or a full urban database. An accurate analysis of the temperature records of each recording station is therefore a time-consuming task.


This new analysis of 4,000 temperature recording stations in China, the USA, the Arctic, and Ireland shows a warming trend of 0.89ºC per century in the urban stations, 1.61 times the warming trend of 0.55ºC per century in the rural stations. This difference is far larger than the 10 per cent divergence between urban and rural stations alleged in the IPCC reports, a divergence explained by a potential flaw in the IPCC’s methodology. The IPCC uses a technique called homogenisation that averages the rural and urban temperatures in a particular region. This method distorts the rural temperature records because over 75 per cent of the temperature records used in the homogenisation methodology come from urban stations. So a methodology that attempts to statistically identify and correct some biases that may be in the raw data in effect blends urban readings into the rural dataset. The result is biased because it degrades the actual values of each rural temperature station. In contrast, Dr Soon and his coworkers avoided homogenisation, so the temperature trends they identify for each rural region are not distorted by the readings from nearby urban stations.
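
A toy calculation helps show why the blending matters. The sketch below is my own illustration, not the CERES team’s method: it takes the figures quoted in this article, 0.89ºC per century for urban stations, 0.55ºC per century for rural stations, and an urban share of over 75 per cent of the homogenised records, and computes a simple weighted average as a stand-in for the blending effect.

```python
URBAN_TREND = 0.89   # deg C per century, urban stations (figure quoted in the article)
RURAL_TREND = 0.55   # deg C per century, rural stations (figure quoted in the article)
URBAN_SHARE = 0.75   # fraction of station records that are urban (">75 per cent" per the article)

# A naive blend: weight each trend by its share of the station records.
blended = URBAN_SHARE * URBAN_TREND + (1 - URBAN_SHARE) * RURAL_TREND

print(f"rural-only trend    : {RURAL_TREND:.2f} C/century")
print(f"urban-weighted blend: {blended:.2f} C/century")
print(f"apparent inflation  : {100 * (blended / RURAL_TREND - 1):.0f}% above the rural trend")
# Roughly 0.8 C/century, about 46% above the rural-only figure. Real homogenisation
# algorithms are far more involved, but the weighting concern is the same in spirit.
```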


The rural temperature trend measured by this new research is 0.55ºC per century, indicating the Earth has warmed 0.9ºC since 1850. In contrast, the urban temperature trend is 0.89ºC per century, indicating a much higher warming of 1.5ºC since 1850. Consequently, a distorted urban warming trend has been used by the IPCC to quantify the warming of the whole of the Earth since 1850. The exaggeration is significant because the urban temperature database used by the IPCC only represents the temperatures on 3-4 per cent of the Earth’s land surface area, an area less than 2 per cent of the Earth’s total surface area. Dr Willie Soon and his research team are currently analysing the meta-history of 800 European temperature recording stations. When this is done, their research will be based on a very significant database of Northern Hemisphere rural and urban temperature records from China, the USA, the Arctic, Ireland, and Europe.

This new research has unveiled another flaw in the IPCC’s temperature narrative, because the trend lines in the team’s revised temperature datasets differ from those published by the IPCC. For example, the rural records now show a marked warming trend in the 1930s and 1940s, while there is only a slight warming trend in the IPCC dataset. The most significant difference is the existence of a marked cooling period in the rural dataset for the 1960s and 1970s that is almost absent in the IPCC’s urban dataset. This divergence upsets the common narrative that rising carbon dioxide levels control modern warming trends. For, if carbon dioxide levels are the driver of modern warming, how can a higher rate of increase in carbon dioxide levels exist within a cooling period in the 1960s and 1970s, while a lower rate of increase coincides with an earlier warming interval in the 1930s and 1940s? In other words, how can carbon dioxide levels increasing at 1.7 parts per million per decade cause a distinct warming period in the 1930s and 1940s, while a larger rate of increase of 10.63 parts per million per decade is associated with a distinct cooling period in the 1960s and 1970s? Consequently, the research of Willie Soon and his coworkers is discrediting not only the higher global warming trends specified in IPCC reports, but also the theory that rising carbon dioxide levels explain modern warming trends, a lynchpin of IPCC science for the last 25 years.

Willie Soon and his coworkers maintain that climate scientists need to consider other possible explanations for recent global warming. They point to the Sun, but the IPCC maintains that variations in Total Solar Irradiance (TSI) occur over eons, not over shorter periods such as the last few centuries. For that reason, the IPCC points to changes in greenhouse gases as the most obvious explanation for global warming since 1850. In contrast, Willie Soon and his coworkers maintain there can be short-term changes in solar activity and, for example, refer to a period of no sunspot activity that coincided with the Little Ice Age in the 17th century. They also point out that there is still no agreed average figure for Total Solar Irradiance despite 30 years of measurements taken by various satellites. Consequently, they contend research in this area is not settled.

The CERES-SCIENCE research project pioneered by Dr Willie Soon and the father-son Connolly team questions the validity of the high 1850-to-present global warming trends published by the IPCC since its first assessment report in 1990. The research also queries the IPCC narrative that rising greenhouse gas concentrations, particularly carbon dioxide, are the primary driver of global warming since 1850, a narrative that has been the foundation of IPCC climate science since its inception. It will be interesting to see how the IPCC's 7th Assessment Report, expected in 2029, treats this new research that questions the very basis of IPCC climate science.

The paper is The Detection and Attribution of Northern Hemisphere Land Surface Warming (1850–2018) in Terms of Human and Natural Factors: Challenges of Inadequate Data. 

Abstract

A statistical analysis was applied to Northern Hemisphere land surface temperatures (1850–2018) to try to identify the main drivers of the observed warming since the mid-19th century. Two different temperature estimates were considered—a rural and urban blend (that matches almost exactly with most current estimates) and a rural-only estimate. The rural and urban blend indicates a long-term warming of 0.89 °C/century since 1850, while the rural-only indicates 0.55 °C/century. This contradicts a common assumption that current thermometer-based global temperature indices are relatively unaffected by urban warming biases.

Three main climatic drivers were considered, following the approaches adopted by the Intergovernmental Panel on Climate Change (IPCC)’s recent 6th Assessment Report (AR6): two natural forcings (solar and volcanic) and the composite “all anthropogenic forcings combined” time series recommended by IPCC AR6. The volcanic time series was that recommended by IPCC AR6. Two alternative solar forcing datasets were contrasted. One was the Total Solar Irradiance (TSI) time series that was recommended by IPCC AR6. The other TSI time series was apparently overlooked by IPCC AR6. It was found that altering the temperature estimate and/or the choice of solar forcing dataset resulted in very different conclusions as to the primary drivers of the observed warming.

Our analysis focused on the Northern Hemispheric land component of global surface temperatures since this is the most data-rich component. It reveals that important challenges remain for the broader detection and attribution problem of global warming: (1) urbanization bias remains a substantial problem for the global land temperature data; (2) it is still unclear which (if any) of the many TSI time series in the literature are accurate estimates of past TSI; (3) the scientific community is not yet in a position to confidently establish whether the warming since 1850 is mostly human-caused, mostly natural, or some combination. Suggestions for how these scientific challenges might be resolved are offered.
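The "statistical analysis" described in the abstract is, at its core, a fit of the temperature series to candidate forcing series. As a rough sketch of that kind of detection-and-attribution regression (not the authors' code; the arrays below are synthetic placeholders standing in for the NH land temperatures and the AR6 solar, volcanic and anthropogenic forcings), one could write:

```python
# Minimal sketch of a linear detection/attribution fit: regress a temperature
# series onto candidate forcing series and inspect the fitted coefficients.
# All arrays are synthetic placeholders, not the NH land data or AR6 forcings.
import numpy as np

years = np.arange(1850, 2019)
n = len(years)

rng = np.random.default_rng(0)
solar    = 0.1 * np.sin(2 * np.pi * (years - 1850) / 11)             # toy 11-year cycle
volcanic = -np.abs(rng.normal(0, 0.2, n)) * (rng.random(n) < 0.05)    # occasional negative spikes
anthro   = np.linspace(0.0, 2.5, n)                                   # toy forcing ramp, W/m^2
temp     = 0.3 * anthro + 2.0 * solar + 0.5 * volcanic + rng.normal(0, 0.1, n)

# Ordinary least squares: temp ≈ X @ beta
X = np.column_stack([np.ones(n), solar, volcanic, anthro])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)

for name, b in zip(["intercept", "solar", "volcanic", "anthropogenic"], beta):
    print(f"{name:>13s}: {b:+.2f}")
```

Swapping in a different TSI series or a rural-only temperature series changes the fitted coefficients, which is the paper's central point about the sensitivity of attribution to the chosen inputs.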

Mistake on CO2 Warming Effect

Thomas Allmendinger is a Swiss physicist educated at ETH Zurich whose practical experience is in the fields of radiology and elementary particle physics.  His complete biography is here.

His independent research and experimental analyses of greenhouse gas (GHG) theory over the last decade have led to several published studies, including the latest summation, The Real Origin of Climate Change and the Feasibilities of Its Mitigation (2023), in the journal Atmospheric and Climate Sciences. The paper is a thorough and detailed discussion, from which I provide here the abstract and the excerpt describing the experiment.  Excerpts are in italics with my bolds and added images. Full post is Experimental Proof Nil Warming from GHGs.

Abstract

The present treatise represents a synopsis of six important previous contributions of the author concerning atmospheric physics and climate change. Since this issue is influenced by politics like no other, and since the greenhouse doctrine with CO2 as the culprit in climate change is predominant, the respective theory has to be outlined, revealing its flaws and inconsistencies.

But beyond that, the author's own contributions are the focus and are discussed in depth. The most eminent one concerns the discovery of the absorption of thermal radiation by gases, leading to warming-up, and implying a thermal radiation of gases which depends on their pressure. This delivers the final evidence that trace gases such as CO2 don't have any influence on the behaviour of the atmosphere, and thus on climate.

But the most useful contribution concerns the method which enables the determination of the solar absorption coefficient βs of coloured opaque plates. It delivers the foundations for modifying materials with respect to their capability for climate mitigation. Thereby, the main influence is due to the colouring, in particular of roofs, which should be painted, preferably light-brown (not white, for aesthetic reasons).

It must be clear that such a drive for brightening up the world would be the only chance of mitigating the climate, whereas the greenhouse doctrine related to CO2 has to be abandoned. However, a global climate model with forecasts cannot be aspired to, since this problem is too complex and since several climate zones exist.

4. Thermal Gas Absorption Measurements

If the warming-up behaviour of gases is to be determined by temperature measurements, interference by the walls of the gas vessel must be considered, since the walls exhibit a significantly higher heat capacity than the gas, which implies a slower warming-up rate. Since solid materials absorb thermal radiation more strongly than gases do, the risk exists that the walls of the vessel are directly warmed up by the radiation and subsequently transfer the heat to the gas. And finally, even the thin glass walls of the thermometers may disturb the measurements by absorbing thermal radiation.

For these reasons, square tubes with a relatively large cross-section (20 cm) were used, built from 3 cm thick Styrofoam plates and covered at the ends by thin plastic foils. In order to measure the temperature along the tube, mercury thermometers were mounted at three positions (bottom, middle, and top), their tips covered with aluminum foil. The test gases were supplied from steel cylinders equipped with reducing valves. They were introduced through a connector over approximately one hour, because the tube was not gastight and not sturdy enough for evacuation. The filling process was monitored by means of a hygrometer, since the air that had to be displaced was slightly humid. Afterwards, the tube was optimized by attaching adhesive foils and thin aluminum foils (see Figure 13). The equipment and the results are reported in [21].

Figure 13. Solar-tube, adjustable to the sun [21].

The initial measurements were made outdoors with twin tubes in sunlight. One tube was filled with air and the other with carbon dioxide. The temperature increased within a few minutes by approximately ten degrees until constant limiting temperatures were attained, simultaneously at all positions. Surprisingly, this was the case in both tubes, including the tube filled with ambient air. This result alone delivered the proof that the greenhouse theory cannot be true. Moreover, it prompted a more thorough investigation of the phenomenon using artificial, better defined light.

Figure 14. Heat-radiation tube with IR-spot [21].

Accordingly, the subsequent experiments were made using IR spots with wattages of 50 W, 100 W and 150 W, which are normally employed for terraria (Figure 14). The IR spot with 150 W in particular led to a considerably higher temperature increase of the enclosed gas than was the case when sunlight was applied, since its share of thermal radiation was higher. In this way, variable factors such as the nature of the gas could be evaluated.

Due to the results with IR spots and different gases (air, carbon dioxide, and the noble gases argon, neon and helium), essential knowledge could be gained. In each case, the irradiated gas warmed up until a stable limiting temperature was attained. Analogously to the case of irradiated coloured solid plates, the temperature increased until the equilibrium state was reached where the heat absorption rate was equal to the heat emission rate.

Figure 15. Time/temperature-curves for different gases [21] (150 W-spot, medium thermometer-position).

As evident from the diagram in Figure 15, the initial observation made with sunlight was confirmed: pure carbon dioxide warmed up to almost the same degree as air does (whereby ambient air scarcely differs from a 4:1 mixture of nitrogen and oxygen). Moreover, noble gases absorb thermal radiation, too. As subsequently outlined, a theoretical explanation could be found for this.

Conclusion

Finally, the theoretically suggested dependency of the atmospheric thermal radiation intensity on the atmospheric pressure could be empirically verified by measurements at different altitudes, namely in Glattbrugg (430 m above sea level) and on the top of the Furka Pass (2430 m above sea level), both in Switzerland, delivering a so-called atmospheric emission constant A ≈ 22 W·m−2·bar−1·K−0.5. It explained the altitude-paradox of the atmospheric temperature and delivered the definitive evidence that the atmospheric behavior, and thus the climate, does not depend on trace gases such as CO2. However, the atmosphere thermally reradiates indeed, leading to something similar to a Greenhouse effect. But this effect is solely due to the atmospheric pressure.
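Taking the units of the quoted constant at face value (W·m−2·bar−1·K−0.5), the implied relation would be something of the form I ≈ A·p·√T. A quick sketch of what that gives at the two measurement sites; the pressures and temperatures here are standard-atmosphere guesses of mine, not values from the paper:

```python
# Sketch of the relation implied by the units of the quoted "atmospheric
# emission constant" A ≈ 22 W·m^-2·bar^-1·K^-0.5, i.e. I ≈ A * p * sqrt(T).
# Pressures and temperatures are rough standard-atmosphere assumptions.
import math

A = 22.0  # W m^-2 bar^-1 K^-0.5

sites = {
    "Glattbrugg (430 m)":  {"p_bar": 0.96, "T_K": 288.0},
    "Furka Pass (2430 m)": {"p_bar": 0.75, "T_K": 273.0},
}

for name, s in sites.items():
    intensity = A * s["p_bar"] * math.sqrt(s["T_K"])
    print(f"{name}: ~{intensity:.0f} W/m^2 (per the assumed relation)")
```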

Mistake on Warming Prior to CO2 Rising

Changes in CO2 follow changes in global temperatures on all time scales, from last month's observations to ice core datasets spanning millennia. Since CO2 is the lagging variable, it cannot logically be the cause of temperature change, the leading variable. It is folly to imagine that by reducing human emissions of CO2 we can change global temperatures, which are evidently driven by other factors.  Most recent post on this:

10/2024 Update Recent Warming Spike Drives Rise in CO2
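The lead-lag claim is testable in principle with a simple cross-correlation of the two series at a range of lags. Below is a minimal sketch with synthetic data standing in for the observed monthly temperature and CO2-change anomalies; the real analysis would use observed series such as satellite temperatures and Mauna Loa CO2.

```python
# Minimal lead/lag sketch: correlate a temperature series with a CO2-change
# series at a range of lags and report the lag with the strongest correlation.
# Synthetic data stand in for observed monthly anomalies.
import numpy as np

rng = np.random.default_rng(1)
n_months = 480
temp = rng.normal(0, 1, n_months).cumsum() * 0.02               # toy temperature anomaly
co2_change = np.roll(temp, 6) + rng.normal(0, 0.05, n_months)   # toy: CO2 change trails T by 6 months

def lagged_corr(x, y, lag):
    """Correlation of x(t) with y(t + lag); positive lag means y follows x."""
    if lag > 0:
        return np.corrcoef(x[:-lag], y[lag:])[0, 1]
    if lag < 0:
        return np.corrcoef(x[-lag:], y[:lag])[0, 1]
    return np.corrcoef(x, y)[0, 1]

lags = range(-24, 25)
corrs = [lagged_corr(temp, co2_change, k) for k in lags]
best_lag, best_corr = max(zip(lags, corrs), key=lambda t: t[1])
print(f"Strongest correlation ({best_corr:.2f}) at lag {best_lag} months "
      f"(positive lag = CO2 change follows temperature)")
```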

Mistake on Models Bias Against Natural Factors

Figure 1. Anthropic and natural contributions. (a) Locked scaling factors, weak Pre Industrial Climate Anomalies (PCA). (b) Free scaling, strong PCA

The iconic email from the 2009 Climategate leak included a comment by Phil Jones about the "trick" used by Michael Mann to "hide the decline" in his Hockey Stick graph, referring to tree-ring proxy temperatures cooling rather than warming in modern times. Now we have an important paper demonstrating that climate models insist on man-made global warming only by hiding the incline of natural warming in pre-industrial times. The paper is From Behavioral Climate Models and Millennial Data to AGW Reassessment by Philippe de Larminat. H/T No Tricks Zone. Excerpts in italics with my bolds.

Abstract

Context. The so-called AGW (Anthropogenic Global Warming) is based on thousands of climate simulations indicating that human activity is virtually solely responsible for the recent global warming. The climate models used are derived from the meteorological models used for short-term predictions. They are based on the fundamental and empirical physical laws that govern the myriad of atmospheric and oceanic cells integrated by the finite element technique. Numerical approximations, empiricism and the inherent chaos in fluid circulations make these models questionable for validating the anthropogenic principle, given the accuracy required (better than one per thousand) in determining the Earth energy balance.

Aims and methods. The purpose is to quantify and simulate behavioral models of weak complexity, without referring to predefined parameters of the underlying physical laws, but relying exclusively on generally accepted historical and paleoclimate series.

Results. These models perform global temperature simulations that are consistent with those from the more complex physical models. However, the repartition of contributions in the present warming depends strongly on the retained temperature reconstructions, in particular the magnitudes of the Medieval Warm Period and the Little Ice Age. It also depends on the level of the solar activity series. It results from these observations and climate reconstructions that the anthropogenic principle only holds for climate profiles assuming almost no PCA nor significant variations in solar activity. Otherwise, it reduces to a weak principle where global warming is not only the result of human activity, but is largely due to solar activity.  Full post is here:

Climate Models Hide the Paleo Incline

Latest INM Climate Model Projections Triggered by Scenario Inputs

The latest climate simulation from the Russian INM was published in April 2024: Simulation of climate changes in Northern Eurasia by two versions of the INM RAS Earth system model. The paper includes discussion of how results are driven greatly by the treatment of cloud factors.  But first, for context, readers should also be aware of the influence of the scenario premises serving as model input, in this case SSP3-7.0.

Background on CMIP Scenario SSP3-7.0

A recent paper reveals peculiarities with this scenario.  Recognizing distinctiveness of SSP3-7.0 for use in impact assessments by Shiogama et al (2024).  Excerpts in italics with my bolds and added images.

Because recent mitigation efforts have made the upper-end scenario of the future GHG concentration (SSP5-8.5) highly unlikely, SSP3-7.0 has received attention as an alternative high-end scenario for impacts, adaptation, and vulnerability (IAV) studies. However, the 'distinctiveness' of SSP3-7.0 may not be well-recognized by the IAV community. When the integrated assessment model (IAM) community developed the SSP-RCPs, they did not anticipate the limelight on SSP3-7.0 for IAV studies because SSP3-7.0 was the 'distinctive' scenario with regard to aerosol emissions (and land-use land cover changes). Aerosol emissions increase or change little in SSP3-7.0 due to the assumption of a lenient air quality policy, while they decrease in the other SSP-RCPs of CMIP6 and all the RCPs of CMIP5. This distinctive high-aerosol-emission design of SSP3-7.0 was intended to enable climate model (CM) researchers to investigate influences of extreme aerosol emissions on climate.

SSP3-7.0 Prescribes High Radiative Forcing

SSP3-7.0 Presumes High Aerosol Emissions

Aerosol Emissions refer to Black Carbon, Organic Carbon, SO2 and NOx.

•  Aerosol emissions increase or change little in SSP3-7.0 due to the assumption of a lenient air quality policy, while they decrease in the other SSP-RCPs of CMIP6 and all the RCPs of CMIP5.

• This distinctive high-aerosol-emission design of SSP3-7.0 was intended to enable AerChemMIP to investigate the consequences of continued high levels of aerosol emissions on climate.

SSP3-7.0 Supposes Forestry Deprivation

• Decreases in forest area were also substantial in SSP3-7.0, unlike in the other SSP-RCPs.
• This design enables LUMIP to analyse the climate influences of extreme land-use and land-cover changes.

SSP3-7.0 Projects High Population Growth in Poorer Nations

Global population (left) in billions and global gross domestic product (right) in trillion US dollars on a purchasing power parity (PPP) basis. Data from the SSP database; chart by Carbon Brief using Highcharts.

SSP3-7.0 Projects Growing Use of Coal Replacing Gas and Some Nuclear

My Summary: Using this scenario presumes high CO2 forcing (W/m²), high aerosol emissions and diminished forest area, as well as much greater population and coal consumption. Despite claims to the contrary, this is not a "middle of the road" scenario, and it is a strange choice for simulating future climate metrics given its wildly improbable assumptions.

How Two Versions of a Reasonable INM Climate Model Respond to SSP3-7.0

The preceding information regarding the input scenario provides context for understanding the output projections from INMCM5 and INMCM6.  Simulation of climate changes in Northern Eurasia by two versions of the INM RAS Earth system model. Excerpts in italics with my bolds and added images.

Introduction

The aim of this paper is the evaluation of climate changes during the last several decades in Northern Eurasia, a densely populated region with unprecedentedly rapid climate changes, using the INM RAS climate models. The novelty of this work lies in the comparison of model climate changes based on two versions of the same model, INMCM5 and INMCM6, which differ in the climate sensitivities ECS and TCR, with data from available observations and reanalyses. By excluding other factors that influence climate reproduction, such as different cores of GCM components or major discrepancies in the description of physical processes or numerical schemes, the assessment of the role of ECS and TCR in climate reproduction can be the exclusive focus. Also, future climate projections for the middle and the end of the 21st century in both model versions are given and compared.

After modification of physical parameterisations, in the model version INMCM6 ECS increased from 1.8K to 3.7K (Volodin, 2023), and TCR increased from 1.3K to 2.2K. Simulation of present-day climate by INMCM6 Earth system model is discussed in Volodin (2023). A notable increase in ECS and TCR is likely to cause a discrepancy in the simulation of climate changes during last decades and the simulation of future climate projections for the middle and the end of 21st century made by INMCM5 and INMCM6.

About 20% of the Earth’s land surface and 60% of the terrestrial land cover north of 40N refer to Northern Eurasia (Groisman et al, 2009). The Hoegh-Guldberg et al (2018) states that the topography and climate of the Eurasian region are varied, encompassing a sharply continental climate with distinct summer and winter seasons, the northern, frigid Arctic environment and the alpine climate on Scandinavia’s west coast. The Atlantic Ocean and the jet stream affect the climate of western Eurasia, whilst the Mediterranean region, with its hot summers, warm winters, and often dry spells, influences the climate of the southwest. Due to its location, the Eurasian region is vulnerable to a variety of climate-related natural disasters, including heatwaves, droughts, riverine floods, windstorms, and large-scale wildfires.

Historical Runs

One of the most important basic model experiments conducted within the CMIP project in order to control the model large-scale trends is piControl (Eyring et al, 2016). With 1850 as the reference year, the piControl experiment is conducted in conditions chosen to be typical of the period prior to the onset of large-scale industrialization. The perturbed state of the INMCM model at the end of the piControl run is taken as the initial condition for the historical runs. The historical experiment is conducted in the context of changing external natural and anthropogenic forcings. Prescribed time series include:

♦  greenhouse gases concentration,
♦  the solar spectrum and total solar irradiance,
♦  concentrations of volcanic sulfate aerosol in the stratosphere, and
♦  anthropogenic emissions of SO2, black, and organic carbon.

The ensemble of historical experiments consists of 10 members for each model version. The duration of each run is 165 model years from 1850 to 2014.
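For readers unfamiliar with how such an ensemble is used: each member is one realisation of internal variability under the same forcings, and figures such as Fig. 1 below typically show the ensemble mean together with the member spread. A generic sketch with placeholder numbers, not the INM group's code or data:

```python
# Generic sketch of summarising a 10-member historical ensemble:
# ensemble mean plus min/max spread per year. Placeholder data, not INM output.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1850, 2015)          # 165 model years, as described above
n_members = 10

forced = np.linspace(0.0, 1.0, len(years))                       # shared forced signal (toy)
members = forced + rng.normal(0, 0.15, (n_members, len(years)))  # add per-member variability

ens_mean = members.mean(axis=0)
ens_min, ens_max = members.min(axis=0), members.max(axis=0)
print(f"2014 ensemble mean anomaly: {ens_mean[-1]:.2f} K "
      f"(spread {ens_min[-1]:.2f} to {ens_max[-1]:.2f} K)")
```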

SSP3-7.0 Scenario

Experiments are designed to simulate possible future pathways of climate evolution based on assumptions about human developments including: population, education, urbanization, gross domestic product (GDP), economic growth, rate of technological developments, greenhouse gas (GHG) and aerosol emissions, energy supply and demand, land-use changes, etc. (Riahi et al, 2016). Shared Socio-economic Pathways or “SSP” vary from very ambitious mitigation and increasing shift toward sustainable practices (SSP1) to fossil-fueled development (SSP5) (O’Neill et al, 2016).

Here we discuss climate changes for scenario SSP3-7.0 only, to avoid presenting a large amount of information. The SSP3-7.0 scenario reflects the assumption of high GHG emissions and a priority on regional security, leading to societies that are highly vulnerable to climate change, combined with a relatively high forcing level (7.0 W/m2 in 2100). On this path, by the end of the century, average temperatures have risen by 3.0–5.5°C above preindustrial values (Tebaldi et al, 2021). The ensembles of historical runs with INMCM5 and INMCM6 were prolonged for 2015-2100 using scenario SSP3-7.0.

Observational data and data processing

Model near surface temperature and specific humidity changes were compared with ERA5 reanalysis data (Hersbach et al, 2020), precipitation data were compared with data of GPCP (Adler et al, 2018), sea ice extent and volume data were compared with satellite observational data NSIDC (Walsh et al, 2019) and the Pan-Arctic Ice Ocean Modeling and Assimilation System (PIOMAS) (Schweiger et al, 2011) respectively, and land snow area was compared with the National Oceanic and Atmospheric Administration Climate Data Record (NOAA CDR) of Snow Cover Extent (SCE) reanalysis (Robinson et al, 2012) based on the satellite observational dataset of Estilow et al (2015). Following Khan et al (2024), Northern Eurasia is defined as the land area lying within the boundaries 35N–75N, 20E–180E. Following the IPCC 6th Assessment Report (Masson-Delmotte et al, 2021), the following time horizons are distinguished: the recent past (1995–2014), near term (2021–2040), mid-term (2041–2060), and long term (2081–2100). To compare observed and model temperature and specific humidity changes in the recent past, data for years 1991–2020 were compared with data for years 1961–1990.

Near surface air temperature change

Fig. 1 Annual near surface air temperature change in Northern Eurasia with respect to 1995–2014 for INMCM6 (red), INMCM5 (blue) and ERA5 reanalysis (Hersbach et al, 2020)(black), K. Orange and lightblue lines show ensemble spread.

Despite different ECS, both model versions show (Fig. 1) approximately the same warming over Northern Eurasia by 2010–2015, similar to observations. However, projections of Northern Eurasia temperature after year 2040 differ. By 2100, the difference in 2-m air temperature anomalies between two model versions reaches around 1.5 K. The greater value around 6.0 K is achieved by a model with higher sensitivity. This is consistent with Huusko et al (2021); Grose et al (2018); Forster et al (2013), which confirmed that future projections show a stronger relationship than historical ones between warming and climate sensitivity. In contrast to feedback strength, which is more important in forecasting future temperature change, historical warming is more associated with model forcing. Both INMCM5 and INMCM6 show distinct seasonal warming patterns. Poleward of about 55N the seasonal warming is more pronounced in winter than in summer (Fig. 2). That means the smaller amplitude of the seasonal temperature cycle in 1991–2020 compared to 1961–1990. The same result was shown in Dwyer et al (2012) and Donohoe and Battisti (2013). The opposite situation is observed during the hemispheric summer, where stronger warming is observed over the Mediterranean region (Seager et al, 2014; Kröner et al, 2017; Brogli et al, 2019), subtropics and midlatitudinal regions of the Pacific Ocean, leading to an amplification of the seasonal cycle. The spatial patterns of projected warming in winter and summer in model historical experiments for 1991-2020 relative to 1961-1990 are in a good agreement with ERA5 reanalysis data, although for ERA5 the absolute values of difference are greater.

East Atlantic/West Russia (EAWR) Index

The East Atlantic/West Russia (EAWR) pattern is one of the most prominent large-scale modes of climate variability, with centers of action on the Caspian Sea, North Sea, and northeast China. The EOF-analysis identifies the EAWR pattern as the tripole with different signs of pressure (or 500 hPa geopotential height) anomalies encompassing the aforementioned region.

In this study, East Atlantic/ West Russia (EAWR) index was calculated as the projection coefficient of monthly 500 hPa geopotential height anomalies to the second EOF of monthly reanalysis 500 hPa geopotential height anomalies over the region 20N–80N, 60W–140E.
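As a sketch of what "projection coefficient onto the second EOF" means in practice, here is generic EOF arithmetic (not the authors' code; a real calculation would use area-weighted ERA5 500 hPa height anomalies over 20N–80N, 60W–140E):

```python
# Generic sketch of an EOF-based index: compute EOFs of a monthly anomaly field
# via SVD and project each month onto the second EOF. Placeholder data, no
# area weighting or deseasonalisation.
import numpy as np

rng = np.random.default_rng(3)
n_months, n_gridpoints = 600, 500                         # placeholder dimensions
z500_anom = rng.normal(0, 1, (n_months, n_gridpoints))    # stand-in for 500 hPa height anomalies

# EOFs are the right singular vectors of the (months x gridpoints) anomaly matrix
u, s, vt = np.linalg.svd(z500_anom, full_matrices=False)
eof2 = vt[1]                                              # second EOF (spatial pattern)

# The index is the projection of each month's anomaly map onto that pattern
index = z500_anom @ eof2
index /= index.std()                                      # normalise, as is conventional
print(index[:12])
```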

Fig. 5 Time series of June-July-August 5-year mean East Atlantic/ West Russia (EAWR) index. Maximum and minimum of the model ensemble are shown as a dashed lines. INMCM6 and INMCM5 ensemble averaged indices are plotted as a red and blue solid lines, respectively.  The ERA5 (Hersbach et al, 2020) EAWR index is shown in green.

[Note: High EAWR index indicates low pressure and cooler over Western Russia, high pressure and warmer over Europe. Low EAWR index is the opposite–high pressure and warming over Western Russia, low pressure and cooling over Europe.]

Time series of the EAWR index can be seen in Fig. 5. Since the middle of the 1990s the sign of the EAWR index has changed from positive to negative according to reanalysis data. Both versions of the INMCM reproduce this change in sign. Therefore, the corresponding climate change in the Mediterranean and West Russia regions should be expected. Indeed, the difference in annual mean near-surface temperature and specific humidity between 2001–2020 and 1961–1990 shows warmer and wetter conditions spreading from the Eastern Mediterranean to European Russia for both INMCM6 and INMCM5, with the largest difference observed for the new version of the model.

Fig. 6 Annual mean near surface temperature, K (left) and specific humidity, kg/kg (right) in 2001– 2020 with respect to 1961–1990 for INMCM6 (a,b) and INMCM5 (c,d).

Fig. 7 Annual precipitation change (% with respect to 1995–2014) in Northern Eurasia for INMCM6 (red), INMCM5 (blue) and GPCP analysis (Adler et al, 2018) (black). Orange and lightblue lines show ensemble spread.

Discussion and conclusions

Climate changes during the last several decades and possible climate changes until 2100 over Northern Eurasia simulated with climate models INMCM5 and INMCM6 are considered. Two model versions differ in parametrisations of cloudiness, aerosol scheme, land snow cover and atmospheric boundary layer, isopycnal diffusion discretisation and dissipation scheme of the horizontal components of velocity. These modifications in atmosphere and ocean blocks of the model have led to increase of ECS to 3.7 K and TCR to 2.2 K, mainly due to modification of cloudiness parameterisation.

Comparison of model data with available observations and reanalysis shows that both models simulate observed recent temperature and precipitation changes consistently with observational datasets. The decrease of seasonal temperature cycle amplitude poleward of about 55N and its increase over the Mediterranean region, subtropics, and mid-latitudinal Pacific Ocean regions are two distinct seasonal warming patterns displayed by both INMCM5 and INMCM6. In the long-term perspective, the difference in projected warming between June-July-August (JJA) and December-January-February (DJF) increases. Both versions of the INMCM reproduce the observed change in the sign of the EAWR index from positive to negative in the middle of the 1990s, which allows one to expect correct reproduction of the corresponding climate change in the Mediterranean and West Russia regions.

Specifically, the enhanced precipitation in the North Eurasian region since the mid-1990s has led to increased specific humidity over the Eastern Mediterranean and European Russia, which is simulated by the INMCM5 and INMCM6 models. Both versions of model correctly reproduce the precipitation change and continue its increasing trend onwards.

Both model versions simulate similar temperature, precipitation, Arctic sea ice extent in 1990–2040 in spite of INMCM5 having much smaller ECS and TCR than INMCM6. However, INMCM5 and INMCM6 show differences in the long-term perspective reproduction of climate changes. After 2040, model INMCM6 simulated stronger warming, stronger precipitation change, stronger Arctic sea ice and land snow extent decrease than INMCM5.

My Comment

So both versions of the model replicate the observed history well.  And when fed the SSP3-7.0 inputs, both project a warmer, wetter world out to 2100; INMCM5 reaches about 4.5°C and INMCM6 about 6.0°C.  The scenario achieves the desired high warming, and the cloud enhancements in version 6 amplify it.  I would like to see a similar experiment done with the genuinely middle-of-the-road scenario SSP2-4.5.
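For a back-of-envelope check on why the two versions diverge, the standard linear scaling ΔT ≈ S·ΔF/F_2x (with F_2x ≈ 3.7 W/m² per CO2 doubling) relates the scenario's 7.0 W/m² end-of-century forcing to the two models' sensitivities. This is my crude global-mean arithmetic, not model output; Northern Eurasia warms more than the global mean, which is why the regional numbers above are higher.

```python
# Back-of-envelope scaling only:  dT ≈ S * dF / F_2xCO2, with F_2xCO2 ≈ 3.7 W/m^2.
# Crude global-mean arithmetic; regional (Northern Eurasia) warming is larger.
F_2X = 3.7   # W/m^2 per CO2 doubling (common approximation)
DF   = 7.0   # W/m^2, the SSP3-7.0 end-of-century forcing level

for name, tcr, ecs in [("INMCM5", 1.3, 1.8), ("INMCM6", 2.2, 3.7)]:
    transient   = tcr * DF / F_2X    # rough transient (TCR-scaled) warming
    equilibrium = ecs * DF / F_2X    # rough equilibrium (ECS-scaled) warming
    print(f"{name}: ~{transient:.1f} K transient, ~{equilibrium:.1f} K equilibrium (global mean)")
```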

Happer: Cloud Radiation Matters, CO2 Not So Much

Earlier this month William Happer spoke on Radiation Transfer in Clouds at the EIKE conference, and the video is above.  For those preferring to read, below is a transcript from the closed captions along with some key exhibits.  I left out the most technical section in the latter part of the presentation. Text in italics with my bolds.

William Happer: Radiation Transfer in Clouds

People have been looking at clouds in a quantitative way for a very long time. This is one of the first quantitative studies, done about 1800, by John Leslie, a Scottish physicist who built this gadget. He called it an Aethrioscope, but basically it was designed to figure out how effective the sky is at causing frost. If you live in Scotland you worry about frost. It consisted of two glass bulbs with a very thin capillary connecting them, and there was a little column of alcohol here.

The bulbs were full of air, so if one bulb got a little bit warmer it would force the alcohol up through the capillary; if it got colder it would suck the alcohol back. So he set this device out under the clear sky, and he described the sensibility of the instrument as very striking: the liquor incessantly falls and rises in the stem with every passing cloud. In fine weather the aethrioscope will seldom indicate a frigorific impression of less than 30 or more than 80 millesimal degrees. He's talking about how high this column of alcohol would go up and down. If the sky became overclouded, it may be reduced to as low as 15 (this refers to how much the sky cools) or even five degrees when the congregated vapours hover over the hilly tracks. We don't speak English that way anymore, but I love it.

The point was that even in 1800 Leslie and his colleagues knew very well that clouds have an enormous effect on the cooling of the earth. And of course anyone who has a garden knows that if you have a clear calm night you're likely to get frost and lose your crops. So this was a quantitative study of that.

Now it's important to remember that if you go out today the atmosphere is full of two types of radiation. There's sunlight, which you can see, and then there is the thermal radiation generated by greenhouse gases, by clouds and by the surface of the Earth. You can't see thermal radiation, but you can feel it if it's intense enough by its warming effect. And these curves practically don't overlap, so we're really dealing with two completely different types of radiation.

There's sunlight, which scatters very nicely off not only clouds but molecules; that's the blue sky, the Rayleigh scattering. Then there's thermal radiation, which doesn't scatter at all on molecules: greenhouse gases are very good at absorbing thermal radiation, but they don't scatter it. Clouds, however, do scatter thermal radiation. Plotted here is the probability of finding a photon of sunlight within a given interval of the logarithmic wavelength scale.

Since Leslie's day two types of instruments have been developed to do what he did more precisely. One of them is called a pyranometer, and it is designed to measure sunlight coming down onto the Earth on a day like this. So you put this instrument out there and it reads the flux of sunlight coming down. It's designed to see sunlight coming from every direction, so it doesn't matter at which angle the sun is shining; it's calibrated to see them all.

Let me show you a measurement by a pyranometer. This is actually a curve from a sales brochure of a company that will sell you one of these devices. It compares two types of detectors, and as you can see they're very good; you can hardly tell the difference. The point is that if you look on a clear day with no clouds, you see sunlight beginning to increase at dawn, peaking at noon, and going down to zero; there's no sunlight at night. So for half of the day, over most of the Earth, there's no sunlight in the atmosphere.

Here's a day with clouds, just a few days later, shown by days of the year going across. You can see that every time a cloud goes by, the intensity hitting the ground goes down. With a little clear sky it goes up, then down, up, and so on. On average, on this particular day you get a lot less sunlight than you did on the clear day.

But you know, nature is surprising. Einstein had this wonderful quote: God is subtle, but he is not malicious. He meant that nature does all sorts of things you don't expect, so let me show you what happens on a partly cloudy day. This is data taken near Munich. The blue curve is the measurement and the red curve is the intensity at the ground if there were no clouds. This is a partly cloudy day, and you can see there are brief periods when the sunlight is much brighter on the detector than it would be on a clear day. That's because, coming through clouds, you get focusing from the edges of the cloud pointing down toward your detector. That means somewhere else there's less radiation reaching the ground. This is rather surprising to most people; I was very surprised to learn about it, but it just shows that the actual details of climate are a lot more subtle than you might think.

We know that visible light only happens during the daytime and stops at night. There's a second important type of radiation, the thermal radiation, which is measured by a similar device. You have a silicon window that passes infrared, which is below the band gap of silicon, so it passes through as though transparent. Then there are some interference filters to give further discrimination against sunlight. Sunlight practically doesn't go through this at all, so they call it solar blind, since it doesn't see the Sun.

But it sees thermal radiation very clearly, with a big difference between this device and the sunlight-sensing device I showed you: most of the time this one is radiating up, not down. Out in the open air this detector normally gets colder than the body of the instrument, so it is carefully calibrated to let you compare the balance of downwelling radiation with the upwelling radiation. Upwelling is normally greater than downwelling.

I'll show you some measurements of the downwelling flux; these were actually taken at Thule in Greenland, and the vertical axis is in watts per square meter. The first thing to notice is that the radiation continues day and night: if you look at the output of the pyrgeometer, you can't tell whether it's day or night, because the atmosphere is just as bright at night as it is during the day. However, the big difference is clouds: on a cloudy day you get a lot more downwelling radiation than you do on a clear day. Here's nearly a full day of clear weather, there are several more days of clear weather, then suddenly it gets cloudy. The radiation rises because the bottoms of the clouds are relatively warm, at least compared to the clear sky. If you put the numbers in, this cloud bottom was around 5° Centigrade, so it was fairly low cloud (it was summertime in Greenland), and this compares to about minus 5° for the clear sky.

So there's a lot of data out there, and there really is downwelling radiation, no question about that; you measure it routinely. And now you can do the same thing looking down from satellites. This is a picture that I downloaded from Princeton a few weeks ago to get ready for this talk; it was taken at 6 PM Princeton time, so it was already dark in Europe. It is a picture of the Earth from a geosynchronous satellite parked over Ecuador. You are looking down on the Western Hemisphere, and this is a filtered image of the Earth in blue light at 0.47 micrometers, so it's a nice blue color, not so different from the sky, and it's dark where the sun has set. There's still a fair amount of sunlight over the United States and further west.

Here, at exactly the same time and from the same satellite, is the infrared radiation coming up at 10.3 micrometers, which is right in the middle of the infrared window where there's not much greenhouse gas absorption; there's a little bit from water vapor, but very little, and a trivial amount from CO2.

As you can see, you can't tell which side is night and which side is day. So even though the sun has set over here, it is still glowing nice and bright. There's sort of a pesky difference here, because in the visible image you were looking at reflected sunlight over the Intertropical Convergence Zone: there are lots of high clouds that have been pushed up by the convection in the tropics, which means more visible light there. In the infrared image you're looking at emission from the cloud tops, so that's less thermal light. White here means less light, white there means more light, so you have to calibrate your thinking.

But the striking thing about all of this: as you can see, the Earth is covered with clouds; you have to look hard to find a clear spot. Roughly half of the earth, maybe, is clear at any given time, but most of it is covered with clouds. So if anything governs the climate, it is clouds, and that's one of the reasons I so admire the work that Svensmark and Nir Shaviv have done. They are focusing on the most important mechanism for the earth's climate: it's not greenhouse gases, it's clouds. You can see that here.

Now, that was a single frequency; let me show you what happens if you look down from a satellite and look at the spectrum. This is the spectrum of light coming up over the Sahara Desert measured from a satellite. Here is the infrared window; there's the 10.3 microns I mentioned in the previous slide; it's a clear region. So radiation in this region can get from the surface of the Sahara right up to outer space.

Notice that the units on these scales are very different: over the Sahara the top of the scale is 200, it is 150 over the Mediterranean, and only 60 over the South Pole. But at least the Mediterranean and the Sahara are roughly similar. The three curves on the right are observations from satellites, and the three curves on the left are modeling calculations that we've done. The point here is that you can hardly tell the difference between a model calculation and the observed radiation.

So it's really straightforward to calculate radiation transfer. If someone quotes you a number in watts per square meter, you should take it seriously; that's probably a good number. If they tell you a temperature, you don't know what to make of it, because there's a big step in going from watts per square meter to a temperature change. All the mischief in the whole climate business is in going from watts per square meter to Centigrade or Kelvin.

Now I will say just a few words about clear sky, because that is the simplest case; then we'll get on to clouds, the topic of this talk. This is a calculation with the same codes that I showed you in the previous slide, which as you saw work very well. It's worth spending a little time on, because this is the famous Planck curve that was the birth of quantum mechanics. There is Max Planck, who figured out what the formula for that curve is and why it is that way. This is what the Earth would radiate at 15° Centigrade if there were no greenhouse gases: you would get this beautiful smooth curve, the Planck curve. If you actually look at the Earth from satellites you get the ragged, jagged black curve. We like to call that the Schwarzschild curve, because Karl Schwarzschild was the person who showed how to do that calculation. Tragically, he died during World War I, a big loss to science.

There are two colored curves to which I want to draw your attention. The green curve is what the Earth would radiate to space if you took away all the CO2; it differs from the black curve only in the CO2 band, the bending band of CO2, which is the main greenhouse effect of CO2. There's a little additional effect here from the asymmetric stretch, but it doesn't contribute very much. Then there is a red curve, and that's what happens if you double CO2.

So notice the huge asymmetry. Taking all 400 parts per million of CO2 away from the atmosphere causes an enormous change of 30 watts per square meter: the difference between the green curve's 307 and the black curve's 277. But if you double CO2 you hardly make any change. This is the famous saturation of CO2. At the levels we have now, doubling CO2, a 100% increase, only changes the radiation to space by 3 watts per square meter, the difference between 274 for the red curve and 277 for today's curve. So it's a tiny amount: for a 100% increase in CO2, a 1% decrease of radiation to space.

That allows you to estimate the feedback-free climate sensitivity in your head. Doubling CO2 gives a 1% decrease of radiation to space. If that happens, the Earth will start to warm up, but it radiates as the fourth power of the temperature. So the temperature starts to rise, but because of that fourth power, the temperature only has to rise by one-quarter of a percent in absolute temperature to restore the balance. A 1% change in flux corresponds to a one-quarter percent change in temperature in Kelvin. Since the ambient temperature is about 300 Kelvin (actually a little less), a quarter of one percent of that is about 0.75 Kelvin. So the feedback-free equilibrium climate sensitivity is less than 1 degree: it's about 0.75 Centigrade. It's a number you can do in your head.
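The in-your-head estimate follows from the Stefan-Boltzmann T⁴ scaling: if outgoing flux goes as T⁴, then a fractional flux change requires a fractional temperature change of only one quarter of that. A one-line check of the arithmetic, using the roughly 1% flux change and 300 K ambient temperature quoted in the talk:

```python
# Check of the in-your-head estimate:  F ∝ T^4  =>  dT/T = (1/4) * dF/F.
flux_change_fraction = 0.01   # doubling CO2: ~1% less radiation to space (3 W/m^2 out of ~277)
T_ambient = 300.0             # K, the round number used in the talk ("actually a little less")

dT = 0.25 * flux_change_fraction * T_ambient
print(f"Feedback-free warming for doubled CO2: ~{dT:.2f} K")   # ~0.75 K
```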

So when you hear about 3 Centigrade instead of 0.75 C, that's a factor of four, all of which is positive feedback. How could there really be that much positive feedback? Most feedbacks in nature are negative; the famous Le Chatelier principle says that if you perturb a system, it reacts in a way that dampens the perturbation, not increases it. There are a few positive feedback systems we're familiar with; high explosives, for example, have positive feedback. If the earth's climate were like those positive feedback systems, all of which are highly explosive, it would have exploded a long time ago. But the climate has never done that, so the empirical observational evidence from geology is that the climate is like most other feedback systems: its feedback is probably negative. I leave that thought with you, and let me stress again:

This is clear skies no clouds; if you add clouds all this does is
suppress the effects of changes of the greenhouse gas.

So now let's talk about clouds and the theory of clouds, since we've already seen that clouds are very important. Here is the formidable equation of transfer, which has been around since Schwarzschild's day. Some of the symbols here relate to the intensity; another term represents scattering. For thermal radiation on a greenhouse gas, the radiation comes in and is immediately absorbed; there's no scattering at all. But if it hits a cloud particle it will scatter this way or that way, or maybe even backwards.
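For reference, the standard plane-parallel equation of transfer with a scattering source term takes the form below. This is my reconstruction of the textbook equation, not a transcription of Happer's slide, which may use different notation:

$$
\mu\,\frac{dI_\nu(\tau,\mu)}{d\tau} \;=\; I_\nu(\tau,\mu)\;-\;(1-\omega_0)\,B_\nu(T)\;-\;\frac{\omega_0}{2}\int_{-1}^{1} P(\mu,\mu')\,I_\nu(\tau,\mu')\,d\mu'
$$

Here $I_\nu$ is the spectral intensity, $\tau$ the optical depth, $\mu$ the direction cosine, $B_\nu(T)$ the Planck function, $P$ the scattering phase function, and $\omega_0$ the single-scattering albedo, the fraction of extinction events that scatter rather than absorb, which is the parameter discussed in the next paragraph.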

All of that is described by this integral: you've got incoming light in one direction and outgoing light in a second direction. At the same time you've got thermal radiation: the warm particles of the cloud are emitting radiation, creating photons which come out and increase the Earth glow. This is represented by two parameters. Even a single cloud particle has an albedo; this is the fraction of radiation hitting the particle that is scattered, as opposed to absorbed and converted to heat. It's a very important parameter. For visible light and white clouds, typically 99% of the encounters are scattered; for thermal radiation it's much less. Water scatters thermal radiation only about half as efficiently as it does the shorter wavelengths.

The big problem is that, in spite of all the billions of dollars we have spent, these things are not well known, and they would have been known if there hadn't been this crazy fixation on carbon dioxide and greenhouse gases. We've neglected working on these areas that are really important, as opposed to the trivial effects of greenhouse gases. Attenuation in a cloud is both scattering and absorption, and of course you have to solve these equations for every different frequency of the light, because, especially for molecules, there's a strong frequency dependence.

In summary, let me show you this photo, which was taken by Harrison Schmitt, a friend of mine, on one of the Apollo moonshots. It was taken in December, and looking at it you can see that they were south of Madagascar when the photograph was taken. You can see it was northern winter, because the Intertropical Convergence Zone is quite a bit south of the Equator; it has moved well south of India and Saudi Arabia. By good luck they had the sun behind them, so they had the whole earth irradiated.

There's a lot of information there, and again let me draw your attention to how much of the Earth is covered with clouds. So only part of the Earth, on the order of half, can actually be directly affected by greenhouse gases. The takeaway message is that clouds and water vapor are much more important than greenhouse gases for the earth's climate. The second point is the reason they're much more important: doubling CO2, as I indicated in the middle of the talk, only causes a 1% difference in radiation to space. It is a very tiny effect, because of saturation. People like to say that's not so, but you can't really argue that one; even the IPCC gets the same numbers that we do.

And you also know that covering half of the sky with clouds will decrease solar heating by 50%. So for clouds it's one to one; for greenhouse gases it's a hundred to one. If you really want to affect the climate, you want to do something to the clouds. You will have a very hard time making any difference with Net Zero CO2 if you are alarmed about the warming that has happened.

So one would hope that with all the money we've spent trying to turn CO2 into a demon, some good science has come out of it. From my point of view this scattering theory is a small part of it, something I think will still be here long after the craze over greenhouse gases has gone away. I hope there will be other things too. You can point to the better instrumentation that we've got, satellite instrumentation as well as ground instrumentation; that's been a good investment of money. But the money we've spent on supercomputers and modeling has been completely wasted, in my view.

 

 

Intro to Climate Fallacies

First an example of how classical thought fallacies derail discussion from any search for meaning. H/T Jim Rose.

Then I took the liberty of changing the discussion topic to climate change by inserting typical claims heard in that context.

Below is a previous post taking a deeper dive into the fallacies that have plagued the global warming/climate change discussion for decades.

Background Post Climatism is a Logic Fail

Two fallacies in particular ensure meaningless public discussion about climate “crisis” or “emergency.” H/T to Terry Oldberg for comments and writings prompting me to post on this topic.

One corruption is the frequency with which climate claims rest on fallacies of Equivocation. For instance, "climate change" can mean all observed events in nature, but as defined by the IPCC it refers only to changes caused by human activities.  Similarly, forecasts from climate models are proclaimed to be "predictions" of future disasters, but are renamed "projections" in disclaimers against legal liability.  And so on.

A second error in the argument is the Fallacy of Misplaced Concreteness, AKA Reification. This involves mistaking an abstraction for something tangible and real in time and space. We often see this in both spoken and written communications. It can take several forms:

♦ Confusing a word with the thing to which it refers

♦ Confusing an image with the reality it represents

♦ Confusing an idea with something observed to be happening

Examples of Equivocation and Reification from the World of Climate Alarm

“Seeing the wildfires, floods and storms, Mother Nature is not happy with us failing to recognize the challenges facing us.” – Nancy Pelosi

‘Mother Nature’ is a philosophical construct and has no feelings about people.

“This was the moment when the rise of the oceans began to slow and our planet began to heal …”
– Barack Obama

The ocean and the planet do not respond to someone winning a political party nomination. Nor does a planet experience human sickness and healing.

“If something has never happened before, we are generally safe in assuming it is not going to happen in the future, but the exceptions can kill you, and climate change is one of those exceptions.” – Al Gore

The future is not knowable, and can only be a matter of speculation and opinion.

“The planet is warming because of the growing level of greenhouse gas emissions from human activity. If this trend continues, truly catastrophic consequences are likely to ensue. “– Malcolm Turnbull

Temperature is an intensive property of an object, so a temperature of “the planet” cannot be measured. The likelihood of catastrophic consequences is unknowable. Humans are blamed as guilty by association.

“Anybody who doesn’t see the impact of climate change is really, and I would say, myopic. They don’t see the reality. It’s so evident that we are destroying Mother Earth. “– Juan Manuel Santos

“Climate change” is an abstraction anyone can fill with subjective content. Efforts to safeguard the environment are real, successful and ignored in the rush to alarm.

“Climate change, if unchecked, is an urgent threat to health, food supplies, biodiversity, and livelihoods across the globe.” – John F. Kerry

To the abstraction “Climate Change” is added abstract “threats” and abstract means of “checking Climate Change.”

“Climate change is the most severe problem that we are facing today, more serious even than the threat of terrorism.” -David King

Instances of people killed and injured by terrorists are reported daily and are a matter of record, while problems from Climate Change are hypothetical.

 

Corollary: Reality is also that which doesn’t happen, no matter how much we expect it to.

Climate Models Are Built on Fallacies

 

A previous post Chameleon Climate Models described the general issue of whether a model belongs on the bookshelf (theoretically useful) or whether it passes real world filters of relevance, thus qualifying as useful for policy considerations.

Following an interesting discussion on her blog, Dr. Judith Curry has written an important essay on the usefulness and limitations of climate models.

The paper was developed to respond to a request from a group of lawyers wondering how to regard claims based upon climate model outputs. The document is entitled Climate Models and is a great informative read for anyone. Some excerpts that struck me in italics with my bolds and added images.

Climate model development has followed a pathway mostly driven by scientific curiosity and computational limitations. GCMs were originally designed as a tool to help understand how the climate system works. GCMs are used by researchers to represent aspects of climate that are extremely difficult to observe, experiment with theories in a new way by enabling hitherto infeasible calculations, understand a complex system of equations that would otherwise be impenetrable, and explore the climate system to identify unexpected outcomes. As such, GCMs are an important element of climate research.

Climate models are useful tools for conducting scientific research to understand the climate system. However, the above points support the conclusion that current GCM climate models are not fit for the purpose of attributing the causes of 20th century warming or for predicting global or regional climate change on timescales of decades to centuries, with any high level of confidence. By extension, GCMs are not fit for the purpose of justifying political policies to fundamentally alter world social, economic and energy systems.

It is this application of climate model results that fuels the vociferousness
of the debate surrounding climate models.

Evolution of state-of-the-art Climate Models from the mid 70s to the mid 00s. From IPCC (2007)


The actual equations used in the GCM computer codes are only approximations of
the physical processes that occur in the climate system.

While some of these approximations are highly accurate, others are unavoidably crude. This is because the real processes they represent are either poorly understood or too complex to include in the model given the constraints of the computer system. Of the processes that are most important for climate change, parameterizations related to clouds and precipitation remain the most challenging, and are the greatest source of disagreement among different GCMs.

There are literally thousands of different choices made in the construction of a climate model (e.g. resolution, complexity of the submodels, parameterizations). Each different set of choices produces a different model having different sensitivities. Further, different modeling groups have different focal interests, e.g. long paleoclimate simulations, details of ocean circulations, nuances of the interactions between aerosol particles and clouds, the carbon cycle. These different interests focus their limited computational resources on a particular aspect of simulating the climate system, at the expense of others.
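To see how a single parameterization choice can translate into a different sensitivity, consider a zero-dimensional energy-balance toy in which one tunable "cloud feedback" term changes the equilibrium warming obtained for the same forcing. This is an illustration of tuning, not how any real GCM is constructed:

```python
# Toy zero-dimensional energy balance: equilibrium warming = forcing / net feedback,
# where the net feedback is the Planck response minus a tunable cloud feedback.
# Illustrates how one parameter choice shifts sensitivity; not a real GCM.
F_2X = 3.7            # W/m^2, forcing from doubled CO2
LAMBDA_PLANCK = 3.2   # W/m^2/K, approximate Planck (blackbody) response

for cloud_feedback in (0.0, 1.0, 2.0):          # W/m^2/K, hypothetical parameter choices
    net_feedback = LAMBDA_PLANCK - cloud_feedback
    ecs = F_2X / net_feedback
    print(f"cloud feedback {cloud_feedback:.1f} W/m^2/K -> ECS ≈ {ecs:.1f} K")
```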


Overview of the structure of a state-of-the-art climate model. See Climate Models Explained by R.G. Brown

Human-caused warming depends not only on how much CO2 is added to the atmosphere, but also on how ‘sensitive’ the climate is to the increased CO2. Climate sensitivity is defined as the global surface warming that occurs when the concentration of carbon dioxide in the atmosphere doubles. If climate sensitivity is high, then we can expect substantial warming in the coming century as emissions continue to increase. If climate sensitivity is low, then future warming will be substantially lower.

In GCMs, the equilibrium climate sensitivity is an ‘emergent property’
that is not directly calibrated or tuned.

While there has been some narrowing of the range of modeled climate sensitivities over time, models still can be made to yield a wide range of sensitivities by altering model parameterizations. Model versions can be rejected or not, subject to the modelers’ own preconceptions, expectations and biases of the outcome of equilibrium climate sensitivity calculation.

Further, the discrepancy between observational and climate model-based estimates of climate sensitivity is substantial and of significant importance to policymakers. Equilibrium climate sensitivity, and the level of uncertainty in its value, is a key input into the economic models that drive cost-benefit analyses and estimates of the social cost of carbon.

Variations in climate can be caused by external forcing, such as solar variations, volcanic eruptions or changes in atmospheric composition such as an increase in CO2. Climate can also change owing to internal processes within the climate system (internal variability). The best known example of internal climate variability is El Nino/La Nina. Modes of decadal to centennial to millennial internal variability arise from the slow circulations in the oceans. As such, the ocean serves as a ‘fly wheel’ on the climate system, storing and releasing heat on long timescales and acting to stabilize the climate. As a result of the time lags and storage of heat in the ocean, the climate system is never in equilibrium.

The combination of uncertainty in the transient climate response (sensitivity) and the uncertainties in the magnitude and phasing of the major modes in natural internal variability preclude an unambiguous separation of externally forced climate variations from natural internal climate variability. If the climate sensitivity is on the low end of the range of estimates, and natural internal variability is on the strong side of the distribution of climate models, different conclusions are drawn about the relative importance of human causes to the 20th century warming.

Figure 5.1. Comparative dynamics of the World Fuel Consumption (WFC) and Global Surface Air Temperature Anomaly (ΔT), 1861-2000. The thin dashed line represents annual ΔT, the bold line—its 13-year smoothing, and the line constructed from rectangles—WFC (in millions of tons of nominal fuel) (Klyashtorin and Lyubushin, 2003). Source: Frolov et al. 2009

Anthropogenic (human-caused) climate change is a theory in which the basic mechanism is well understood, but whose potential magnitude is highly uncertain.

What does the preceding analysis imply for IPCC’s ‘extremely likely’ attribution of anthropogenically caused warming since 1950? Climate models infer that all of the warming since 1950 can be attributed to humans. However, there have been large magnitude variations in global/hemispheric climate on timescales of 30 years, which are the same duration as the late 20th century warming. The IPCC does not have convincing explanations for previous 30 year periods in the 20th century, notably the warming 1910-1945 and the grand hiatus 1945-1975. Further, there is a secular warming trend at least since 1800 (and possibly as long as 400 years) that cannot be explained by CO2, and is only partly explained by volcanic eruptions.

The Relation of CO2 to Temperature is Inconsistent.

Summary

There is growing evidence that climate models are running too hot and that climate sensitivity to CO2 is on the lower end of the range provided by the IPCC. Nevertheless, these lower values of climate sensitivity are not accounted for in IPCC climate model projections of temperature at the end of the 21st century or in estimates of the impact on temperatures of reducing CO2 emissions.

The climate modeling community has been focused on the response of the climate to increased human caused emissions, and the policy community accepts (either explicitly or implicitly) the results of the 21st century GCM simulations as actual predictions. Hence we don’t have a good understanding of the relative climate impacts of the above (natural factors) or their potential impacts on the evolution of the 21st century climate.

Footnote:

There are a series of posts here which apply reality filters to test climate models.  The first was Temperatures According to Climate Models, where both hindcasting and forecasting were seen to be flawed.

Others in the Series are:

Sea Level Rise: Just the Facts

Data vs. Models #1: Arctic Warming

Data vs. Models #2: Droughts and Floods

Data vs. Models #3: Disasters

Data vs. Models #4: Climates Changing

Climate Medicine

Climates Don’t Start Wars, People Do


Beware getting sucked into any model, climate or otherwise.

 

Clauser’s Case: GHG Science Wrong, Clouds the Climate Thermostat

This post provides a synopsis of Dr. John Clauser’s Clintel presentation last May.  Below are the texts from his slides gathered into an easily readable format. The two principal takeaways are (in my words):

A.  IPCC’s Green House Gas Science is Flawed and Untrustworthy

B.  Clouds are the Thermostat for Earth’s Climate, Not GHGs.

Part I Climate Change is a Myth.

  • The IPCC and its collaborators have been tasked with computer modeling and observationally measuring two very important numbers – the Earth’s so-called power imbalance, and its power-balance feedback-stability strength. They have grossly botched both tasks, in turn, leading them to draw the wrong conclusion.
  • I assert that the IPCC has not proven global warming! On the contrary, observational data are fully consistent with no global warming. Without global warming, there is no climate-change crisis!
  • Their computer modeling (GISS) of the climate is unable to simulate the Earth’s surface temperature history, let alone predict its future.
  • Their computer modeling (GISS) is unable to simulate anywhere near the Earth’s albedo (sunlight reflectivity). The computer-simulated reflected sunlight power and the associated power imbalance error are typically about fourteen times bigger than the claimed measured power imbalance, and about twenty-five times bigger than the claimed measured power imbalance error range.
  • The IPCC’s observational data are wildly self-inconsistent and/or are fully consistent with no global warming.
  • The IPCC’s observational data claim an albedo for cloudy skies that is inconsistent with direct measurements by a factor of two. Alternatively, their data significantly violate conservation of energy.
  • Scientists performing the power-balance measurements admit that the available methodologies are incapable of measuring a net power imbalance with anywhere near the desired accuracy. This difficulty is due to huge temporal and spatial fluctuations of the imbalance, along with gross under-sampling of the data.
  • The observational data they report are self-inconsistent and are visibly dishonestly fudged to claim warming. The fudged final reported values, herein highlighted and exposed, are an example of the proverbial proliferation of bad pennies.
  • NOAA’s claims that there is an observed increase in extreme weather events are bogus. Their own published data disprove their own arguments. A 100 year history of extreme weather event frequency, plotted frontwards in time is virtually indistinguishable from the same historical data plotted backwards in time.
  • In Part II, I present the cloud-thermostat feedback mechanism. My new mechanism dominantly controls and stabilizes the Earth’s climate and temperature. The IPCC has not previously considered this mechanism. The IPCC ignores cloud-cover variability.

The IPCC’s two sacred tasks – both botched!

  1. The IPCC and its collaborators have been tasked with computer modeling and observationally measuring two very important numbers – the Earth’s so-called power imbalance, and its power-balance feedback-stability strength.
  2. The Earth’s net power imbalance is its sunlight heating power (its power-IN), minus its two components of cooling power – reflected sunlight and reradiated infrared power (its power-OUT).
  3. Based on their claimed power imbalance and global-warming assertion, the IPCC and its collaborators assemble a house of cards argument that forebodes an impending climate change apocalypse/ catastrophe.
  4. Additionally, the IPCC and its contributors calculate the strength of naturally occurring feedback mechanisms that presently stabilize the Earth’s temperature and climate
  5. They claim only marginal effectiveness for these mechanisms, and correspondingly assert that there is a “tipping point”, whereinafter further added greenhouse gasses catastrophically cause what amounts to a thermal-runaway of the Earth’s temperature.
  6. The IPCC scapegoats atmospheric greenhouse gasses as the cause of global warming, and further mandates that trillions of dollars must be spent to stop greenhouse gas release into the environment with a so-called “zero-carbon” policy.
  7. The IPCC also mandates multi-trillion dollar per year geoengineering projects including Solar Radiation Management Systems to stabilize the Earth’s climate and CO2 capture projects to reduce the atmospheric CO2 levels.
  8. I assert that the IPCC and its contributors have not proven global warming, whereupon their house of cards collapses.
  9. My cloud thermostat mechanism’s net feedback “strength” (the IPCC’s 2nd sacred task to estimate) is anywhere from -5.7 to -12.7 W/m2/K (depending on the assumed cloud albedo, 0.36 vs. 0.8), compared to the IPCC’s botched best estimate for their mechanisms of -1.1 W/m2/K. My mechanism’s overwhelmingly dominant strength confirms that it is the dominant feedback mechanism controlling the Earth’s climate.
  10. Correspondingly, I confidently assert that the climate crisis is a colossal trillion-dollar hoax.

The IPCC’s computer modeling uses flawed physics to estimate the Earth’s temperature history

• The above graph is copied from [AR5, (IPCC, 2013) Fig 11.25].
• It shows the IPCC’s CMIP5 computer modeling of the Earth’s temperature “anomaly”. The various computed curves display the earth’s predicted (colored) and historical (gray) “temperature anomaly”.
• The solid black curve is the observed temperature anomaly
• Note that all 40+ models are incapable of simulating the Earth’s past temperature history. The total disarray and total lack of reliability among the CMIP5 predictions was first highlighted by Steve Koonin (former Under Secretary for Science at the U.S. Department of Energy under President Obama) in his recent book Unsettled: What Climate Science Tells Us, What It Doesn’t, and Why It Matters.
• Something is obviously very wrong with the physics incorporated within the computer models, and their predictions are totally unreliable.
• Albedo is the fraction of sunlight power that is directly reflected by the Earth back out into space (OSR=100 W/m2 portion of power-OUT)


• The above Figure, copied from Stephens et al. (2015), shows the IPCC’s CMIP5 computer modeling (colored curves) of the Earth’s mean annual albedo temporal variation. The solid black curve is the Earth’s albedo measured by satellite radiometry. (The variation is not sinusoidal.)
• The added scale shows the associated reflected sunlight power. It assumes a constant solar irradiance – 340 W/m2
• Note that the IPCC’s computer modeling is grossly incapable of simulating the observed Earth’s reflected power, and especially incapable of simulating that power’s dramatic temporal fluctuation.
• The actual annual variation in reflected power is much greater than shown in this Figure, by about 18 W/m2, due to the ellipticity of the Earth’s orbit and the associated sinusoidal temporal variation of the so-called solar constant.
• Despite more than 10 W/m2 gross errors in the computer simulation’s calculated reflected power, as is shown on the Figure, the IPCC [AR6 (2021)] still claims that it has computer simulated and precisely measured this power, yielding an imbalance that is equal to 0.7 ± 0.2 W/m2 – Huh?

The IPCC’s observational data are consistent with NO global warming

• Power-IN is the sunlight power incident on the Earth. The IPCC and climate scientists call it Short Wavelength (SW) Radiation. It is about 340 Watts per square meter of the Earth’s surface area. (It is not actually constant, but varies ± 9 W/m2.)
• Power-OUT has two components:
• One component is the sunlight energy that is directly reflected by the Earth back out into space, whereinafter it can no longer heat the planet. That component is claimed by the IPCC to be about 100 W/m2.
• The other component is the far-infrared heat radiated into space by a hot planet. It is claimed to be about 240 W/m2. The IPCC calls the far-infrared heat radiation component, Long Wavelength (LW) Radiation.
• Measuring the power imbalance consists of measuring power-IN, measuring power-OUT and subtracting. Simple enough? Not really. The problem is that power-IN and power-OUT are huge numbers, and that the difference between them is minuscule – 0.2% of power-IN. That minuscule difference is the net imbalance that is sought, both experimentally and theoretically.

Unfortunately, it is so small that it is very difficult, if not impossible, to measure to the desired accuracy, 0.1 W/m2, or 0.03% of power-IN. It is much tougher to measure when power-IN and power-OUT are both also hugely varying in a seemingly random, irreproducible fashion. Large variations occur both in time and in space over the surface of the Earth. As noted in a previous slide, this grossly under-sampled fluctuation is about 28 W/m2, compared with the IPCC’s claimed imbalance of 0.7 ± 0.2 W/m2. (These percentages and ratios are checked in the short sketch after this list.)
• A variety of methods has been employed to measure these powers. They include satellite radiometry, (the ERBE, and CERES Terra and Aqua satellites), ocean heat content (OHC) measured using the ARGO buoy chain and XBT water sampling by ships, and finally by ground sunlight observations using the Baseline Surface Radiation Network (BSRN).
• The various measured values are all in wild disagreement with each other. Importantly, none of the reported data actually show a convincing net warming power imbalance. Importantly, much of the reported data are totally fudged in a manner that dishonestly changes them from showing no warming to showing warming!
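For readers who want to verify the percentages and ratios quoted in the bullets above, here is a quick arithmetic check, taking the figures exactly as cited in the slides:

```python
# Quick arithmetic check of the percentages and ratios quoted above, using the
# figures as cited in the slides (power-IN ~ 340 W/m2, claimed imbalance
# 0.7 +/- 0.2 W/m2, desired accuracy 0.1 W/m2, fluctuation ~ 28 W/m2).
POWER_IN = 340.0        # W/m2, incident sunlight
IMBALANCE = 0.7         # W/m2, claimed net imbalance
TARGET_ACCURACY = 0.1   # W/m2, desired measurement accuracy
FLUCTUATION = 28.0      # W/m2, under-sampled fluctuation

print(f"Imbalance / power-IN:        {IMBALANCE / POWER_IN:.2%}")        # ~0.21%
print(f"Target accuracy / power-IN:  {TARGET_ACCURACY / POWER_IN:.2%}")  # ~0.03%
print(f"Fluctuation / imbalance:     {FLUCTUATION / IMBALANCE:.0f}x")    # ~40x
```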

AR6 Power-flow Diagrams

Critiques of Power-Flow Diagrams by Trenberth et al. (2010, 2014)

• Satellites measure the Top of Atmosphere energy balance, while Ocean Heat Content data apply to the surface energy balance. One may legitimately mix power-flux data at the two different altitudes, if and only if one fully understands all of the power-flow processes in the atmosphere that occur between the surface and the Top of Atmosphere. If that requirement is not met, then one ends up with an “apples to oranges” comparison.
• Trenberth et al. (2010, 2014) are highly critical of Loeb, Stephens, L’Ecuyer, and Hansen’s claimed “understanding” of the associated connection between the power flows at these two altitudes.
• Trenberth and Fasullo (2010) point to a huge “missing energy” indicated by the difference between the satellite data and the OHC data power-imbalance calculations, and specifically ask “Where exactly does the energy go?”
• Hansen et al. (2011) dismiss Trenberth and Fasullo’s alleged missing energy as being simply due to satellite calibration errors.
• Trenberth Fasullo and Balmesada (2014) further note that despite various considerations of the surface power balance, significant unresolved discrepancies remain, and they are skeptical of the power imbalance claims.
• In effect, Trenberth et al. are the earliest “whistle blowers” to the above-mentioned data fudges.

Part I –The Climate Change Myth– Conclusions

1. The IPCC and its contributors claim the Earth has a net-warming energy imbalance. I show here that those claims are false.
2. The IPCC bases its claims on computer modeling of the Earth’s atmosphere, and on observational data from a variety of observational modalities. Both the computer models and the observational data are grossly flawed, and fudged.
3. The IPCC’s computer modeling and its predictions are totally unreliable. There is something clearly very wrong with the physics incorporated within these computer models. Since the computer models can’t even explain the past, why should anyone trust their prediction for the future?
4. Not one of the observational modalities for measuring the Earth’s power imbalance convincingly shows net global warming.
5. I show where various observers and the IPCC have dishonestly fudged their reported data, and have dishonestly changed it from showing No Warming, to showing Warming. Crucially important data fudges are revealed here and highlighted in red. If you don’t believe me, check my arithmetic.
6. The IPCC and NOAA further claim that the purported power imbalance has already caused an increase in dangerous extreme weather events. NOAA’s own data disprove their own claims.
7. I thus offer Great News. Despite what you may have heard from the IPCC and others, there is no real climate crisis! The planet is NOT in peril!
8. The IPCC’s (and NOAA’s) claims are a hoax. Trillions of dollars are being wasted.

Part II – The cloud thermostat 

1. So what is really happening? Why is the earth’s climate actually as stable as it really is?
2. The cloud thermostat mechanism is clearly the overwhelmingly dominant feedback mechanism that controls and stabilizes the Earth’s climate and temperature. It thereby prevents global warming and climate change.
3. The cloud-thermostat mechanism provides very powerful feedback that stabilizes the Earth’s climate and temperature. Its great strength derives from the observed large fluctuation of the Earth’s power imbalance.
4. The mechanism gains its strength from the Earth’s observed very large cloud-cover variation. The power imbalance is actually observed to be continuously strongly fluctuating by anywhere between 18 to 55 W/m2.
5. Clouds modulate the outgoing Shortwave power and therefore control the Earth’s power imbalance, minimally with an 18 W/m2 available power range (ignoring the added 18 W/m2 solar-constant variation), which is minimally 26 times the IPCC’s 0.7 W/m2 claimed power imbalance, and 45 times the IPCC’s ± 0.2 W/m2 power imbalance error range.
6. The above numbers use the IPCC’s assumed data parameters. With more realistic assumptions, the cloud-thermostat mechanism controls the Earth’s power imbalance with a 73 W/m2 available power range, which is 100 times bigger than the IPCC’s 0.7 W/m2 claimed power imbalance, and 180 times bigger than the IPCC’s ± 0.2 W/m2 power-imbalance total error range. (These multiples are checked in the sketch following this list.)
7. This seemingly random fluctuation of the power imbalance is not random at all, but is actually a crucial part of a thermostat-like feedback mechanism that controls and stabilizes the Earth’s climate and temperature. It is observed by King et al. (2013) and by Stephens et al. (2015) to be quasi-periodic.
8. Just like the thermostat in your home, the power-imbalance is never zero. The furnace or AC is always either ON or OFF. The thermostat simply modulates the heating/cooling duty cycle.
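A quick check of the multiples quoted in items 5 and 6, taking the slide’s figures at face value (and reading 0.4 W/m2 as the full width of the ± 0.2 W/m2 error range):

```python
# Checking the multiples quoted in items 5 and 6 above (values as given in the
# slides; 0.4 W/m2 is taken as the full width of the +/- 0.2 W/m2 error range).
CLAIMED_IMBALANCE = 0.7  # W/m2
ERROR_RANGE = 0.4        # W/m2
for label, power_range in (("IPCC parameters", 18.0), ("alternative assumptions", 73.0)):
    print(f"{label}: {power_range:.0f} W/m2 is "
          f"{power_range / CLAIMED_IMBALANCE:.0f}x the claimed imbalance and "
          f"{power_range / ERROR_RANGE:.0f}x the error range")
# 18 W/m2 -> ~26x and 45x;  73 W/m2 -> ~104x (about 100x) and ~182x (about 180x)
```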

Features of the cloud thermostat mechanism

1. In preparation for the introduction of this model, I first describe important, underappreciated, but conspicuous properties of clouds – their variability and their strong reflectivity of sunlight (SW radiation).
2. I show that the cloud-thermostat mechanism involves the dominant (73%) use of sunlight energy by the planet.
3. I show that when the cloud-thermostat mechanism is viewed as a form of climate-stabilizing negative feedback, it is by far the most powerful of any such mechanism heretofore considered.
4. The IPCC estimates that the net stabilizing feedback strength of the Earth’s climate, including the destabilizing feedback of greenhouse gases, is about -1 W/m2/ºC.
5. I show that the cloud thermostat feedback increases the net natural stabilizing feedback strength to about anywhere between -7 W/m2/ºC and -14 W/m2/ºC, depending on the assumptions used.

There are 5 important take-home messages to be gleaned from these satellite photographs.

1. Clouds reflect dramatically more sunlight than the rest of the planet does!
2. Clouds of all types appear bright white!
3. The photos (along with a large number of careful measurements) strongly suggest that the average cloud reflectivity (of sunlight) is about 0.8 – 0.9. (For comparison, white paper has a reflectivity of ≈ 0.99.) [Wild et al.(2019) claim that cloud reflectivity is 0.36.]
4. The rest of the planet appears much darker than the clouds. The average reflectivity of land (green and brown areas) and ocean (dark blue areas) is ≈ 0.16.
5. Cloud coverage area is highly variable over the Earth.

What does sunlight mostly do when it reaches the Earth’s surface?

• It is commonly believed that sunlight that is absorbed by the Earth’s surface simply warms the surface. That may be true over land. But land represents only about 30% of the surface.
• Oceans cover 70% of the Earth’s surface. Correspondingly, about 70% of incoming sunlight falls on the oceans. Virtually all of the Earth’s exposed water surface occurs in the oceans.
• Following the AR6 power-flow diagram, 160 W/m2 is absorbed by the whole Earth, meaning that roughly 70% × 160 = 112 W/m2 is absorbed by oceans.
• The AR6 power-flow diagram indicates that 82 W/m2 is used for evaporating water, and not for heating the surface.
• Since clouds are mostly produced over the oceans (because that’s where the exposed water is), then 82/112 = 73% of the input energy absorbed by the Earth’s oceans is used, not for warming the Earth, but instead simply for making clouds (see the short check below).
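The arithmetic behind the 73% figure is easy to reproduce. The sketch below takes the AR6 numbers as cited in the bullets and the slide’s simplifying assumption that essentially all evaporation occurs over the ocean:

```python
# Sketch of the arithmetic in the bullets above, using the AR6 power-flow
# figures as cited (160 W/m2 absorbed at the surface, 82 W/m2 used for
# evaporation) and the simplifying assumption that all evaporation occurs
# over the ~70% of the surface covered by ocean.
SURFACE_ABSORBED = 160.0  # W/m2, global-mean sunlight absorbed at the surface
EVAPORATION = 82.0        # W/m2, latent heat flux (evaporating water)
OCEAN_FRACTION = 0.70

ocean_absorbed = OCEAN_FRACTION * SURFACE_ABSORBED   # ~112 W/m2
share_for_clouds = EVAPORATION / ocean_absorbed      # ~0.73
print(f"Absorbed by oceans:         {ocean_absorbed:.0f} W/m2")
print(f"Share used for evaporation: {share_for_clouds:.0%}")
```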

How does the cloud thermostat work?

1. Recall that the IPCC’s AR6 power-flow map asserts that 73% of the input energy absorbed by the Earth’s oceans is used, not for raising the Earth’s surface temperature, but instead simply for evaporating seawater and making clouds. Recall also that the Earth has a strongly varying cloud cover and albedo.
2. Temperature control of the Earth’s surface by this mechanism works exactly the same way as does a common home thermostat. A thermostat automatically corrects a structure’s temperature in the presence of varying modest heat leaks. For the earth, the presence of significant CO2 in the earth’s atmosphere, manmade or not, provides, in fact, a very small heat leak (at most, about 2 W/m2).  Note that, just like the Earth, the power imbalance for a thermostatically controlled system is never zero. It is always fully heating or fully cooling.
3. How does the cloud thermostat work? When the Earth’s cloud-cover fraction is too high, the Earth’s surface temperature is too low. Why? Clouds produce shadows, and cloudy days are cooler than sunny days. A high cloud-cover fraction means a large shadowed area. With reduced sunlight reaching the ocean’s surface and a lower temperature, the evaporation rate of seawater is reduced. The cloud production rate over the ocean (70% of the Earth) falls, because sunlight is needed to evaporate seawater. The Earth’s too-high cloud-cover fraction obediently starts to decrease; as it does, the temperature increases. The Earth’s cloud-cover fraction is no longer too high. Equilibrium cloud cover and temperature are restored.
4. When the Earth’s cloud-cover fraction is too low, the surface temperature is too high, and the reverse process occurs. With low cloud cover, lots of sunlight reaches the ocean surface. The increased sunlit area then evaporates more seawater. The cloud-production rate obediently increases and the cloud-cover fraction is no longer too low. Equilibrium cloud cover and temperature are again restored.
5. Depending on one’s assumption regarding cloud reflectivity (albedo), the cloud thermostat mechanism has anywhere between 18 and 55 W/m2 of power available from cloud-fraction variability to overcome a wimpy 0.7 W/m2 heat leak (allegedly blamed on greenhouse gases) and to stabilize the Earth’s temperature, no matter what the greenhouse gas atmospheric concentration is!
6. These two fluctuating opposing processes, when in equilibrium, provide an equilibrium cloud-cover fraction, and an equilibrium average temperature. The earth thus has a built in thermostat!

Feedback strength of the cloud thermostat mechanism

1. The resulting cloud-thermostat mechanism’s feedback parameter is now readily evaluated under the two scenarios associated with two choices for cloud albedo. The details of the calculation are shown in Appendix D.
2. Using the AR6 choice for cloud albedo, αClouds = 0.36, we have λClouds ≈ –5.7 W/m2/K, which is 1.7 times larger than (the misnamed) λPlanck, heretofore the strongest feedback term.
3. Alternatively, using the more reasonable choice for cloud albedo, αClouds = 0.8, we have λClouds ≈ –12.7 W/m2/K, which is 3.8 times larger than (the misnamed) λPlanck. (These ratios are checked in the short sketch after this list.)
4. These values are plotted as an extension of the AR6 Figure 7.1, which shows the feedback strength for various mechanisms. The total system strength is shown in the left-hand column.
5. Viewed as a temperature-control feedback mechanism, in either scenario, the cloud thermostat has the strongest negative (stabilizing) feedback of any mechanism heretofore considered.
6. It very powerfully controls and stabilizes the Earth’s climate and temperature.
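The multiples quoted in items 2 and 3 can be checked directly. The sketch below assumes a Planck feedback of roughly -3.3 W/m2/K, the value implied by the quoted ratios (AR6’s central estimate is close to -3.2 W/m2/K):

```python
# Quick check of the multiples quoted in items 2 and 3 above. The Planck
# feedback is taken here as roughly -3.3 W/m2/K, the value implied by the
# quoted multiples (AR6's central estimate is close to -3.2 W/m2/K).
LAMBDA_PLANCK = -3.3  # W/m2/K, assumed for this check
for label, lam_clouds in (("cloud albedo 0.36", -5.7), ("cloud albedo 0.8", -12.7)):
    ratio = lam_clouds / LAMBDA_PLANCK
    print(f"{label}: lambda_clouds = {lam_clouds} W/m2/K -> {ratio:.1f}x lambda_Planck")
# ~1.7x for the AR6 cloud albedo, ~3.8x for the higher assumed albedo.
```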

Part II – Conclusions

1. I have introduced here the cloud-thermostat mechanism. It is clearly the overwhelmingly dominant feedback mechanism that controls and stabilizes the Earth’s climate and temperature. It thereby prevents global warming and climate change.
2. The IPCC’s 2021 AR6 report (p.978) claims that climate-stabilizing natural feedback mechanisms have a net (total) stabilizing strength of -1.16 ± 0.6 W/m2/K. My cloud feedback mechanism has a net stabilizing strength of anywhere between -5.7 and -12.7 W/m2/K, depending on one’s assumptions regarding the albedo of clouds.
3. My cloud thermostat mechanism provides nature’s own Solar Radiation Management System. This mechanism already exists. It is built into nature’s own cloud factory. It works very well to stabilize the Earth’s temperature on a long-term basis. And, it is free!

“Recommendations for policy makers”

1. There is no climate crisis! There is, however, a very real problem with providing a decent standard of living to the world’s now enormous population. There is indeed an energy shortage crisis. The latter is being unnecessarily exacerbated by what, in my opinion, is incorrect climate science, and by governments’ associated muddled response to it.
2. Government and business are currently needlessly spending trillions of dollars on efforts to limit the greenhouse gasses, CO2 and CH4, in the Earth’s atmosphere.
3. CO2 and CH4 are not pollutants. They must be removed from every list of defined pollutants. They have a negligible effect on the climate. Trillions of dollars can be saved by this one simple measure alone! Additionally, the CO2 Coalition points out that atmospheric CO2 is actually beneficial.
4. I recommend that all efforts to limit environmental carbon should be terminated immediately! Trillions of dollars can be saved by eliminating carbon caps, carbon credits, carbon sequestration, carbon footprints, zero-carbon targets, carbon taxes, anti-carbon policies and fossil-fuel limits, in energy policy and elsewhere.

Climatists Deny Natural Warming Factors

After a recent contretemps at Climate Etc. with CO2 warmists, I was again reminded how insistent zero-carbon zealots are on denying multiple natural climate factors, in order to attribute all modern warming to humans burning hydrocarbons. A large part of this blindness comes from constraints dictated by the IPCC to climate model builders.  Simply put, natural causes of warming (and cooling) are systematically excluded from CMIP models for the sake of the narrative blaming humans for all climate activity: “Climate Change is real, dangerous and man-made.”  A post later on analyzes how models deceive by excluding natural forcings.

Let’s start with a paper that seeks objectively to consider both internal and external climate forcings, including human and natural processes.  The paper by Bokuchava & Semenov was published last October and is behind a paywall at Springer.  An open access copy is here:  Factors of natural climate variability contributing to the Early 20th Century Warming in the Arctic.  Excerpt in italics with my bolds and added images.

Abstract

The warming in the first half of the 20th century in the Northern Hemisphere (NH) (early 20th century warming (ETCW)) was comparable in magnitude to the current warming, but occurred at a time when the growth rate of the greenhouse gas (GG) concentration in the atmosphere was 4–5 times slower than in recent decades. The mechanisms of the early warming are still a subject of discussion. The ETCW was most pronounced in the high latitudes of the NH, and the recent reconstructions consistently indicate a significant negative anomaly of the Arctic sea ice area during early warming period linked with enhanced Atlantic water inflow to the Arctic and amplified warming in high latitudes of the NH.

Assessing the contributions of internal variability and external natural and anthropogenic factors to this climatic anomaly is key for understanding historical and modern climate dynamics. This paper considers mechanisms of ETCW associated with various internal variability and external anthropogenic and natural factors. An analysis of the findings on the topic of long-term studies of climate variations in the NH during the period of instrumental observations does not allow one to attribute the ETCW to one particular mechanism of internal climate variability or external forcing of the climate.

Most likely, this event was caused by a combined effect of long-term climatic fluctuations in the North Atlantic and the North Pacific with a noticeable contribution of external radiative forcing associated with a decrease in volcanic activity, changes in solar activity, and an increase in GG concentration in the atmosphere due to anthropogenic emissions. Furthermore, this climate variation in high latitudes of the NH has been enhanced by a number of positive feedbacks. An overview of existing research is given, as are the main mechanisms of internal and external climate variability in the NH in the early 20th century. Despite the fact that the internal variability of the climate system is apparently the main mechanism that explains the ETCW, the quantitative assessment of the contribution of each factor remains uncertain, since it significantly depends on the initial conditions in the models and the lack of instrumental data in the early 20th century, especially in polar latitudes.

Figure 1. 30-year moving trends in global surface air temperature (°C / 30 years) according to the Berkeley dataset [4]

The main cause of the recent warming is considered to be anthropogenic forcing, primarily the growth of the carbon dioxide (CO2) concentration causing a greenhouse effect [5]. But the role of CO2 in the ETCW could not be as important, since this period precedes the time of accelerating growth of radiative forcing by greenhouse gases (GHG). This GHG increase after the 1950s is also inconsistent with the global SAT decline from the 1940s to the 1970s.

Numerical experiments with different climate model generations [6,7] show that modern warming is well reproduced when averaged over model ensembles (indicating external influence as major factor). The ETCW amplitude, despite the increasing accuracy of model simulations, still differs significantly in climate models. This may indicate the important role of internal climate variability [2], as well as the uncertainty of results of model experiments due to incorrectly specified forcing.

The majority of studies [8,9] agree that such a strong warming can be explained by a combination of internal climate system variability as quasi-periodic oscillation or random climate fluctuation with increasing global temperature in the background associated with external anthropogenic and natural forcings (increased GHGs emissions and a pause in volcanic eruptions, in particular).

This paper provides an overview of the existing hypotheses that may explain the ETCW and describes the main mechanisms of internal climate variability during the twentieth century, in particular in the Arctic region.

Figure 2. Average annual SAT (°C) anomalies in the period 1900-2015, according to the Berkeley observational dataset (5-year running mean): global (black curve), Northern Hemisphere (blue curve), Southern Hemisphere (orange curve), NH high latitudes (60°-90° N) (red curve), and NH high latitudes without 5-yr running mean smoothing (gray curve)

Internal variability in the Arctic can be enhanced by positive radiation feedbacks [12], including surface albedo – temperature feedback, which can strongly impact the absorption of solar shortwave radiation. This mechanism manifests itself during prolonged warm periods, mainly in autumn, when a growing ice-free ocean surface with low albedo absorbs more solar radiation and warms the upper ocean layer that leads to further sea ice melting [10]. This positive radiation feedback contributes to the faster temperature increase in the Arctic. This phenomenon is now well-known as “Arctic (or Polar) Amplification”.

However, other positive feedbacks also play major roles in the Arctic Amplification. There are positive feedbacks related to long-wave radiation, for instance, an increase of water vapor content and cloud cover leads to a greenhouse effect, which is more pronounced at high latitudes [13], as well as dynamic feedbacks, which imply strengthened oceanic and atmospheric heat transfer to the Arctic under conditions of shrinking sea ice extent [14,15].

Arctic Amplification may also be a consequence of non-local mechanisms such as enhanced northward latent heat transfer in the warmer atmosphere [16]. Quasi-periodic fluctuations of North Atlantic sea surface temperature (SST) on a 60-80 year time scale [17] suggest a possible role of oceanic heat transfer as a driver of long-term SAT anomalies in the Arctic that can be enhanced by positive feedbacks [18].

Thus, the amplitude of SST oscillations in the NH polar latitudes can be a combination of both regional response to global climate change and the formation of internal oscillations in the ocean atmosphere system.

Natural internal factors – ocean-atmosphere system variability
Atmosphere circulation variability

Figure 3. Winter Arctic (60°-90°N) SAT anomalies according to Berkeley observations (5-year running mean) (black curve); NAO index (pink curve), PNA index (blue curve) according to HadSLP2.0 dataset [25]

The North Atlantic Oscillation (NAO) and the closely related Arctic Oscillation (AO) together constitute the dominant mode of large-scale winter atmospheric variability in the North Atlantic, characterized by a sea level pressure dipole with one center over Greenland (Icelandic minimum) and another center of the opposite sign in the North Atlantic mid-latitudes (Azores maximum). The NAO controls the strength and direction of westerly winds and the position of storm tracks in the North Atlantic sector, thus crucially impacting the European climate [23].

During the first two decades of the 20th century, the positive phase of the NAO was expressed in a stronger than usual zonal circulation over the North Atlantic (Fig. 3). The long-term dominance of this atmospheric circulation pattern led to an advection of heat to the northeastern part of the North Atlantic. However, the NAO transition to the negative phase after the 1920s, and in general the inconsistency between NAO and Arctic SAT variations in the first half of the 20th century, do not support a hypothesis of NAO contribution to the ETCW warming [24].

The Pacific North American Oscillation index (PNA) characterizes the pressure gradient between the North Pacific (Aleutian minimum) and the east of North America (Canadian maximum) and is related to fluctuations of the North Pacific zonal flow. An important feature of the PNA in the context of the ETCW is that both (positive and negative) PNA phases may contribute to atmospheric heat advection to the Arctic. In the 1930s and 1950s, the negative phase (Fig. 3) led to the transfer of warm air masses to the pole across the northwestern Pacific Ocean, and the positive phase of the 1940s forced increased zonal transfer to the western coast of Canada and Alaska [8]. The PNA is strongly influenced by the El Niño Southern Oscillation (ENSO): the positive index phase is associated with El Niño events, and the negative with La Niña events.

Atmospheric circulation in the mid-latitudes of the Pacific Ocean may also depend on fluctuations of the Pacific trade winds [28]. A weakening of the trade winds is manifested in SAT growth in the Pacific mid-latitudes, which coincides in time with the warming of the 1910s-1940s in the high Arctic latitudes, and in the lowering of temperatures during the cooling period between the 1940s and 1970s, when the strength of the trade winds had been increasing.

Ocean circulation variability

Figure 4. Winter Arctic (60°-90°N) SAT anomalies according to the Berkeley dataset (5-year running mean, black curve); AMO index (pink curve), PDO index (blue curve) according to HadISST2.0 dataset [37]

Arctic Amplification in the 20th century, including ETCW period can be associated not only with an increase of atmospheric heat transport, but also with an enhancement of ocean heat inflow in the North Atlantic to the extratropical latitudes of the NH from its equatorial part [30].

Instrumental data show that SST variability in the North Atlantic during the 20th century was dominated by cyclic fluctuations on time scales of 50-80 years, showing two warm periods in the 1930s-1940s and at the end of the 20th century and two cold periods in the beginning of the century and in the 1960s-1970s. SST oscillations in the North Atlantic are called Atlantic Multidecadal Oscillation (AMO). The observational data also indicate AMO-like cycles in the Arctic SAT (Fig. 4).

Paleo-reconstructions of AMO [33] demonstrate that strong, low-frequency (60-100 years) SST variability is a robust feature of the North Atlantic climate over the past five centuries. There are also indications of a significant correlation between Arctic sea ice area and the AMO index, including a sharp change during the ETCW period [34].

There is another pronounced mode of internal climate variability that may act synchronously with the AMO. This is the Pacific Decadal Oscillation (PDO), which reflects variability of the Pacific SSTs north of 20° N and has a 20-40 year periodicity [35]. The PDO might have played an equally important role in the heat advection to the Arctic in the middle of the century. Several current studies [36,29] suggest the synchronous phase shift of the AMO and PDO largely contributed to the accelerated Arctic warming, both the ongoing warming and the ETCW.

Conclusions

Understanding the mechanisms of the ETCW and subsequent cooling is key to determining the relative contribution of internal natural variability to global climate change on a multi-decadal time scale. Studies of climate changes in high latitudes in the mid-twentieth century allow us to identify a number of possible mechanisms involving natural variability and positive feedbacks in the Arctic climate system that may partially explain the ETCW.

Based on the recent literature it can be concluded that internal oceanic variability, together with the additional impact of natural atmospheric circulation variations, is an important factor for the ETCW. Recently, the number of results indicating the Pacific Ocean as a source of multidecadal fluctuation, both on a global scale and in high latitudes, has increased. However, the assessment of the relative contributions to the ETCW of the Atlantic and Pacific sectors remains uncertain.

Climate model simulations [9,43,44] argue that the internal variability of the ocean-atmosphere system cannot explain the entire amplitude of temperature fluctuations in the first half of the 20th century as a single factor, and must act in combination with external forcings (solar and volcanic activity), positive feedbacks in the Arctic climate system, and anthropogenic factors. Quantifying the contribution of each factor still remains a matter of debate.

Climate Deception:  Models Hide the Paleo Incline

Figure 1. Anthropogenic and natural contributions. (a) Locked scaling factors, weak Pre Industrial Climate Anomalies (PCA). (b) Free scaling, strong PCA

In 2009, the iconic email from the Climategate leak included a comment by Phil Jones about the “trick” used by Michael Mann to “hide the decline” in his Hockey Stick graph, referring to tree proxy temperatures cooling rather than warming in modern times.  Now we have an important paper demonstrating that climate models insist on man-made global warming only by hiding the incline of natural warming in Pre-Industrial times.  The paper is From Behavioral Climate Models and Millennial Data to AGW Reassessment by Philippe de Larminat.  H/T No Tricks Zone. Excerpts in italics with my bolds.

Abstract

Context. The so-called AGW (Anthropogenic Global Warming) is based on thousands of climate simulations indicating that human activity is virtually solely responsible for the recent global warming. The climate models used are derived from the meteorological models used for short-term predictions. They are based on the fundamental and empirical physical laws that govern the myriad of atmospheric and oceanic cells integrated by the finite-element technique. Numerical approximations, empiricism and the inherent chaos in fluid circulations make these models questionable for validating the anthropogenic principle, given the accuracy required (better than one per thousand) in determining the Earth’s energy balance.

Aims and methods. The purpose is to quantify and simulate behavioral models of weak complexity, without referring to predefined parameters of the underlying physical laws, but relying exclusively on generally accepted historical and paleoclimate series.

Results. These models perform global temperature simulations that are consistent with those from the more complex physical models. However, the apportionment of contributions to the present warming depends strongly on the temperature reconstructions retained, in particular the magnitudes of the Medieval Warm Period and the Little Ice Age. It also depends on the level of the solar activity series. It follows from these observations and climate reconstructions that the anthropogenic principle only holds for climate profiles assuming almost no PCA and no significant variations in solar activity. Otherwise, it reduces to a weak principle where global warming is not only the result of human activity, but is largely due to solar activity.

Discussion

GCMs (short acronym for AOGCM: Atmosphere-Ocean General Circulation Model, or for Global Climate Model) are fed by series related to climate drivers. Some are of human origin: fossil fuel combustion, industrial aerosols, changes in land use, condensation trails, etc. Others are of natural origin: solar and volcanic activities, Earth’s orbital parameters, geomagnetism, internal variability generated by atmospheric and oceanic chaos. These drivers, or forcing factors, are expressed in their own units: total solar irradiance (W m–2), atmospheric concentrations of GHG (ppm), optical depth of industrial or volcanic aerosols (dimensionless), oceanic indexes (ENSO, AMO…), or by annual growth rates (%). Climate scientists have introduced a metric in order to characterize the relative impact of the different climate drivers on climate change. This metric is that of radiative forcings (RF), designed to quantify climate drivers through their effects on the terrestrial radiation budget at the top of the atmosphere (TOA).
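As an aside (not part of the paper), the most familiar example of this metric is the widely used simplified expression for CO2 forcing, ΔF ≈ 5.35 ln(C/C0) W/m2 (Myhre et al., 1998), which gives about 3.7 W/m2 for a doubling:

```python
# Illustration of the radiative-forcing metric (not taken from the paper):
# the widely used simplified expression for CO2 forcing,
#     dF ~ 5.35 * ln(C / C0)  [W/m2]   (Myhre et al., 1998),
# where C0 is a reference concentration, here taken as 280 ppm.
import math

def co2_forcing(c_ppm: float, c0_ppm: float = 280.0) -> float:
    """Approximate radiative forcing (W/m2) from a CO2 change relative to c0."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(f"Doubling (280 -> 560 ppm): {co2_forcing(560.0):.2f} W/m2")  # ~3.71
print(f"280 -> 420 ppm:            {co2_forcing(420.0):.2f} W/m2")  # ~2.17
```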

However, independently of the physical units and associated energy properties of the RFs, one can recognize their signatures in the output and deduce their contributions. For example, volcanic eruptions are identifiable events whose contributions can be quantified without reference either to their assumed radiative forcings or to physical modeling of aerosol diffusion in the atmosphere. Similarly, the Preindustrial Climate Anomalies (PCA), comprising the Medieval Warm Period (MWP) and the Little Ice Age (LIA), show a profile similar to that of the solar forcing reconstructions. Per the methodology proposed in this paper, the respective contributions of the RF inputs are quantified through behavior models, or black-box models.

Now, Figures 1-a and 1-b present simulations obtained from the models identified under two different sets of assumptions, detailed in sections 6 and 7 respectively.

Figure 1. Anthropogenic and natural contributions. (a) Locked scaling factors, weak Pre Industrial Climate Anomalies (PCA). (b) Free scaling, strong PCA

In both cases, the overall result for the global temperature simulation (red) fits fairly well with the observations (black).  Curves also show the forcing contributions to modern warming (since 1850). From this perspective, the natural (green) and anthropogenic (blue) contributions are in strong contradiction between panels (a) and (b). This incompatibility is at the heart of our work.

Simulations in panel (a) are calculated per section 6, where the scaling multipliers planned in the model are locked to unity, so that the radiative forcing inputs are constrained to strictly comply with the IPCC quantification. The remaining parameters of the black-box model are adjusted in order to minimize the deviation between the observations (black curve) and the simulated outputs (red). Per these assumptions, the resulting contributions (blue vs. green) comply with the AGW principle. Also, the conformity of the results with those of the CMIP supports the validity of the type of behavioral model adopted for our simulations.
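To make the locked-versus-free distinction concrete, here is a minimal sketch of this kind of black-box fit. It is an illustration only, using synthetic stand-in series; de Larminat’s actual behavioral models are dynamic and identified from historical and paleoclimate data. The role of the scaling factors is the same, though: locked to unity in case (a), estimated from the data in case (b).

```python
# Minimal sketch (illustration only, not de Larminat's actual model) of a
# "behavioral" black-box fit: a temperature series is approximated as a
# weighted combination of forcing series, with the scaling factors either
# locked to unity or left free and estimated by least squares. All series
# below are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_years = 150

# Hypothetical forcing series (stand-ins for anthropogenic, solar, volcanic).
anthro = np.linspace(0.0, 2.5, n_years)                      # W/m2, slow rise
solar = 0.3 * np.sin(np.linspace(0.0, 12.0, n_years))        # W/m2, quasi-cyclic
volcanic = np.where(rng.random(n_years) < 0.05, -2.0, 0.0)   # W/m2, sporadic dips
forcings = np.column_stack([anthro, solar, volcanic])

# Synthetic "observed" temperature for the demo (arbitrary weights plus noise).
true_weights = np.array([0.4, 0.8, 0.1])                     # deg C per W/m2
observed = forcings @ true_weights + 0.05 * rng.standard_normal(n_years)

# Case (a): scaling factors locked to unity (forcings taken at face value).
locked = forcings @ np.ones(3)

# Case (b): scaling factors left free and fitted by least squares.
free_weights, *_ = np.linalg.lstsq(forcings, observed, rcond=None)
fitted = forcings @ free_weights

def rms(x):
    return float(np.sqrt(np.mean(x ** 2)))

print("fitted scaling factors:", np.round(free_weights, 2))
print(f"rms misfit, locked: {rms(locked - observed):.3f}  free: {rms(fitted - observed):.3f}")
```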

Paleoclimate Temperatures

Although historically documented, the Medieval Warm Period (MWP) and the Little Ice Age (LIA) command no consensus about their amplitudes and geographic extents [2, 3]. In Fig. 7.1-c of the First Assessment Report of the IPCC, a reconstruction showed a peak PCA amplitude of about 1.2 °C [4]. Later on, a reconstruction, the so-called ‘hockey stick graph’, was reproduced five times in the IPCC Third Assessment Report (2001), wherein there was no longer any significant MWP [5].

After the 2003 controversies, reference to this reconstruction disappeared from subsequent IPCC reports: it is not included among the fifteen paleoclimate reconstructions covering the millennium period listed in the fifth report (AR5, 2013) [6]. Nevertheless, AR6 (2021) revived a hockey stick graph reconstruction from a consortium initiated by the network “PAst climate chanGES” [7,8]. The IPCC assures (AR6, 2.3.1.1.2): “this synthesis is generally in agreement with the AR5 assessment”.

Figure 2 below puts this claim into perspective. It shows the fifteen reconstructions covering the preindustrial period accredited by the IPCC in AR5 (2013, Fig. 5.7 to 5.9, and table 5.A.6), compiled (Pangaea database) by [7]. Visibly, the claimed agreement of the PAGES2k reconstruction (blue) with the AR5 green lines does not hold.

Figure 2. Weak and strong preindustrial climate anomalies, respectively from AR5 (2013) in green and AR6 (2021) in blue.

Conclusion

In section 8 above, a set of consistent climate series is explored, from which solar activity appears to be the main driver of climate change. To rule out this hypothesis, the anthropogenic principle requires four simultaneous assessments:

♦  A strong anthropogenic forcing, able to account for all of the current warming.
♦  A low solar forcing.
♦  A low internal variability.
♦  The nonexistence of significant pre-industrial climate anomalies, which could indeed be explained by strong solar forcing or high internal variability.

None of these conditions is strongly established, neither by theoretical knowledge nor by historical and paleoclimatic observations. On the contrary, our analysis challenges them through a weak-complexity model, fed by accepted forcing profiles, which are recalibrated against climate observations. The simulations show that solar activity contributes to current climate warming in proportions depending on the assessed pre-industrial climate anomalies.

Therefore, adherence to the anthropogenic principle requires that when reconstructing climate data, the Medieval Warming Period and the Little Ice Age be reduced to nothing, and that any series of strongly varying solar forcing be discarded. 

Background on Disappearing Paleo Global Warming

The first graph appeared in the IPCC 1990 First Assessment Report (FAR), credited to H.H. Lamb, first director of CRU-UEA. The second graph was featured in the 2001 IPCC Third Assessment Report (TAR): the famous hockey stick credited to M. Mann.

Rise and Fall of the Modern Warming Spike