Let’s Scrub Away Climate Change

Carbon Dioxide Scrubber

Ross McKitrick says Jeff Bezos has put enough money on the table to make climate change concerns vanish, except for those who won’t let go. He writes at Financial Post: It’s never enough with climate activists — even a staggering $10 billion from Jeff Bezos. Excerpts in italics with my bolds.

Observers might conclude activists don’t care about the climate per se but instead want to impose a big-government central planning regime

Jeff Bezos, the mega-billionaire founder/owner of Amazon, just announced he will give US$10 billion to “fight climate change.” According to CNN, this followed immense pressure from his employees to take action. And, as is inevitable with this issue, as soon as he made the announcement his activist employees declared it wasn’t enough.

“We applaud Jeff Bezos’s philanthropy, but one hand cannot give what the other is taking away,” their group sniffed. “Will Jeff Bezos show us true leadership or will he continue to be complicit in the acceleration of the climate crisis, while supposedly trying to help?”

It is never enough with climate activists. Bezos’s US$10 billion is a staggering sum. But it’s also a drop in the bucket compared to what governments have spent over the past two decades on the climate issue. Yet activists keep complaining governments aren’t doing anything, either.

One begins to suspect they are not being up front about what they really want.

Politically minded observers might conclude activists do not care about the climate per se but instead want to impose a big-government central planning regime — for which the supposed climate emergency is merely a pretext. Any response to their demands that leaves the market system intact is therefore inadequate.

So where should Bezos direct his money?

If he really wants to make the world a better place, he should fund the invention of a low-cost carbon scrubber. If ever someone could invent a device that filters carbon dioxide out of a smokestack or tailpipe and turns it into a stable solid that can be cheaply disposed of or even used for another purpose, all for under $5 or $10 per tonne, the entire climate change issue would vanish.

Such a scrubber would mean we could carry on using fossil fuels while decoupling them from greenhouse gas emissions. We would continue getting all the benefits of cheap fossil energy without any climate side-effects. This is what we did with sulphur dioxide. The invention of sulphur scrubbers meant we could keep enjoying the benefits of fossil energy without the harm of acid rain. Now let’s do the same with carbon dioxide.

The only reason climate change is such a big, intractable worldwide issue is precisely that we cannot currently decouple fossil fuel use from carbon dioxide emissions, so trying to achieve deep emission reductions means imposing harsh costs on the world economy. But if carbon dioxide could be cheaply reduced while we continued to burn fossil fuels, that problem would be resolved.

Prototype Anti-smog Device

Once you realize this, you can then complete the thought-experiment by posing the question: Who would be the saddest people in the world if a cheap carbon-scrubber were invented? Answer: climate activists. They would almost certainly be bitterly crestfallen if ever an inexpensive technological fix resolved the climate issue. I say this because they so often give the impression their real motivation is not concern about the climate but rather a strange abhorrence of the modern world. The giveaway is their angry reaction to any information showing climate change isn’t a crisis — even though they of all people should be most cheered when such research appears.

Here is a useful litmus test for whether you or someone you know is an environmentally conscious person who wants to take a responsible stance on the climate issue.

Suppose Bezos funds a project that does invent a cheap carbon-scrubber and he gives away the technology so that overnight the need for climate policy vanishes (other than a requirement to use the scrubber). Our entire apparatus of climate policy would then become unnecessary. Ethanol mandates, electric vehicle subsidies, energy efficiency regulations, pipeline bans, the coal phaseout, natural gas bans for new homes, the oilsands emissions cap, et cetera — all of it could be eliminated and carbon dioxide emissions would plummet nonetheless. The Paris treaty would be redundant. There would be no more “conferences of the parties,” no more UN summits, and an end to the vast climate bureaucracies around the world — all of it replaced by quick, cheap and easy emission reductions. The litmus question:

Would that strike you as wonderful news or leave you bereft, your purpose in life lost?

“Wonderful news” is the correct answer. If you got it wrong, please stop blocking roads and railways and get some psychological help.

Ross McKitrick is a professor of economics at the University of Guelph.

Not to mention no more Fridays for the Future.

Greta Preaches Hellfire and Brimstone at Bristol

Her talking points:

  • This is an emergency.
  • People are already suffering and dying from it.
  • It will get worse.
  • Politicians, the media and those in power are ignoring the emergency
  • Elected officials make words and promises but do nothing.
  • I will not be silenced while the world is on fire, will you?
  • We must be the adults in the room since world leaders behave like children.
  • Thanks to climate activists Bristol Airport will not be expanded.
  • Activism works, so I’m telling you to act.
  • We will not back down from people in power who betray and fail us.
  • We are the change that is coming whether you like it or not.


How does she know those things?  Well, she really, really believes it in her heart, so followers apparently need no evidence beyond her certainty.  It is looking and sounding more and more like the Weathermen in 1969.  Some bad stuff came out of that, starting with words and gatherings similar to these.

Footnote from Madebyjimbob:

US Gas Crushes Wind and Solar

Jude Clemente reports at Forbes The Obvious Reality Of More U.S. Oil And Natural Gas
Excerpts in italics with my bolds.

Natural gas overwhelmingly dominates the U.S. electric power system, double that of second-place coal. Gas is cleaner, cheaper, more flexible, and more reliable. Gas will supply over 40% of our power this summer and is racing toward 50% of total generation capacity. Just think about the scale of that. For every 100 power plants in America that create electricity, 50 of them will run on natural gas (see Figure above). Further, the International Energy Agency has specifically credited the rise of gas in our power system as the reason why we are slashing CO2 emissions faster than any other country ever.

Understanding this reality, we must continue to resist growing “energy unrealism.” More bluntly, a fracking ban would be the worst policy for American economic, energy, and environmental security ever “nightmared” possible. Fracking accounts for some 80% of U.S. gas production and will represent almost all of incremental domestic supply.

So why do some presidential candidates want to slash $7.1 trillion and 19 million jobs from the U.S. economy from 2021 to 2025?

Indeed, the “shoot yourself in the foot” energy policies of California, New York, and the New England states cannot be allowed to go national. Even though gas is their primary source of electricity, these states have installed anti-production and anti-pipeline policies. Thus, their power prices are 50% or more above the national average and they are addicted to energy imported from other states. Massachusetts has been forced to import natural gas by ship from Russia over the previous two winters.

To illustrate: like eating out too often at expensive restaurants, California in some years has been importing a staggering 95% of its gas and 33% of its electricity. Talk about unsustainable. As we saw with the 2018 “Yellow Vest” riots in Paris against carbon taxes, Americans will not stand for such purposely installed expensive energy.

As fracking is set to make the U.S. the world’s largest oil and gas exporter, “Fiona Hill educates Democrats: Fracking hurts Putin.”

Further, fracking has sent U.S. crude oil production soaring 160% to over 13 million b/d since 2008. This is a huge deal since oil remains our most vital source of energy, lacking any material substitute whatsoever. The U.S. Department of Energy has consistently modeled this to be true: since oil is an inelastic good, even drastic price rises have little impact on demand (see Figure below).

This is hardly a surprise since overly expensive electric cars still account for just 1-2% of annual U.S. passenger car purchases. No kidding. The average Tesla buyer makes $400,000 a year – seven times the national average. Quietly even worse, “Taxpayer subsidies for electric vehicles only help the wealthy,” and “The U.S. Should Ban All Electric Cars in the Interest of National Security.”

The oil industry knows that it must tread lightly these days but is also wisely banking on oil as the ultimate indispensable product: “oil cannot not be used.” As for oil’s much reported on “social license to operate,” a reality check: “Exxon Isn’t the Oil User. You Are.”

For our own supply, fracking accounts for some 80% of U.S. oil production, and fracking will yield basically all new output for decades to come. The U.S. Department of Energy reports that fracking has the potential to skyrocket U.S. crude output another 50% or so to nearly 20 million b/d.

Indeed, OPEC’s and Vladimir Putin’s worst nightmare come true.

Figure 12: Figure 9 with Y-scale expanded to 100% and thermal generation included, illustrating the magnitude of the problem the G20 countries still face in decarbonizing their energy sectors. (Thermal refers to energy from oil, gas and coal.)

See Also  What If the US Banned Fracking?

Climateers Tilting at Windmills



Storm Ciara, Caused by Meghan?

Trains, flights and ferries have been cancelled and weather warnings issued across the United Kingdom as a storm with hurricane-force winds up to 129 km/h (80 mph) battered the region.

A strange new twist in climate science attribution of blame for extreme weather.  From the NewsThump Storm Ciara causing chaos across the nation: is it Meghan’s fault?  Excerpts in italics with my bolds.

Storm Ciara is causing travel and infrastructure chaos across the country, and experts are suggesting that this first American-sounding storm of the year could be the fault of Meghan Markle, Duchess of Sussex.

“It is certainly conceivable that Meghan has some sort of weather control device that she has used to bring Storm Ciara to Great Britain,” explained Simon Williams, a man in the pub who regularly watches Britain’s Wildest Weather.

“Either that or she’s got strange witchy powers that allow her to control the weather like she controls poor Harry.

“Whichever it is, it’s definitely her fault. I mean, look at the evidence – the storm hits just a few days after they moved to Canada.

“Mark my words, this storm wreaking havoc to the country that was fine till she came along? All Meghan’s fault.”

Popular trumpet of idiocy Piers Morgan was so apoplectic with rage that he simultaneously soiled himself, vomited and bled from various orifices at the mere mention of the Duchess.

“Meghan. Royals. Arrogant,” he spluttered incoherently before calming down enough to make the following statement.

“Now look, I’m not racist,” said the man who is only angry at the black one.

“But this is what happens when you let people from different cultures into the Royal Family, they don’t really understand the ancient sophisticated British royal way of life, so they go around causing big storms.”

It is expected that the storm could bring further disruption to those communities recovering from last year’s floods.

Which were also definitely Meghan’s fault.



Arctic Ice Moment of Truth

For ice extent in the Arctic, the bar is set at 15M km2. The 13-year average peaks on day 62 at 15.04M km2 before descending. Six of the last 13 years cleared 15M, but recently only the 2014 and 2016 ice extents cleared the bar at 15M km2; the others came up short.

During February, MASIE and SII both show ice extent hovering around the 13-year average, matching it exactly on day 52 at 14.85M km2. Other recent years were lower until 2019 caught up, before dropping off in the final week of the month.  We shall see what this year does, with only 10 to 14 days left before the March maximum is recorded.

Region (ice extent in km2)            2020 day 52   13-yr average   2020-Avg.   2018 day 52   2020-2018
 (0) Northern_Hemisphere                 14875470        14857903       17567      14312247      563223
 (1) Beaufort_Sea                         1070655         1070222         433       1070445         210
 (2) Chukchi_Sea                           965972          964814        1158        955104       10868
 (3) East_Siberian_Sea                    1087137         1087039          98       1087120          18
 (4) Laptev_Sea                            897845          897824          21        897845           0
 (5) Kara_Sea                              906378          917433      -11055        917969      -11591
 (6) Barents_Sea                           648148          613817       34332        552077       96071
 (7) Greenland_Sea                         538698          613963      -75264        428606      110092
 (8) Baffin_Bay_Gulf_of_St._Lawrence      1502218         1495888        6330       1757430     -255211
 (9) Canadian_Archipelago                  854282          853074        1209        853109        1174
 (10) Hudson_Bay                          1259931         1260881        -950       1260838        -907
 (11) Central_Arctic                      3246709         3213870       32839       3150241       96468
 (12) Bering_Sea                           731776          685013       46763        194708      537067
 (13) Baltic_Sea                            25524          104858      -79334         94201      -68677
 (14) Sea_of_Okhotsk                      1117881         1022253       95628       1060733       57148

As reported previously, Pacific sea ice is a big part of the story this year.  Out of the last 13 years, on day 52 only two years had Okhotsk ice extent higher than 2020, and only four years had higher Bering ice. Those surpluses offset a small deficit in Greenland Sea ice.
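The regional anomalies in the table above can be tallied directly; a quick sketch of the Pacific-side arithmetic:

```python
# Tally the day-52 Pacific-side surpluses against the Greenland Sea deficit,
# using the 2020-minus-average figures from the MASIE table above (km^2).
anomalies = {
    "Bering_Sea": 46763,
    "Sea_of_Okhotsk": 95628,
    "Greenland_Sea": -75264,
}

pacific_surplus = anomalies["Bering_Sea"] + anomalies["Sea_of_Okhotsk"]
net = pacific_surplus + anomalies["Greenland_Sea"]

print(f"Pacific surplus: {pacific_surplus} km^2")          # 142391 km^2
print(f"Net after Greenland deficit: {net} km^2")          # 67127 km^2
```

The two Pacific seas together more than cover the Greenland Sea shortfall, which is why the overall extent sits right at the 13-year average.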

Typically, Arctic ice extent loses 67 to 70% of the March maximum by mid-September, before the ice recovers, building toward the next March maximum.

What will the ice do this year?  Where will 2020 rank in the annual Arctic Ice High Jump competition?

Drift ice in Okhotsk Sea at sunrise.


Extinction Rebels Work on Wall Street

Raising alarms about climate change driving humans extinct is not only fun but profitable. Just ask J.P. Morgan, which recently put out a treatise pumping up the alarm.  Tyler Durden writes at Zero Hedge “The Human Race Could Go Extinct”: JPMorgan Fearmongers Climate Change Impact In Leaked Report. Excerpts in italics with my bolds.

An explosive new report from JP Morgan, leaked this week and titled “Risky business: the climate and the macroeconomy,” warns that climate change poses a significant macroeconomic risk to the world economy and could result in a “catastrophic” event.

“The response to climate change should be motivated not only by central estimates of outcomes but also by the likelihood of extreme events (from the tails of the probability distribution). We cannot rule out catastrophic outcomes where human life as we know it is threatened,” the report advised its top clients.

JPM’s David Mackie and Jessica Murray, the authors of the report, said: “climate change would not only impact GDP and welfare directly but would also have indirect effects via morbidity, mortality, famine, water stress, conflict, and migration.”

They said the impact of climate had been underestimated by governments, adding:

“Something will have to change at some point if the human race is going to survive.”

The reason for this elaborate scheme is that after the 2008 financial crisis, where financial elites were bailed out and the middle class was left to rot, convincing the average person that money printing is needed once more would be a difficult task.

So again, financial elites created a fake climate change crisis to offer a policy prescription of money printing to protect their asset bubbles, but simultaneously, make everyone believe that it’s to transform the global economy into a much greener trajectory to save the planet.

“If no steps are taken to change the path of emissions, the global temperature will rise, rainfall patterns will change creating both droughts and floods, wildfires will become more frequent and more intense, sea levels will rise, heat-related morbidity and mortality will increase, oceans will become more acidic, and storms and cyclones will become more frequent and more intense.  And as these changes occur, life will become more difficult for humans and other species on the planet.”

And if you care to read JPM’s leaked report, here it is:


Two of Four Seasons Gone, Because Climate Change

From the NewsThump, Vivaldi’s Four Seasons reduced to Two in light of climate change.  Excerpts in italics with my bolds.

Antonio Vivaldi’s Four Seasons concerti will now consist of just two seasons, because that’s all there are these days.

The conductor of the London Philharmonic Orchestra, Simon Williams, commented, “The current generation doesn’t really understand the idea of four different seasons of weather, so in a bid to appeal to a modern audience we’ve updated the concerti to be more representative of 300 years of environmental damage.

“Overall, the suite will be much more discordant and unpredictable. The part representing winter will be doubled in length and feature woodwind solos to signify the yearly rising of floodwaters. A choir of scientists will sing the aria ‘Te Lo Abbiamo Detto’ (We Told You So).

Spring and autumn will be done away with altogether.

“Summer has also been greatly extended, as has the ‘languor caused by the heat’ bit, occasionally broken up by string phrases to represent next door’s kids on the trampoline.”


Climate Models Fail from Radiative Obsession

Peter Stallinga provides a thorough analysis explaining why models based purely on radiative heat exchanges fail without incorporating other thermodynamic processes.  This post is a synopsis of the structure of his position without the extensive mathematical expressions of the relationships discussed.  The full text of his paper can be accessed by linking to the title below.

Comprehensive Analytical Study of the Greenhouse Effect of the Atmosphere
By Peter Stallinga. Atmospheric and Climate Sciences, 2020. H/T NoTricksZone. Excerpts in italics with my bolds.


Climate change is an important societal issue, and society spends a large effort addressing it. For adequate measures, it is important that the phenomenon of climate change be well understood, especially the effect of adding carbon dioxide to the atmosphere. In this work, a fully analytical theoretical study is presented of the so-called greenhouse effect of carbon dioxide. The effect of this gas in the atmosphere itself had already been determined, on the basis of empirical analysis, to be of little importance. In the current work, the effect is studied both phenomenologically and analytically.

In a new approach, the atmosphere is solved by taking both radiative and thermodynamic processes into account. The model fully fits the empirical data, and an analytical equation is given for the atmospheric behavior. Upper limits are found for the greenhouse effect, ranging from zero to a couple of mK per ppm CO2 (1 mK = 0.001 K). It is shown that it cannot explain the observed correlation of carbon dioxide and surface temperature. This correlation, however, is readily explained by Henry’s Law (outgassing of the oceans), with other phenomena insignificant.

Finally, while the greenhouse effect can thus, in a rudimentary way, explain the behavior of the atmosphere of Earth, it fails to describe other atmospheres such as that of Mars. Moreover, looking at three cities in Spain, it is found that radiation balances alone cannot explain the temperature of these cities. Finally, three data sets with different time scales (60 years, 600 thousand years, and 650 million years) show markedly different behavior, something that is inexplicable in the framework of the greenhouse theory.

The Greenhouse Effect

The greenhouse effect is phenomenologically introduced and compared to the alternative explanation for the data, namely Henry’s Law. The observed correlations between temperature and CO2 are presented; these are the data we are going to attempt to explain.

Henry’s Law

The correlation between temperature and [CO2] is readily explained by another phenomenon, called Henry’s Law: the capacity of liquids to hold gases in solution depends on temperature. When the oceans heat up, this capacity decreases and the oceans release CO2 (and other gases) into the atmosphere. When we quantitatively analyze this phenomenon, we see that it perfectly fits the observations, without the need for any feedback [1]. We thus have an alternative hypothesis for the explanation of the observations presented by Al Gore.
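A minimal sketch of the mechanism, using the van ’t Hoff temperature dependence of the Henry constant; the coefficient C ≈ 2400 K is a typical literature value for CO2 in water, an assumption here rather than a number from the paper:

```python
import math

# Henry's Law solubility falls with temperature; a common parameterization is
# the van 't Hoff form k_H(T) = k_H(T0) * exp(C * (1/T - 1/T0)).
# C ~ 2400 K is a typical literature value for CO2 in water (an assumption
# here, not a figure from the paper).
def henry_constant_ratio(T, T0=298.15, C=2400.0):
    """Ratio k_H(T)/k_H(T0): fraction of reference solubility at temperature T (K)."""
    return math.exp(C * (1.0 / T - 1.0 / T0))

# Warming the ocean surface by 1 K cuts CO2 solubility by roughly 2-3%,
# so a warmer ocean holds less CO2 and outgasses the difference.
drop = 1.0 - henry_constant_ratio(299.15)
print(f"Solubility lost per +1 K: {drop:.1%}")
```

This is the direction of causality in the Henry’s-Law framework: temperature drives [CO2], not the other way around.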

The greenhouse effect can be as good as rejected, and Henry’s Law stays firmly standing. We concluded that the effect of anthropogenic CO2 on the climate is negligible and that the effect of the ocean temperature on atmospheric [CO2] is exactly equal, in both sign and magnitude, to that expected on the basis of Henry’s Law [1].

Contemporary Correlation

Correlations are best shown in correlation plots: rather than showing both variables as time series, plot one against the other; if they are correlated, a straight line should result. Figure 2(b) shows a correlation plot of the same data as used for Figure 2(a) (but without averaging). We see that there is an apparent correlation between the two datasets, and we can fit a line to them to find the coefficient. The value is 10.2 mK/ppm. (See Table 2 for a summary of all data sets and models described here.)
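The 10.2 mK/ppm coefficient is simply the least-squares slope of such a correlation plot. A minimal sketch with synthetic, noise-free stand-in data (the real fit of course uses the Figure 2 datasets):

```python
# Least-squares slope of a correlation plot, pure Python. The data here are a
# synthetic stand-in: [CO2] rising ~80 ppm while temperature rises at an
# assumed 10.2 mK/ppm (illustrative, not the paper's measurements).
n = 61
co2 = [315.0 + 80.0 * i / (n - 1) for i in range(n)]   # ppm
temp = [0.0102 * (c - co2[0]) for c in co2]            # K

mean_x = sum(co2) / n
mean_y = sum(temp) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(co2, temp)) / \
        sum((x - mean_x) ** 2 for x in co2)

print(f"Fitted slope: {slope * 1000:.1f} mK/ppm")  # 10.2 mK/ppm
```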

This experimental value of 10.2 mK/ppm is highly interesting. It is neither close to the value expected for the greenhouse effect (1.4 mK/ppm), nor close to the value of Henry’s Law (100 mK/ppm). Even stranger, it is also not close to the value of the 600-ka ice core data (95 mK/ppm).

A missing response might be explained by a relaxation model: induced changes take time to materialize, as the system needs time to settle to the new equilibrium value. Responses larger than the model predicts, however, are not possible. A more likely cause for the divergence between model and data is that the correlation is merely coincidental [2]. In a Henry’s-Law (HL) analysis, the CO2 has no effect on the temperature; the concurrent temperature rise is merely a coincidence. That is because the [CO2] rise in contemporary data is possibly of anthropogenic origin and not (much) caused by the temperature rise. In the HL framework, the ca. 0.8-degree temperature rise has contributed a meager 8 ppm to the CO2 in the atmosphere. The rest might come from anthropogenic sources, or from nature itself.

If, on the other hand, we want to attribute the temperature rise to CO2, we must build in a delay, since most of the alleged greenhouse effect has apparently not occurred yet. Using the value of 95 mK/ppm (Table 2), the 80 ppm of Figure 2(b) should have produced (or will produce?) a staggering 7.6-degree temperature rise. A meager 0.8 degrees is observed after 60 years (13 mK/a; Figure 2). From these data we can deduce a virtual relaxation time τ of about half a millennium. We can thus expect a temperature rise of some tenths of a degree per decade in the coming centuries.
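The half-millennium figure follows from a simple exponential relaxation; a sketch of the arithmetic:

```python
import math

# Back out the virtual relaxation time tau implied above: the ice-core
# coefficient (95 mK/ppm) times the observed ~80 ppm rise predicts 7.6 K of
# equilibrium warming, yet only ~0.8 K has appeared after ~60 years.
predicted = 0.095 * 80   # K, equilibrium response if 95 mK/ppm held
observed = 0.8           # K, actual rise over the period
years = 60.0

# Simple exponential relaxation: observed = predicted * (1 - exp(-t/tau))
tau = -years / math.log(1.0 - observed / predicted)
print(f"Implied relaxation time: {tau:.0f} years")  # roughly half a millennium
```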

This causes a problem. Having set out to explain things by known physical laws, we have no idea what physical process might be the origin of this relaxation. The radiation balance of the atmosphere has a relaxation time of about a month and a half, as evidenced by the 30-day delay between shortest/longest day and coldest/warmest day [1]. No millennium-scale relaxation mechanisms are easily identifiable in the atmosphere, where the GHE resides. We can even exclude the oceans acting as thermal sinks for heat generated in the atmosphere, because then the effect in the atmosphere would initially be even larger than theoretically predicted, in an effect called overshoot. We thus conclude that, upon scrutiny, the (alleged) greenhouse effect also has to be rejected as a hypothesis to explain contemporary data.

Can the data be explained by Henry’s Law instead? The observed correlation is 10 mK/ppm, or conversely 100 ppm/K. That is a factor of 10 too big for Henry’s Law, and relaxation processes can only make the effect smaller. We thus exclude Henry’s Law as an explanation for the contemporary steady [CO2] rise in the atmosphere; it is not caused by the steady rise in temperature.

A Radiative Greenhouse Model

A “classic” greenhouse model is presented in which energy transfer is by radiation only: first with the atmosphere as a single body, then with the atmosphere as an infinite set of identical layers, and finally with a multi-layer model in which the atmosphere is in thermodynamic equilibrium, so that layers get thinner and colder going up.

Absorption in the Atmosphere

An intrinsic assumption we make here is that all incoming energy comes from radiation from the Sun. Heat coming from the Earth itself, from below the crust, is too small (about 50 mW/m2; authors’ estimate) to be significant. Moreover, all heat must be dissipated to the universe by radiation only; processes such as the escape of hot molecules from the top of the atmosphere are insignificant. This is a rather trivial assumption and will therefore not be further justified. Perhaps more questionable is the assumption that the atmosphere is a well-mixed chamber, meaning that all gases occur in the same ratios everywhere. The theoretical greenhouse effect is governed by optical absorption and emission processes in the atmosphere. As such, the Beer-Lambert rule of absorption plays an important role.

The greenhouse effect is often erroneously presented as being caused by the fact that σ_x (and thus also α, τ and absorbance) depends on the wavelength: visible sunlight can reach the surface while infrared terrestrial light cannot easily escape through the atmosphere. This schematic presentation is wrong, however, as we will discuss. It does not matter how and where the solar radiation is absorbed, or where terrestrial IR radiation is absorbed and re-emitted in the atmosphere, whether sunlight reaches the surface or not, etc. The only things that matter are the amount of radiation received by Earth (that is, the amount not reflected directly back into space) and the amount of IR radiation emitted.

Since the absorption coefficient depends on capture cross-sections as well as concentrations, pumping CO2 into the atmosphere might increase heat absorption in the atmosphere and might thus heat it up. However, at first sight the effect is probably minimal, because nearly all infrared light is already absorbed; at the top of the atmosphere, according to the Beer-Lambert Equation (Equation (2)), J(h) ≈ 0. We thus do not expect much effect from adding CO2 to the atmosphere as long as other channels remain open for emission. Imagine: if 99% of the light that can possibly be absorbed is already absorbed (say, 350 W/m2), doubling CO2 in the atmosphere will not double the absorption (to 700 W/m2), but just add something close to 1% to it (3.5 W/m2), as Equation (2) tells us. We call this the “forcing” of the atmosphere.
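The 1% figure can be checked directly from the Beer-Lambert exponential; a sketch with the text’s illustrative numbers:

```python
import math

# Beer-Lambert saturation: once nearly all absorbable IR is captured,
# doubling the absorber adds almost nothing. Numbers mirror the text's
# example: 99% of 350 W/m^2 already absorbed at current concentration.
J0 = 350.0             # W/m^2 available in the absorbable band (illustrative)
tau = -math.log(0.01)  # optical depth giving 99% absorption

absorbed_1x = J0 * (1.0 - math.exp(-tau))        # 346.5 W/m^2 at 1x CO2
absorbed_2x = J0 * (1.0 - math.exp(-2.0 * tau))  # ~350.0 W/m^2 at 2x CO2

print(f"Extra absorption from doubling: {absorbed_2x - absorbed_1x:.2f} W/m^2")
```

Doubling the optical depth moves absorption from 99% to 99.99%, an increase of about 3.5 W/m2, matching the text.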

We must remark at this moment that 254 K is the temperature of Earth as seen from outer space, irrespective of any greenhouse or other effect. If we, from outer space, point a radiometer at the planet, it will have a temperature signature of 254.0 K. (If we also include the visible light that is reflected (aS), then the total radiation power is equal to that of a black sphere with temperature T = (S/4σ)^(1/4) = 278.3 K.) The greenhouse effect does not change the apparent temperature of the planet as seen from outer space, but only that of a hidden layer (e.g., the solid surface). Radiation into space effectively comes from the atmosphere at an altitude where the temperature is 254.0 K, which is in the troposphere at about 6 km height.
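The 254 K value follows from a Stefan-Boltzmann balance on absorbed sunlight; a sketch using standard textbook values for the solar constant and albedo (assumptions here, which land within a degree of the paper's 254.0 K):

```python
# Effective emission temperature from a Stefan-Boltzmann balance.
# Solar constant and albedo are standard textbook values (assumptions).
SIGMA = 5.670e-8   # W/(m^2 K^4), Stefan-Boltzmann constant
S = 1361.0         # W/m^2, solar constant
albedo = 0.30      # fraction of sunlight reflected directly back to space

# Absorbed flux averaged over the sphere (factor 4: disc vs sphere area)
absorbed = (1.0 - albedo) * S / 4.0
T_eff = (absorbed / SIGMA) ** 0.25
print(f"Effective emission temperature: {T_eff:.0f} K")
```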

In this analysis we assume that radiation balances determine the atmosphere. The temperature can then be found from the radiation intensity by the inverse of the Stefan-Boltzmann function. With the emission α(U + D)dz depending linearly on height, this results in a fourth-root curve of height, with T = 288 K at the surface (z = 0) and about 210 K (−60 °C) at the top of the atmosphere. This is obviously incorrect. In the extreme, in this model attributing the entire greenhouse effect to CO2, doubling c = [CO2] will double α, and that will result in a temperature of T0 = 313.2 K, in other words a sensitivity of ΔT0/Δ[CO2] = (25.1 K)/(350 ppm) = 71.7 mK/ppm.

A Non-Uniform Atmosphere

In the analysis above, it was assumed that the atmosphere was a homogeneous, well-mixed closed box of height h with constant properties everywhere: pressure, concentrations, absorption constants, etc. (except for the radiation gradient of Figure 4(b)). A real atmosphere, on the other hand, has no upper boundary and is contained by gravity on one side, making it ever thinner with height. We can calculate the properties of such a system, starting with the ideal gas law.

Figure 6. Adiabatic, static atmosphere without heat sources or sinks. (a) Temperature as a function of height; the slope of this linear curve is called the lapse rate and is −6.49 K/km. The black open circles and “+” signs are the US standard atmosphere (Refs. [8] and [14], respectively). The green line is Equation (36). The vertical line at 254 K is where the outward radiation apparently comes from. (b) Pressure as a function of height. The black dots are from Ref. [8]. The blue dashed line is the classical barometric equation, Equation (25), and the green solid line is Equation (38). (c) Density as a function of height, according to Equation (39). (Parameters used: m = 4.82 × 10⁻²⁶ kg, c_p = 1.51 kJ/(kg·K), T0 = 15 ˚C, P0 = 1013.2 hPa, g = 9.81 m/s².) The curves here are indistinguishable from the empirical data; the atmosphere is apparently in thermodynamic equilibrium.
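The caption’s parameters reproduce the lapse rate and the barometric curve directly; a sketch under those stated values (the constant-lapse-rate pressure formula is the standard one and may differ in detail from the paper’s Equation (38)):

```python
# Reproduce the adiabatic profile from the quoted parameters:
# lapse rate Gamma = g/c_p and P(z) = P0 * (1 - Gamma*z/T0)^(g/(R*Gamma)).
g = 9.81            # m/s^2
cp = 1510.0         # J/(kg K), as quoted in the caption
m = 4.82e-26        # kg, mean molecular mass
kB = 1.380649e-23   # J/K, Boltzmann constant
T0 = 288.15         # K (15 C surface)
P0 = 1013.2         # hPa

R = kB / m          # specific gas constant, ~286 J/(kg K)
Gamma = g / cp      # lapse rate in K/m
print(f"Lapse rate: {Gamma * 1000:.2f} K/km")   # ~6.50 K/km, matching -6.49 K/km

z = 5000.0          # test altitude, m
T = T0 - Gamma * z
P = P0 * (T / T0) ** (g / (R * Gamma))
print(f"At 5 km: T = {T:.1f} K, P = {P:.0f} hPa")
```

The 5 km result (roughly 540 hPa) is close to the US standard atmosphere, which is the point of the figure.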

We have to make the very important observation that these curves and equations are independent of any absorption of radiation in the atmosphere, for instance the greenhouse effect discussed earlier (Figure 4(b)). The lapse rate is not caused by radiation but is a thermodynamic property of the atmosphere. That is, as long as there is thermodynamic equilibrium in the atmosphere, the curves are as given by the above three formulas, determined only by the surface temperature T0, the total air weight P0, the physical properties m and c_p of the gas, and the gravitational constant g. It might, of course, be the case that the atmosphere is not in equilibrium. In that case, mass and heat can be transported through convection, diffusion, conduction and radiation; the latter two transport only heat (no mass).

These results are phenomenologically equal to the case of the closed-box Beer-Lambert model. It has the same linear radiative forcing behavior of D(0), depending linearly on the total amount of absorbant in the atmosphere. For instance, doubling the total atmosphere with all constituents in it doubles P0, and that doubles the downward radiation (Equation (45)). Moreover, the dependence of temperature on the total amount of [CO2] behaves in the same way. If CO2 were the only gas contributing to the greenhouse effect in the atmosphere, the above number would imply a climate sensitivity of ΔT/Δ[CO2] = (25.1 K)/(350 ppm) = 72 mK/ppm. Now, estimates are given that CO2 contributes about 3.62% of the greenhouse effect [18]. Substituting σ′_x = 1.0362 σ_x gives a temperature of 289.2 K, and thus a climate sensitivity of s = ΔT/Δ[CO2] = (1.0 K)/(350 ppm) = 2.9 mK/ppm.
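The scaling arithmetic can be sketched as follows; the simple linear estimate lands slightly below the quoted 2.9 mK/ppm because the paper's actual substitution is mildly nonlinear:

```python
# Scale the all-GHE sensitivity down by CO2's share of the greenhouse effect,
# as the text does (the 3.62% share is the paper's cited estimate [18]).
full_effect_K = 25.1   # K of warming attributed to the whole greenhouse effect
co2_share = 0.0362     # CO2's estimated fraction of that effect
ppm = 350.0            # reference concentration

all_ghe_sensitivity = full_effect_K / ppm * 1000    # ~71.7 mK/ppm
co2_only = full_effect_K * co2_share / ppm * 1000   # ~2.6 mK/ppm (linear estimate)

print(f"All-GHE: {all_ghe_sensitivity:.1f} mK/ppm; CO2-only: {co2_only:.1f} mK/ppm")
```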

Failure of the Model

Now, as mentioned before, we have to conclude that this entire idea of an analysis is wrong, and it is only performed here to show what values we might get on the basis of the (faulty) analysis. Because of the aforementioned heat creep, and thermodynamic equilibrium in general, it does not matter where and how the planet is heated (where radiation is absorbed, etc.); what matters is only the total amount of power: (1 − a)S absorbed by the Earth system and U(h) emitted by it. In equilibrium they are equal.

For ease of calculation we put it all at the surface, but it makes no difference whatsoever. As a consequence, and even more important to observe, radiation coming from the surface, U(0), or from anywhere else in the atmosphere itself, should not be taken into account, since it does not add anything to the total energy input. It is merely a way of redistributing internal heat, just like transport by convection and evaporation, etc., the final distribution of which is given by the equations presented here, based on the ideal gas laws.

The idea that gases in the atmosphere work as some sort of “mirror” to reflect heat back to the surface is incorrect, because a very efficient and functional “mirror” already exists: Because the atmosphere is in thermodynamic equilibrium (as evidenced by the perfect fit of thermodynamic-equilibrium equations to empirical reality, see Figure 6) adding a heat flux F to the system from the top of the atmosphere to the bottom (see Figure 8)—for instance an absorption of IR and reemission downwards—will be fully counteracted by an equal flux F from the bottom of the atmosphere to the top. The net effect will always be zero!

Adding internal radiation to the radiation balance only, while ignoring the annulling other effects, would allow an atmosphere to boot-strap itself, somehow heating itself. No object can heat itself to higher temperatures when it is already in thermal equilibrium. The only radiation that matters is the radiation coming from the Sun, and it does not matter where and how it enters into the heat balance of the atmosphere. Heat creep (convection, evaporation and radiation) will redistribute the heat to result in the distribution given here (Figure 6): according to Sorokhtin, 66.56% by convection, 24.90% by condensation and 8.54% by radiation [19].

An added process (by multiple absorption-emission) might, at best, speed up the equilibration. There is nothing CO2 would add to the current heat balance in the atmosphere, if the outward radiation no longer comes from the surface of the planet, but from a layer high up in the atmosphere. As long as the radiation does not come from the surface, making the layer blacker (more emissive) will radiate—“mirror”—more heat downwards, which is irrelevant (since it will only speed up the rate of thermalization), but also more heat upwards (F’ in Figure 8), cooling down that layer and thermodynamically the surface layer and the entire planet with it! Opening a radiative channel to the cold universe will rather cool an object.

Thermodynamic-Radiative Atmospheric Model

A thermodynamic-radiative model is presented in which each part of the atmosphere is in thermodynamic equilibrium and, moreover, exchanges heat not only by radiation, but also by other ways, such as convection, etc.

This brings us to the final model that will be presented here. It is based on combining the thermodynamic and radiative analyses given above. Considering the fact that the atmosphere is in thermodynamic equilibrium, we must assume that absorbed radiation does get assimilated by the heat bath, just like in classic Beer-Lambert theory. Radiation absorbed is distributed instantaneously all over the atmosphere and the surface. The surface emits σT0^4; a part λ passes directly to the universe unhindered, and 1 − λ is absorbed. The atmosphere emits too, with total emissivity ε. Since, for radiation properties, it does not matter where the absorbing/emitting mass resides (z), but only how much radiation it receives and what temperature it has, we start by describing the atmosphere ρ and T not as a function of height z, but as a function of temperature T.

The total energy in the atmosphere can easily be calculated. Since in thermodynamic equilibrium all air packages have the same specific energy, given by cpT + gz, independent of z, all mass must have specific energy (per kg) equal to that at z = 0, namely cpT0. The total thermodynamic energy of the atmosphere is then Etotal = cpT0M. The atmosphere tries to shed energy by radiation, and receives energy from the surface. The surface also tries to shed energy, either by radiation into the universe, or by transfer to the atmosphere somehow (conduction, etc.). This is an intricate interplay of energy transfer.

We go back to the Beer-Lambert analysis. Once again, this is justified by the assumption that internal absorption is simply recycled back into the system. The heat is rapidly distributed to maintain thermodynamic equilibrium. Radiation leaving the surface, σT0^4, is partly absorbed by the atmosphere and partly transmitted. The absorbed energy goes back to the heat bath cpT0M and is redistributed therein. It does not heat up this bath (that would be counting the solar radiation twice), but reabsorbed energy is heat that was not allowed to escape the system and is thus blocked from cooling it.

We can now make a plot of the total radiation coming out of the atmosphere as a function of the total optical depth of the atmosphere (τ = σx M/m). Combining Equations (52) and (55), Figure 9 shows this fraction, which is the planetary emissivity ε, as a function of optical depth τ = σx M/m and thermodynamic molecular heat capacity η = mcp/k.

Figure 9. Left: Earth planetary emissivity, the fraction of the radiation emitted by the surface that escapes from the top of the atmosphere. 1 is like a black body, 0 is white. The red curve is the contribution from the surface and the blue curve from the atmosphere. The sum is ε, shown by the green curve. Right: The effect of augmenting the specific heat cp of air: an increment of the planetary emissivity, and thus a cooling of the surface if η = mcp/k increases, but also a heating up of the atmosphere, from where now more radiation originates. (Gray lines show the situation of the left figure; colored lines are for a doubling of cp.)

The effect of the atmosphere can be found by knowing the thermodynamic parameter and the optical parameter: more precisely, the molecular heat capacity η = mcp/k and the optical depth τ = σx M/m. The latter is complicated, since it is not simply a linear function of M; we have to know the complete spectrum. But before we continue, it has to be pointed out that the emissivity equation (Equation (56)) is a monotonically decreasing function of τ for any value of η. That means that increasing the optical depth of the atmosphere will always reduce the emissivity and increase the surface temperature, in contrast to earlier thoughts we had. Reabsorbed radiation goes back to the heat bath and gets a second chance to be radiated out to the universe, maybe through another channel: emitted at a wavelength for which the atmosphere is more transparent, or maybe from a place higher up in the atmosphere.

Comparing to Reality

Section 5 discusses some test cases of the model. They show mixed success. 


The relevant parameters of the NASA Mars Fact Sheet [22] are given in Table 4. Because most of the atmosphere consists of CO2, the average molecular mass is close to the value for CO2 (44.01 g/mol); the fact sheet gives 43.34 g/mol [23]. Therefore, the atmosphere has 3.955 kmol/m² of molecules. 95.32% of that is CO2, that is, 3.770 kmol/m². That is much more than above any point on Earth. Yet, the effect of all that CO2 is unmeasurable. The black body (atmosphereless) temperature is Tbb,♂ = 209.8 K, which can be calculated on the basis of the solar irradiance W♂ = 586.2 W/m² and the albedo of Mars (a♂ = 0.250): σTbb,♂⁴ = (1 − a♂)W♂/4. The real temperature of the Mars surface is just this 210 K, making the measured greenhouse effect within the measurement error.
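The Mars blackbody temperature quoted from the Fact Sheet follows directly from the Stefan-Boltzmann balance; a minimal check using only the irradiance and albedo given in the text:

```python
# Mars blackbody temperature from sigma*T^4 = (1 - a)*W/4,
# with W = 586.2 W/m^2 and a = 0.250 as quoted in the text.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def blackbody_temperature(irradiance_w_m2, albedo):
    """Equilibrium temperature of an airless, uniformly heated sphere."""
    absorbed = (1.0 - albedo) * irradiance_w_m2 / 4.0
    return (absorbed / SIGMA) ** 0.25

t_bb_mars = blackbody_temperature(586.2, 0.250)
print(round(t_bb_mars, 1))  # ≈ 209.8 K, matching the Fact Sheet value
```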

Of course, what is important is not so much how much the CO2 is absorbing, but how much the atmosphere in its entirety is absorbing; better to say, how much it is letting through. Even with CO2 fully saturated, nearly all radiated heat easily escapes the atmosphere. A tiny unmeasurable effect remains, and doubling it will have no effect. Imagine 1% of the spectrum is covered, in which part 90% of the radiation is absorbed; then 99.1% of all radiation escapes. Doubling this constituent makes the absorption in that 1% part go to 99%; still 99.01% of all radiation escapes. In this particular case of Mars, CO2 has little effect in whatever quantity it is present in the atmosphere.
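The 1%-band arithmetic above in a few lines, with the numbers exactly as given in the text:

```python
# Saturated-band arithmetic: a band covering 1% of the spectrum,
# with 90% absorption inside the band, blocks only 0.9% overall.

def escaping_fraction(band_width, band_absorption):
    """Fraction of total surface radiation escaping to space."""
    return 1.0 - band_width * band_absorption

before_doubling = escaping_fraction(0.01, 0.90)  # 99.1% escapes
after_doubling = escaping_fraction(0.01, 0.99)   # 99.01% still escapes

print(before_doubling, after_doubling)
```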


Ångström, in his classical work, wrote “[…] it is clear, first, that no more than about 16 percent of earth’s radiation can be absorbed by atmospheric carbon dioxide, and secondly, that the total absorption is very little dependent on the changes in the atmospheric carbon dioxide content, as long as it is not smaller than 0.2 of the existing value” [5].

This basically states that there is no further contribution to the greenhouse effect from CO2 for concentrations above about 60 ppm. It is another way of saying that a radiation window that is closed cannot contribute further to the greenhouse effect, as long as other windows are still open. This is not entirely true, however: absorption lines have tails that never saturate, so absorption can be larger. A HITRAN simulation of only CO2 absorption (with concentrations as found in the Earth atmosphere) results in an absorption of 17.4% (see Figure 11; direct emission 82.6%), close to the value estimated by Ångström. Yet, as we have seen for Mars, with 30 times more CO2, this absorption is 26.4%. It shows the complexity of the subject. This manuscript is not about numerical simulations, but about analytical understanding of the greenhouse effect. We will now make an empirical estimation in the framework of our analytical model.

The real emissivity of Earth (surface plus atmosphere) can easily be determined on the basis of empirical data, according to Equation (57), where we defined the emissivity through the ratio of outgoing radiation to the radiation emitted by the surface. Taking the thermodynamic parameters (m, cp) as constant and as used before (Table 3), from the equations we find that this value occurs for an optical depth of τ = 0.754 (see the green dot of Figure 9). We can even see that 77.7% (0.470) of the radiation going into space comes from the surface and 22.3% (0.135) from the atmosphere. This contrasts with the notion mentioned earlier that the radiation comes from 6 km altitude (most still comes from the surface), and also contrasts with the value of 28.5% direct emission found earlier for a radiation-only atmosphere. Yet, we also see that if the opacity of the atmosphere increases, more radiation, also in absolute terms, comes from the atmosphere. This is the cooling effect of the atmosphere described earlier.

Figure 13. Schematic picture of what happens at Earth. Two channels: one (A) is fully open and one (B) is nearly closed. Closing B further has little-to-no effect.

However, the situation is much closer to situation (f) of Figure 10. That is, a part of the spectrum emission is fully open, and the part of CO2 is as good as closed (shown black in the figure). This situation is depicted in Figure 13, with an open channel A and a closed channel B. Now, further closing the channel (B) that is already as good as closed has little-to-no effect, as long as a significant part of the rest of the spectrum is open.

Note that this is not envisaged in a radiation balance analysis.

In that model, once a heat package has “decided” to opt for a certain wavelength, the only way to make it out of the atmosphere is by multiple emission-absorption events, or by crawling its way back to the surface. As such, adding CO2 to a closed channel still has a lot of impact. In the current thermodynamic-radiative model, absorbed radiation is given back to the heat bath that is the surface plus the atmosphere (cpT0M), from where it can have a second chance of escaping into the universe, by the same or by a different channel. To say it another way, the best-case climate sensitivity of CO2 is zero. The optical length of CO2 in the atmosphere is about 25 m. That is, 25 meters up in the air the radiation emitted by the surface in the spectrum of CO2 is already attenuated by a factor e. In this 25-m layer resides only 1/1773th part of the atmosphere (and of the CO2). The total transmission of the entire atmosphere is thus exp(−1773), which any calculator shows as zero. Doubling the CO2 in the atmosphere will have no measurable effect: exp(−3546) ≈ 0. Heat had no chance of escaping to the universe through this channel, and now even less so. As long as there is a sliver of the emission spectrum for which the atmosphere is transparent, the effect of doubling agents such as CO2, whose spectrum is close to saturation, is close to nil. This is the lower limit of the effect.
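The attenuation figures above can be checked directly; in double-precision arithmetic the transmission underflows to exactly 0.0, which is the “any calculator shows zero” remark:

```python
import math

# Transmission of the saturated CO2 band through the whole column:
# one e-folding every 25 m, and 1773 such layers in the atmosphere.
optical_depth = 1773

transmission = math.exp(-optical_depth)            # underflows to 0.0
transmission_doubled = math.exp(-2 * optical_depth)

print(transmission, transmission_doubled)  # 0.0 0.0
```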


In this thermodynamic analysis, the temperature at a point on the planet is, for a certain radiative input (1 − a)S, mainly determined by the altitude z. The radiative greenhouse effect states that it mainly depends on the total amount of carbon dioxide floating above the point. To test these hypotheses, we can look at cities with the same or similar radiative solar input, at the same latitude on the planet, but at different altitudes. Without doing an exhaustive study, we take as example three neighboring cities in the south of Spain, namely Sevilla, Córdoba and Granada, each at a different elevation (Table 5).

The question now is: why is Granada not much warmer in 2019 than Sevilla was in 1951? It is actually still colder. This seriously undermines the idea that carbon dioxide determines the temperature on our planet. Figure 14 plots the temperature of these cities versus the carbon content above them. The linear-regression quality parameter is R² = 0.21, meaning that temperature is not well correlated with [CO2].


When we look at reality, the greenhouse effect on Venus is enormous. Comparing the real temperature of 737 K with the blackbody temperature of only 226.6 K, based on the solar radiance and albedo, we determine that the emissivity of Venus is very small: 0.00894. For these values we see that the radiation no longer comes from the surface at all, but instead from the atmosphere, which is very opaque, with an optical depth τ of about 81 (see Figure 12). Venus connects to the universe at a high altitude in the atmosphere. The heat finds its way to the surface by thermodynamic means, resulting in a high surface temperature.
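The Venus emissivity figure follows from the same Stefan-Boltzmann ratio; a one-line check with the two temperatures quoted above:

```python
# Planetary emissivity as the fourth power of the ratio of
# blackbody temperature to actual surface temperature.

def planetary_emissivity(t_blackbody, t_surface):
    return (t_blackbody / t_surface) ** 4

eps_venus = planetary_emissivity(226.6, 737.0)
print(round(eps_venus, 5))  # ≈ 0.00894, as quoted
```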

Geological Time Scales

Figure 16. (a) (Holocene) ten-thousand-year time scale data of temperature (blue curve) and [CO2] (purple curve) of Greenland; (b) A correlation curve shows an absence of correlation between the two on this time scale. Plots made with the help of WebPlotDigitizer 4.2 [29] from a plot based on data of GISP 2 (temperature) and EPICA C (CO2), found at Climate4You [31].

Another experiment nature throws at us is the geological time scale, from times long before humans appeared on this planet. Figure 15 shows these data, and it is obvious that on this time scale there is no correlation between carbon-dioxide concentrations in the air and surface temperatures. In fact, a fit to the data results in a correlation of dT/d[CO2] = −0.43 mK/ppm, which is probably accidental, because we do not know any theory that might explain an inverse correlation between the two quantities. We see a similar non-correlation in the Holocene data of Greenland, see Figure 16, where a (pseudo)correlation two orders of magnitude larger than in the paleontological data of Figure 15 is found. These results undermine the hypothesis that only CO2 is climate forcing, something that some climatologists claim.

Other Effects: Feedback, Delay, Water

Section 6 augments the model by including feedback and secondary effects, such as water. It also tries to establish relaxation times of the system.


We might think that the atmosphere has not yet had time to reach the new equilibrium. The observed effects would then always be less than the calculated ones. For the greenhouse effect, however, we need to explain why the signal is larger than theory predicts. Considering that our calculations may be wrong, we can still make an estimation of how long it takes to reach equilibrium, and to turn the observed short-term contemporary 10.2 mK/ppm into the observed long-term 95 mK/ppm. The specific heat capacity of air is cp = 1.51 kJ/(K·kg). The pressure is 1013.2 hPa, so the column mass density is M = P0/g = 10.3 × 10³ kg/m². We found a radiative forcing of w = 8.6 mW/m² per ppm (Equation (61)), and a temperature effect of ∆T/∆[CO2] = 3.3 mK/ppm (Equation (60)). The characteristic temperature adjustment time of just the atmosphere alone is then 46 days, about a month and a half, a value very similar to the one we found empirically in the phase shift of yearly-periodic solar radiation and temperature data, namely about 1.2 months [32], and to a simple relaxation analysis of daily temperature variations, which gives 23 days [1]. We can thus exclude any substantial delay effects in the greenhouse effect [CO2] → T on the time scales of the contemporary and ice-core-drilling datasets, 60 a and 600 ka, respectively. On the other hand, we can expect long delays between T and [CO2] in the framework of Henry's Law. Imagine the atmosphere warms up for some reason (maybe solar activity). This warmed-up air must then heat the relevant layer of the ocean and expose this layer to the surface, where the surplus CO2 can outgas.
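As a sanity check on the 46-day figure, here is the relaxation-time arithmetic, assuming the form τ = cp·M·∆T/w (an assumption; the excerpt does not show the exact formula). Note that it reproduces 46 days with the standard dry-air value cp ≈ 1.005 kJ/(kg·K), while the cp quoted in the extracted text (1.51 kJ/(K·kg)) gives about 69 days:

```python
# Atmospheric temperature-relaxation time, tau = cp * M * dT / w
# (assumed form; cp values compared, see lead-in).
P0 = 101320.0          # Pa, surface pressure (1013.2 hPa)
G = 9.81               # m/s^2, gravitational acceleration
W_FORCING = 8.6e-3     # W/m^2 per ppm, Equation (61)
DT_PER_PPM = 3.3e-3    # K per ppm, Equation (60)
SECONDS_PER_DAY = 86400.0

column_mass = P0 / G   # ~10.3e3 kg/m^2, matching the text

def tau_days(cp_j_per_kg_k):
    energy = cp_j_per_kg_k * column_mass * DT_PER_PPM  # J/m^2 per ppm
    return energy / W_FORCING / SECONDS_PER_DAY

print(round(tau_days(1005.0)))  # 46 days, with standard dry-air cp
print(round(tau_days(1510.0)))  # 69 days, with the cp quoted in the text
```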

Water Effect

Water has several effects. First, it will change the specific heat of the air (cp) and thus the lapse rate of the atmosphere, Equation (36). This will slightly lower the temperature at the surface. As a secondary effect, it will dramatically increase the absorption cross section for infrared, and this will change the ground temperature T0. The feedback effect of water on outgoing radiation is beyond the Barkhausen criterion, and thus any addition of water to the atmosphere, however tiny, would result in a run-away scenario. However, this effect is fully canceled by the albedo effect, resulting in a net negative effect of water on the temperature.


We have analyzed the greenhouse effect here using fully analytical techniques, without resorting anywhere to finite-element calculations. This gave important insight into the phenomenon. An important conclusion is that an analysis in terms of radiation balances only cannot explain the situation in the atmosphere. In the extreme case, a differential equation of layers with absorption coefficients, etc., gave the same results as a much simpler two-box mixed-chamber model. However, the underlying assumptions in these calculations are not physical.

Therefore we set out to model the greenhouse effect ab initio, and came up with the thermodynamic-radiation model. The atmosphere is close to thermodynamic equilibrium and based on that we can calculate where and how radiation is absorbed and emitted. This model can explain phenomenologically and analytically how big the effect of the atmosphere is, specifically Equations (56) and (58).

Continuing with the reasoning, we find that the alleged greenhouse effect cannot explain the empirical data; orders of magnitude are missing. Henry's Law, the outgassing of the oceans, can easily explain all the observed phenomena.

Moreover, the greenhouse hypothesis—as presented here—cannot explain the atmosphere on Mars, nor can it explain the geological data, where no correlation between [CO2] and temperature is observed. Nor can it explain why a different correlation is observed in contemporary data of the last 60 years compared to historical data (600 thousand years).

We thus reject the anthropogenic global warming (AGW) hypothesis, both on empirical grounds and on the basis of theoretical analysis.

Oceans Erase Last Summer’s Warming

The best context for understanding decadal temperature changes comes from the world’s sea surface temperatures (SST), for several reasons:

  • The ocean covers 71% of the globe and drives average temperatures;
  • SSTs have a constant water content, (unlike air temperatures), so give a better reading of heat content variations;
  • A major El Nino was the dominant climate feature in recent years.

HadSST is generally regarded as the best of the global SST data sets, and so the temperature story here comes from that source, the latest version being HadSST3.  More on what distinguishes HadSST3 from other SST products at the end.

The Current Context

The chart below shows SST monthly anomalies as reported in HadSST3 starting in 2015 through January 2020.
A global cooling pattern is seen clearly in the Tropics since its peak in 2016, joined by NH and SH cycling downward since 2016.  In 2019 all regions had been converging to reach nearly the same value in April.

Then the NH rose exceptionally, by almost 0.5C over the four summer months, in August exceeding all previous NH summer peaks since 2015. In the four succeeding months, that warm NH pulse reversed sharply. The January NH anomaly is little changed from December. SH and Tropics SSTs have bumped upward since September, but despite that the global anomaly dropped a little due to strong NH cooling. The Global anomaly is now the same as in June 2019.

Note that higher temps in 2015 and 2016 were first of all due to a sharp rise in Tropical SST, beginning in March 2015, peaking in January 2016, and steadily declining back below its beginning level. Secondly, the Northern Hemisphere added three bumps on the shoulders of Tropical warming, with peaks in August of each year.  A fourth NH bump was lower and peaked in September 2018.  As noted above, a fifth peak in August 2019 exceeded the four previous upward bumps in NH.

And as before, note that the global release of heat was not dramatic, due to the Southern Hemisphere offsetting the Northern one.  The major difference between now and 2015-2016 is the absence of Tropical warming driving the SSTs.

A longer view of SSTs

The graph below  is noisy, but the density is needed to see the seasonal patterns in the oceanic fluctuations.  Previous posts focused on the rise and fall of the last El Nino starting in 2015.  This post adds a longer view, encompassing the significant 1998 El Nino and since.  The color schemes are retained for Global, Tropics, NH and SH anomalies.  Despite the longer time frame, I have kept the monthly data (rather than yearly averages) because of interesting shifts between January and July.

1995 is a reasonable (ENSO neutral) starting point prior to the first El Nino.  The sharp Tropical rise peaking in 1998 is dominant in the record, starting Jan. ’97 to pull up SSTs uniformly before returning to the same level Jan. ’99.  For the next 2 years, the Tropics stayed down, and the world’s oceans held steady around 0.2C above 1961 to 1990 average.

Then comes a steady rise over two years to a lesser peak Jan. 2003, but again uniformly pulling all oceans up around 0.4C.  Something changes at this point, with more hemispheric divergence than before. Over the 4 years until Jan 2007, the Tropics go through ups and downs, NH a series of ups and SH mostly downs.  As a result the Global average fluctuates around that same 0.4C, which also turns out to be the average for the entire record since 1995.

2007 stands out with a sharp drop in temperatures so that Jan.08 matches the low in Jan. ’99, but starting from a lower high. The oceans all decline as well, until temps build peaking in 2010.

Now again a different pattern appears.  The Tropics cool sharply to Jan 11, then rise steadily for 4 years to Jan 15, at which point the most recent major El Nino takes off.  But this time in contrast to ’97-’99, the Northern Hemisphere produces peaks every summer pulling up the Global average.  In fact, these NH peaks appear every July starting in 2003, growing stronger to produce 3 massive highs in 2014, 15 and 16.  NH July 2017 was only slightly lower, and a fifth NH peak still lower in Sept. 2018.

The highest summer NH peak came in 2019, only this time the Tropics and SH are offsetting rather than adding to the warming. Since 2014 the SH has played a moderating role, offsetting the NH warming pulses. Now, in January 2020, last summer's unusually high NH SSTs have been erased. (Note: these are high anomalies on top of the highest absolute temps in the NH.)

What to make of all this? The patterns suggest that in addition to El Ninos in the Pacific driving the Tropic SSTs, something else is going on in the NH.  The obvious culprit is the North Atlantic, since I have seen this sort of pulsing before.  After reading some papers by David Dilley, I confirmed his observation of Atlantic pulses into the Arctic every 8 to 10 years.

But the peaks coming nearly every summer in HadSST require a different picture.  Let’s look at August, the hottest month in the North Atlantic from the Kaplan dataset.
The AMO Index is from Kaplan SST v2, the unaltered and not detrended dataset. By definition, the data are monthly average SSTs interpolated to a 5×5 grid over the North Atlantic, basically 0 to 70N. The graph shows warming began after 1992 up to 1998, with a series of matching years since. Because the N. Atlantic has partnered with the Pacific ENSO recently, let's take a closer look at some AMO years in the last 2 decades.
This graph shows monthly AMO temps for some important years. The Peak years were 1998, 2010 and 2016, with the latter emphasized as the most recent. The other years show lesser warming, with 2007 emphasized as the coolest in the last 20 years. Note the red 2018 line is at the bottom of all these tracks. The black line shows that 2019 began slightly cooler, then tracked 2018, then rose to match previous summer pulses, before dropping the last four months to be slightly above 2018 and below other years.


The oceans are driving the warming this century. SSTs took a step up with the 1998 El Nino and have stayed there with help from the North Atlantic, and more recently the Pacific northern “Blob.” The ocean surfaces are releasing a lot of energy, warming the air, but eventually will have a cooling effect. The decline after 1937 was rapid by comparison, so one wonders: how long can the oceans keep this up? If the pattern of recent years continues, NH SST anomalies may rise slightly in coming months, but once again ENSO, which has weakened, will probably determine the outcome.

Footnote: Why Rely on HadSST3

HadSST3 is distinguished from other SST products because HadCRU (Hadley Climatic Research Unit) does not engage in SST interpolation, i.e. infilling estimated anomalies into grid cells lacking sufficient sampling in a given month. From reading the documentation and from queries to Met Office, this is their procedure.

HadSST3 imports data from gridcells containing ocean, excluding land cells. From past records, they have calculated daily and monthly average readings for each grid cell for the period 1961 to 1990. Those temperatures form the baseline from which anomalies are calculated.

In a given month, each gridcell with sufficient sampling is averaged for the month and then the baseline value for that cell and that month is subtracted, resulting in the monthly anomaly for that cell. All cells with monthly anomalies are averaged to produce global, hemispheric and tropical anomalies for the month, based on the cells in those locations. For example, Tropics averages include ocean grid cells lying between latitudes 20N and 20S.

Gridcells lacking sufficient sampling that month are left out of the averaging, and the uncertainty from such missing data is estimated. IMO that is more reasonable than inventing data to infill. And it seems that the Global Drifter Array displayed in the top image is providing more uniform coverage of the oceans than in the past.
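The anomaly procedure described above can be sketched in a few lines. This is an illustration of the logic only, not Met Office code; all cell names and the toy numbers are invented:

```python
# Sketch of the HadSST-style anomaly procedure described above:
# subtract each cell's 1961-1990 baseline for the month, then average
# only the cells that were actually sampled -- no infilling.

def monthly_anomalies(observed, baseline):
    """observed/baseline: dicts of cell id -> SST (deg C) for one month.
    Cells without sufficient sampling are absent from `observed`."""
    return {cell: observed[cell] - baseline[cell]
            for cell in observed if cell in baseline}

def regional_average(anomalies, region_cells):
    """Average anomaly over a region, skipping unsampled cells."""
    sampled = [anomalies[c] for c in region_cells if c in anomalies]
    return sum(sampled) / len(sampled) if sampled else None

# Toy month: cells nh1, nh2 and tr1 sampled; tr2 unsampled (left out).
observed = {"nh1": 18.4, "nh2": 17.9, "tr1": 26.1}
baseline = {"nh1": 18.0, "nh2": 18.0, "tr1": 26.0, "tr2": 25.0}

anoms = monthly_anomalies(observed, baseline)
print(regional_average(anoms, ["nh1", "nh2", "tr1", "tr2"]))
```

The key design point mirrored here is that `tr2` contributes nothing to the average; its absence goes into the uncertainty estimate rather than being infilled.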


USS Pearl Harbor deploys Global Drifter Buoys in Pacific Ocean

Jordan Peterson’s Anguish Deserves Our Empathy

Prof. Jordan Peterson in March 2018. Craig Robertson /Postmedia/File

Rex Murphy writes at National Post: Jordan Peterson's personal torment. Excerpts in italics with my bolds.

I cannot think of any politician, thinker or novelist who has sent so much comfort and aspiration to so many people

It is very difficult to believe that there is anyone, except the young, who has not experienced serious illness or been witness to the suffering of a loved one. It is, alas, part of the human condition. Those who have endured such moments do not need to hear what they are like or how profoundly unsettling and painful they are.

We suffer most when those who are closest to us suffer. That is our pain, but it is also our glory that we often feel more deeply about those who we hold closest to our hearts than we feel for ourselves. No one escapes such moments. And when they do occur, people outside the close circle of the man or woman caught in pain or in peril of death register an instinctive sympathy for those so entoiled.

These are, I grant, sombre thoughts, but they occur naturally following a viewing of Mikhaila Peterson’s video about her father, Jordan Peterson’s, afflictions during the past year. Mikhaila Peterson brought dignity and poise to what was clearly an effort of great weight. I think it also worth remarking that it was good of her to be so open and direct about his circumstances, as the many who have found some strength and direction from her father’s words were anxiously waiting for some news.

It was an abrupt moment when Jordan Peterson’s world tour was halted and very many were left wondering and worrying about him. One of the singular features of Peterson’s fame was how many people felt a genuine concern for his personal welfare.

I cannot claim to be a friend of Peterson’s, except in the sense that I would wish to be so. I have met him only once, for an interview, which he gave while in some fragility and which I willingly would have foregone, and assured him so. This is not to note any generosity on my part, which is trivial in the context of his condition, but to highlight that he did the interview because he had committed to it. His doing so was a signal of his character. This is a man who’s willing to put aside his own considerations to oblige a commitment.

But the signal I received was that of a man who, while in personal turmoil, and who then was facing the possible death of his wife, in concurrence with his own fatigue following the explosion of his fame, would honour a minor engagement that could easily be deferred, because he thought he should. Unlike him, I would not have had the strength to carry it through.

But to go beyond that one small moment, I write this now as a gesture of good will towards the person I called a “good man” at the end of that exchange. He’s having a very hard time. It may be a small thing, but I think it’s only fair that someone who has helped so many others through their hard times should be offered a little acknowledgement and appreciation during his own. I seriously hope it’s not presumptuous on my part.

I cannot think of any politician, thinker or novelist who has sent so much comfort and aspiration to so many people. Peterson is not a missionary. He preaches no creed. But out of his deep reflections, his clinical experience, his dedicated exploration of why so many people are “offside,” “removed” or “isolated” in modern society, he has found some elements of a message that revives their hope and reinvigorates their sense of dignity.

I have met many people who have been spoken to by Jordan Peterson’s words. When I interviewed him, it was surely his wife and his family’s torment that were at the front of his mind. But he also carried another burden: the memories of all those he had met, however briefly, who transferred some of their pain to him. It is a true sorrow that the expense of energy he gave to his message has taxed him so, and that the tribulations of his family life combined with his fatigue came at such a cost to himself.

I am sure that in passing on regards and best wishes to Peterson that I am but a single voice speaking the words of thousands. I note that some very shallow and mean people are finding joy in Peterson’s struggle. May that joy fill their cup, as it is the vinegar of cheap minds and cheaper souls. Enjoy yourselves. Put it on your resumés that in the absence of any other purpose in life, you like to mock the pain of a better person and insult his family in a woeful moment.


I appreciated Jordan Peterson's incisive interviews and articles, and his book Maps of Meaning inspired me a year ago to write a series of posts highlighting insights from his journey. I excerpted text that spoke to me and added images of contemporary examples in parallel to his discussions of classical myths. The first post provides an overview as well as the first theme. Titles are links to the posts.

Cosmic Dichotomy: Peterson’s Pearls (1)

Cosmic Arena: Peterson’s Pearls (2)

Cosmic Heroes: Peterson’s Pearls (3)

Cosmic Evil: Peterson’s Pearls (4)

Cosmic Rebirth: Peterson’s Pearls (5)