Value of Decarbonizing Pledges? Net Zero.

There are two reasons why Bill Gates, hundreds of corporations and many countries are backtracking on commitments to decarbonize.  The first is disbelief of the false advertising that the planet is in danger and can be saved by Net Zero efforts. The second is sobering up to the fact that decarbonizing the world is an impossible fantasy.  This post includes content from Gary Abernathy on the first point and some quotes from Vaclav Smil’s recent paper on the second.

  1.  Abernathy writes at Real Clear Energy: In Practice, ‘Net Zero’ Was Exactly How Much Such Pledges Were Worth.  Excerpts in italics with my bolds and added images.

The public “net zero” pledges by countless corporate and political entities in recent years were always baffling. How could the United States or much of the industrialized world reach “net zero” emissions without destroying modern living?

As a reminder, “net zero” is a term coined to illustrate a goal of “eliminating greenhouse gas emissions produced by human activities, which is accomplished by decreasing global emissions and abating them from the atmosphere,” as defined by Net0.com, a company that describes itself as “the market leader in AI-First Sustainability, enabling governments and enterprises worldwide to enhance their environmental performance and decarbonize profitably.”

Net0 posits that “the global scientific community agrees that to mitigate the most severe impacts of climate change, we must reduce worldwide net human-generated carbon dioxide emissions by approximately 45 percent from their 2010 levels by the year 2030 and achieve net zero emissions by around 2050.”

In a political atmosphere shaming anyone who didn’t join the climate cult – led in the U.S. by the Biden administration and globally by the U.N. – attempting to outdo each other for the most aggressive “net zero” policy was all the rage.

“As of June 2024, 107 countries… had adopted net-zero pledges either in law, in a policy document such as a national climate action plan or a long-term strategy, or in an announcement by a high-level government official,” boasted the United Nations.

“More than 9,000 companies, over 1,000 cities, more than 1,000 educational institutions, and over 600 financial institutions have joined the Race to Zero, pledging to take rigorous, immediate action to halve global emissions by 2030.”

But as politicians know, promises and actions are often unrelated. Most people endowed with even a modicum of common sense and a grade-school understanding of basic science knew that meeting “net zero” goals would require a reduction in the use of our most affordable, effective and reliable energy sources to a degree that would devastate modern economies.

The fact that “net zero” pledges were nothing but a cruel joke was made clear last month in a story by NPR headlined, “Leaders promised to cut climate pollution, then doubled down on fossil fuels.” Most thinking people were as surprised by that headline as by discovering wet water, hot fire or flying birds. It was not necessary to read further. “Of course,” they said to themselves, moving on to the next story.

But there are, sadly, climate cult converts who, in their shock, likely needed more details.

They discovered: “The world is producing too much coal, oil and natural gas to meet the targets set 10 years ago under the Paris Agreement, in which countries agreed to limit climate pollution and avoid the worst effects of global warming,” NPR reported.  The story said:

“A new report, led by the nonprofit research group Stockholm Environment Institute, shows countries plan to produce more than twice the amount of fossil fuels in 2030 than would be consistent with limiting global heating to 1.5 degrees Celsius (2.7 degrees Fahrenheit).”

For the true believers, here’s the real punch to the gut: “The SEI report shows the 20 most polluting countries, including China, the U.S. and India, actually plan to produce even more fossil fuels than they did two years ago, when the report was last updated.”

Of course, as he did in his first term, President Trump is pulling the U.S. out of the Paris Agreement as he unleashes American industry and works to ensure energy affordability, independence and security for the nation. Legislation to roll back taxpayer subsidies for “renewables” and return to “reliables” has already been passed or introduced in various states and is soon likely to be fortified at the federal level.

After wasting billions of tax dollars on wind and solar subsidies that could have been directed toward schools, healthcare or other real needs, the fever is finally breaking. The world is slowly but surely awakening from the delusions of climate zealots who insisted that we were on the verge of catastrophe with constantly worsening weather disasters.

Just last May, for example, NOAA, the National Oceanic and Atmospheric Administration, predicted an “above-normal 2025 Atlantic hurricane season.” And just a few months earlier, PBS NewsHour reported on a study showing that “human-caused climate change made Atlantic hurricanes about 18 miles per hour (29 kilometers per hour) stronger in the last six years.”

The message was clear: more hurricanes, stronger hurricanes. This year’s reality so far?

“The 2025 Atlantic hurricane season is the first time in 10 years that a hurricane has not made landfall in the United States through the end of September,” according to American Press. While “hurricane season” extends through November, September is usually the busiest month.

The weather is – and has always been – unpredictable. Severe weather events like hurricanes, tornadoes, monsoons, floods, blizzards and drought have always been with us, and always will. The attempt to demonize humankind for the frequency and severity of the weather has been politically motivated and economically disastrous.

“Net zero” pledges are being revealed for the false promises they most often were, designed mainly to win plaudits from the Lecturing Left. For leaders grounded in facts, real-world needs have always meant that no one is easing off the gas.

2. Vaclav Smil’s paper is at the Fraser Institute: Halfway between Kyoto and 2050.  Overview and keynote section are reprinted below with my bolds and added images.

      Contents
Executive Summary
Introduction
1. Carbon in the Biosphere
2. Energy Transitions
3. Our Record So Far
4. What It Would Take to Reverse the Past Emission Trend
5. The Task Ahead: Zero Carbon Electricity and Hydrogen
6. Costs, Politics, and Demand
7. Realities versus Wishful Thinking
8. Closing Thoughts
Executive Summary

♦  This essay evaluates past carbon emission reduction and the feasibility of eliminating fossil fuels to achieve net-zero carbon by 2050.

♦  Despite international agreements, government spending and regulations, and technological advancements, global fossil fuel consumption surged by 55 percent between 1997 and 2023.  And the share of fossil fuels in global energy consumption has only decreased from nearly 86 percent in 1997 to approximately 82 percent in 2022.

♦  The first global energy transition, from traditional biomass fuels such as wood and charcoal to fossil fuels, started more than two centuries ago and unfolded gradually.

♦  That transition remains incomplete, as billions of people still rely on traditional biomass energies for cooking and heating.

♦  The scale of today’s energy transition requires approximately 700 exajoules of new non-carbon energies by 2050, equivalent to about 38,000 projects the size of BC’s Site C, or 39,000 the size of Muskrat Falls.

♦  Converting energy-intensive processes (e.g., iron smelting, cement, and plastics) to non-fossil alternatives requires solutions not yet available for large-scale use.

♦  The energy transition imposes unprecedented demands for minerals including copper and lithium, which require substantial time to locate and develop mines.

♦  To achieve net-zero carbon, affluent countries will incur costs of at least 20 percent of their annual GDP.

♦  While global cooperation is essential to achieve decarbonization by 2050, major emitters such as the United States, China, and Russia have conflicting interests.

♦  To eliminate carbon emissions by 2050, governments face unprecedented technical, economic and political challenges, making rapid and inexpensive transition impossible.
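The scale bullet above can be checked with quick back-of-envelope arithmetic. The Site C annual output used below (~5,100 GWh) is a BC Hydro planning figure, an assumption not stated in the excerpt itself:

```python
# Rough check: how many Site C-sized projects would supply 700 EJ/yr?
site_c_gwh_per_year = 5_100          # assumed annual output of BC's Site C
joules_per_gwh = 3.6e12              # 1 GWh = 3.6e12 joules
site_c_ej = site_c_gwh_per_year * joules_per_gwh / 1e18   # ~0.018 EJ/yr

target_ej = 700                      # new non-carbon energy needed by 2050
projects_needed = target_ej / site_c_ej
print(round(projects_needed))        # on the order of 38,000 projects
```

The result lands right at Smil's figure, which is the point: the required build-out is tens of thousands of large hydro-dam equivalents in 25 years.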

7. Realities versus Wishful Thinking

Since the world began to focus on the need to end the combustion of fossil fuels, we have not made the slightest progress in the goal of absolute global decarbonization: emission declines in many affluent countries were far smaller than the increased consumption of coal and hydrocarbons in the rest of the world. As a result, by 2023 the absolute reliance on fossil carbon had risen by 54 percent worldwide since the Kyoto commitment. Moreover, a significant part of the emission declines in affluent countries has been due to their deindustrialization, to transferring carbon-intensive industries abroad, above all to China, with the displaced production raising the carbon-intensive industrial shares originating in Asia.

A recent international analysis of 1,500 climate policies around the world concluded that only 63 of them, about 4 percent, were successful in reducing emissions.

Denmark, with half of its electricity now coming from wind, is often pointed out as a particular decarbonization success: since 1995 it cut its energy-related emissions by 56 percent (compared to the EU average of about 22 percent)—but, unlike its neighbours, the country does not produce any major metals (aluminum, copper, iron, or steel), it does not make any float glass or paper, does not synthesize any ammonia, and it does not even assemble any cars. All these products are energy-intensive, and transferring the emissions associated with their production to other countries creates an undeservedly green reputation for the country doing the transferring.

Given the fact that we have yet to reach the global carbon emission peak (or a plateau) and considering the necessarily gradual progress of several key technical solutions for decarbonization (from large-scale electricity storage to mass-scale hydrogen use), we cannot expect the world economy to become carbon free by 2050. The goal may be desirable, but it remains unrealistic. The latest International Energy Agency World Energy Outlook report confirms that conclusion. While it projects that energy-related CO2 emissions will peak in 2025, and that the demand for all fossil fuels will peak by 2030, it also anticipates that only coal consumption will decline significantly by 2050 (though it will still be about half of the 2023 level), and that the demand for crude oil and natural gas will see only marginal changes by 2050 with oil consumption still around 4 billion tons and natural gas use still above 4 trillion cubic meters a year (IEA, 2023d).

Wishful thinking or claiming otherwise should not be used or defended by saying that doing so represents “aspirational” goals. Responsible analyses must acknowledge existing energy, material, engineering, managerial, economic, and political realities. An impartial assessment of those resources indicates that it is extremely unlikely that the global energy system will be rid of all fossil carbon by 2050. Sensible policies and their vigorous pursuit will determine the actual degree of that dissociation, which might be as high as 60 or 65 percent. More and more people are recognizing these realities, and fewer are swayed by the incessant stream of miraculously downward-bending decarbonization scenarios so dear to demand modelers.

Long-term global energy forecasts offering numbers for overall demand or supply and for shares contributed by specific sources or conversions are beyond our capability: the system is too complex and too open to unforeseen but profound perturbations for such specificity. However, skepticism in constructing long-term estimates will lessen the extent of inevitable errors. Here is an example of a realistic 2023 forecast done by Norwegian risk management company DNV that has been echoed recently by other realistic assessments. After noting that global energy-related emissions are still climbing (but might peak in 2024, when the transition would effectively begin), it concludes that by 2050 we will move from the present roughly 80 percent fossil/20 percent non-fossil split to a 48 percent/52 percent ratio, with primary energy from fossil fuels declining to nearly two-thirds of today’s level but still remaining at about 314 EJ—in other words, about as high as it was in 1995 (DNV, 2023).

Again, that is what any serious student of global energy transitions would expect. Individual components change at different speeds and notably rapid transformations are possible, but the overall historical pattern quantified in terms of primary energies is one of gradual changes. Unfortunately, modern forecasting in general and the anticipation of energy advances in particular have an unmistakable tendency toward excessive optimism, exaggeration, and outright hype (Smil, 2023b). During the 1970s many people believed that by the year 2000 all electricity would come not just from fission, but from fast breeder reactors, and soon afterwards came the promises of “soft energy” taking over (Smil, 2000).

Belief in near-miraculous tomorrows never goes away. Even now we can read declarations claiming that the world can rely solely on wind and PV by 2030 (Global100REStrategyGroup, 2023). And then there are repeated claims that all energy needs (from airplanes to steel smelting) can be supplied by cheap green hydrogen or by affordable nuclear fusion. What does this all accomplish besides filling print and screens with unrealizable claims? Instead, we should devote our efforts to charting realistic futures that consider our technical capabilities, our material supplies, our economic possibilities, and our social necessities—and then devise practical ways to achieve them. We can always strive to surpass them—a far better goal than setting ourselves up for repeated failures by clinging to unrealistic targets and impractical visions.

 

Placing Melissa in History

Climate media have fallen in love with Melissa, many outlets blaming “climate change”, i.e. CO2, for her strength and destructive power.  No surprise that Imperial College London (which foisted its COVID pandemic models upon us) reports that its IRIS model confirms a “rapid attribution” claim.  No doubt there will be more such yada yada at the Belem COP to stir up the faithful.

For the rest of us, let’s remember the saying attributed to George Santayana: “Those who cannot remember the past are condemned to repeat it.”  For example, Melissa belongs to a class of strong Atlantic hurricanes going back almost a century.  Here’s a table of them along with peak sustained winds and the CO2 levels at the time.

Hurricane   Year   Peak Wind (mph)   CO2 (ppm)
“Cuba” 1932 175 308
“Labor Day” 1935 185 310
Janet 1955 175 314
Camille 1969 175 325
Anita 1977 175 334
David 1979 175 337
Allen 1980 190 339
Gilbert 1988 185 352
Andrew 1992 175 356
Mitch 1998 180 367
Wilma 2005 185 380
Rita 2005 180 380
Katrina 2005 175 380
Dean 2007 175 384
Felix 2007 175 384
Irma 2017 180 407
Maria 2017 175 407
Dorian 2019 185 411
Milton 2024 180 425
Melissa 2025 185 428

Note that all twenty hurricanes had peak winds ranging from 175 to 190 mph, going back to 1932.  Meanwhile CO2 has increased from 308 ppm to 428 ppm (2025 year to date).  Note also the absence of such storms in the decade from 2007 to 2017, despite CO2 adding 23 ppm in that period.  The correlation between high wind speeds and CO2 concentrations is an insignificant 0.18.

Then there is the Global Accumulated Cyclone Energy (ACE) report that includes the effects of both minor and major storms, combining strength and frequency.

I added an overlay of CO2 to illustrate how unlikely a link between CO2 and storms is.  Finally, from Roger Pielke Jr., a chart showing ACE strength per hurricane:

The charts show that the average ACE per hurricane is about 16, in the North Atlantic since 1900 and globally since 1980.  The trend is not upward, and the North Atlantic currently appears lower than in the past.

See Also:

Devious Climate Attribution Studies

 

Solid Arctic Ice Recovery October 2025

The animation shows the rapid growth of Arctic ice extent during October 2025, from day 274 to day 304, yesterday.  For all of the fuss over the September minimum, little is said about Arctic ice growing 3M km2, that’s 3 Wadhams, in one month!  Look on the left (Russian side) at the complete closing of the Northern Sea Route for shipping.

The graph below shows 2025 compared to the 19-year average (2006 to 2024 inclusive), to SII (Sea Ice Index), and to some notable years.

This year October added 2.6M km2 from the end of September, compared to an average October increase of 3.4M km2.  The first two weeks were above average, before the refreezing rate slowed, ending in a deficit of ~0.5M km2.  In other words, end-of-October ice extents were four days behind the average, according to MASIE.  SII started the same, but tracked lower in the second half of October.

The table below shows the distribution of ice in the Arctic Ocean basins.

Region  2025 Day 304  Day 304 Ave.  2025-Ave.  2007 Day 304  2025-2007
 (0) Northern_Hemisphere 7867621 8401977 -534356 8175072 -307451
 (1) Beaufort_Sea 975681 937777 37904 1038126 -62444
 (2) Chukchi_Sea 683493 466318 217175 242685 440809
 (3) East_Siberian_Sea 1087032 952325 134707 835071 251961
 (4) Laptev_Sea 849204 848501 703 887789 -38585
 (5) Kara_Sea 137515 478870 -341355 311960 -174445
 (6) Barents_Sea 1466 81088 -79621 52823 -51356
 (7) Greenland_Sea 351374 418343 -66969 443559 -92184
 (8) Baffin_Bay_Gulf_of_St._Lawrence 128777 247258 -118481 289374 -160596
 (9) Canadian_Archipelago 568663 740190 -171526 817220 -248557
 (10) Hudson_Bay 8609 66501 -57892 48845 -40236
 (11) Central_Arctic 3051977 3153485 -101508 3206345.33 -154368

Overall ice extent was 534k km2 (6%) below average.  Surpluses appear in the Beaufort, Chukchi and East Siberian seas, while sizeable deficits are shown elsewhere, especially Kara, Baffin Bay, the Canadian Archipelago and the Central Arctic.
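The hemispheric deficit quoted above follows directly from the first row of the MASIE table (a quick arithmetic check):

```python
# Check the Northern Hemisphere deficit from the MASIE table above
extent_2025 = 7_867_621     # km^2, day 304 of 2025
extent_ave  = 8_401_977     # km^2, day-304 average (2006-2024)

deficit = extent_ave - extent_2025
pct_below = 100 * deficit / extent_ave
print(deficit)              # 534,356 km^2
print(round(pct_below, 1))  # ~6.4 percent below average
```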

Illustration by Eleanor Lutz shows Earth’s seasonal climate changes. If played in full screen, the four corners present views from top, bottom and sides. It is a visual representation of scientific datasets measuring ice and snow extents.

 

 

With Wind and Solar More Is Less

At their Energy Bad Boys website Mitch Rolling and Isaac Orr published More is Less with Wind and Solar.  Excerpts in italics with my bolds and added images.

Capacity Values of Wind and Solar Plummet as Penetration Increases

With all the talk about needing to dramatically increase power supplies to meet the growing demand from data centers, as well as for anticipated electric vehicle adoption and other electrification efforts, it’s time to highlight one glaring reality of filling that demand with wind and solar—the reality of diminishing returns.

As in: the more intermittent capacity you add, the less capacity value you get from it. When it comes to wind and solar, more is less.

How it Works

Electric grids and utilities across the country assign reliability ratings to wind and solar resources—called capacity values—and these values diminish to almost zero as the system adds more wind and solar.

This reality is lost on—or intentionally obfuscated by—many wind and solar advocates who like to brag about current high capacity values for wind and solar without mentioning the fact that these values plummet as you add more wind and solar to the grid.

What Are Capacity Values?

The term “capacity value” is defined by the National Renewable Energy Laboratory (NREL) as “the contribution of a power plant to reliably meeting demand. Capacity value is the contribution that a plant makes toward the planning reserve margin…”

Basically, capacity values are the percentages of total installed capacity for each energy source that electric grids believe they can reliably count on to meet demand. They reflect the idea that while every energy source has a maximum capacity it can reach under ideal conditions, not every energy source can reliably perform at that rating at any given time, when needed.

Limitations of current capacity value methods

Current methodologies for calculating wind and solar capacity values have several limitations that need to be considered when referencing them as reliability metrics.

The first limitation is that they are dependent on existing resources already on the grid. This means that if the generation makeup of the grid changes dramatically, as is happening on power systems across the country, this will have a significant negative impact on the capacity values of wind and solar.

Furthermore, they are also dependent on current load profiles, which are also anticipated to change in major ways with the emergence of data center load growth.

Finally, many capacity values are based on average performance, and not during the highest stress hours for maintaining system reliability, such as peak demand or net peak demand (demand minus wind and solar generation). As a result, capacity values may not assess the reliability of wind and solar when they are needed most, which can lead to an overreliance on them for meeting peak and net peak demand.

Wind and solar capacity values plummet as the system adds more

Now that the basics are out of the way, let’s discuss the reality that many wind and solar advocates avoid: that every megawatt of wind and solar added to the system is less reliable than the one before it.

Wind and solar capacity values fall as more of these resources are added to the grid because their output patterns are often correlated—the sun sets over an entire continent, or concentrated wind turbines experience a wind drought—and they are non-dispatchable. As a result, adding more of the same variable resource reaches a point where the resource does not meaningfully contribute to reliability.

Referring back to the methods above, this means that the more wind and solar you add, the less the load can increase on the system or the less perfect capacity can be removed, thus increasing the denominator of the equation at a higher rate than the numerator.

This is reflected by diminishing capacity values for wind and solar in several major regional transmission operators (RTOs) in the country, which we detail below.
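A toy model illustrates the mechanism. The hourly load and solar profiles below are invented for illustration only (they come from no RTO): early solar additions shave the midday peak, but once the residual peak shifts to evening hours, each further megawatt contributes almost nothing to meeting peak demand.

```python
# Toy illustration of diminishing capacity value (profiles are invented).
# Capacity value here = reduction in peak net load per MW of solar added.
load = [75, 73, 72, 71, 72, 74, 80, 88, 94, 98, 100, 101,
        102, 102, 101, 99, 97, 96, 98, 97, 93, 88, 82, 78]       # MW, by hour
solar = [0]*6 + [0.1, 0.3, 0.5, 0.7, 0.85, 0.95, 1.0, 1.0,
                 0.95, 0.85, 0.7, 0.5, 0.3, 0.1] + [0]*4         # output per MW

def capacity_value(mw):
    """Peak-load reduction per MW of installed solar capacity."""
    net_peak = max(l - mw * s for l, s in zip(load, solar))
    return (max(load) - net_peak) / mw

for mw in (5, 20, 100):
    print(mw, round(capacity_value(mw), 2))
# Each successive block of solar earns a lower capacity value than the last,
# because the remaining peak migrates to hours with little or no sun.
```

The same logic, in far more elaborate form, is what drives the declining ELCC curves the RTOs publish below.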

Map of Diminishing Capacity Values for Major RTOs

For a summary comparison, the map above shows the current capacity values of wind and solar in major RTOs across the country and how they are all expected to decline in the future as more are added to the system.

Midcontinent Independent System Operator (MISO)

In almost every season, MISO’s wind and solar capacity values plummet, reaching as low as 0.4 percent for solar in winter and 8.6 percent for wind in fall by 2043. The one exception is wind in the summer months, which actually increases from 8 percent in 2025/26 to 11.5 percent in 2030 before falling again to 8.9 percent by 2043. Still not a great reliability rating compared to coal, gas, hydro, and nuclear, which range from 64 percent to 95 percent in every single season.

In its 2024 Regional Resource Assessment, MISO explains that even though wind and solar will make up the vast majority of installed capacity in the future, reliable/accredited capacity will still be made up of primarily thermal resources.

Pennsylvania-New Jersey-Maryland (PJM)

PJM shows a similar story. While onshore wind and offshore wind begin at 41 percent and 68 percent, respectively, in the 2027/28 planning year, these resources drop to 19 percent and 26 percent by 2035/36.  Solar already starts at a low capacity value, dropping from 7-9 percent in 2027/28 to 6-7 percent by 2035/36. PJM explains:

-The ratings for the two solar classes remain stable at low values during the entire period due to the high level of winter risk

-The ratings for the two wind classes decrease significantly due to a gradual shift in winter historical performance patterns driving the winter risk in the model (as shown in the above tables)

Electric Reliability Council of Texas (ERCOT)

ERCOT shows a similar effect as more wind and solar are added to the system, as the same trend can be seen in the following charts.  As you can see, as more solar is added to the grid, the ELCCs drop to the 0-2 percent range, even with significant amounts of wind capacity on the grid.  Similarly, as more wind is added to the ERCOT system, wind ELCCs drop into the 5-10 percent range.

We hear a lot about the complementary nature of wind and solar generation in ERCOT. While this is true to some extent, these results show that even this has its limits when relying on large amounts of wind and solar capacity for meeting demand because complementary generation won’t always be the case, and there will be times when both resources perform poorly at the same time.

Southwest Power Pool (SPP)

For Southwest Power Pool, solar values are fairly high at the moment, ranging from 55 percent to 74 percent, because it has very few solar resources on the grid, while wind is much lower, ranging from 19 percent to 26 percent, because it is already saturated with wind resources.

Conclusion

The trend is simple enough to catch—the more wind and solar are added, the less valuable every additional MW becomes to the grid. The New York ISO (NYISO) makes the case clear in its 2023-2042 System & Resource Outlook report:

One complex challenge that needs to be considered beyond 2040 is the relative ineffectiveness of new solar and wind resources to contribute during periods of reliability risk after a significant amount of capacity has been built.

This is an important reality to remember when wind and solar advocates try to present intermittent resources as reliable energy sources that are able to meet the power demand needs of the future.

The fact is that not only are wind and solar already intermittent and unreliable,
but they have diminishing returns as you add more of them.

As usual, we end with the recommendation of not only keeping our existing thermal fleet in operation for as long as possible, because they are often the most affordable and reliable power plants on the system, but also bringing back recently retired facilities and building new ones on top of it.

IPCC Global Warming Claims Not Only Wrong, But Impossible

Climate as heat engine. A heat engine produces mechanical energy in the form of work W by absorbing an amount of heat Qin from a hot reservoir (the source) and depositing a smaller amount Qout into a cold reservoir (the sink). (a) An ideal Carnot heat engine does the job with the maximum possible efficiency. (b) Real heat engines are irreversible, and some work is lost via irreversible entropy production TδS. (c) For the climate system, the ultimate source is the Sun, with outer space acting as the sink. The work is performed internally and produces winds and ocean currents. As a result, Qin = Qout.

Ad Huijser recently published a paper explaining why IPCC claims about global warming are contradicted by observations of our Earth thermal system, including a number of internal and external subsystems. The title Global Warming and the “impossible” Radiation Imbalance links to the pdf. This post is a synopsis presenting the elements of his research findings, based on the rich detail, math and references found in the document. Excerpts in italics with my bolds and added images. H/T Kenneth Richard and No Tricks Zone.

Abstract

Any perturbation in the radiative balance at the top of the atmosphere (TOA) that induces a net energy flux into or out of Earth’s thermal system will result in a surface temperature response until a new equilibrium is reached. According to the Anthropogenic Global Warming (AGW) hypothesis, which attributes global warming solely to rising concentrations of greenhouse gases (GHGs), the observed increase in Earth’s radiative imbalance is entirely driven by anthropogenic GHG emissions.

However, a comparison of the observed TOA radiation imbalance with the assumed GHG forcing trend reveals that the latter is insufficient to account for the former. This discrepancy persists even when using the relatively high radiative forcing values for CO2 adopted by the Intergovernmental Panel on Climate Change (IPCC), thereby challenging the validity of attributing recent global warming exclusively to human-caused GHG emissions.

In this paper, Earth’s climate system is analyzed as a subsystem of the broader Earth Thermal System, allowing for the application of a “virtual balance” approach to distinguish between anthropogenic and other, natural contributions to global warming. Satellite-based TOA radiation data from the CERES program (since 2000), in conjunction with Ocean Heat Content (OHC) data from the ARGO float program (since 2004), indicate that natural forcings must also play a significant role. Specifically, the observed warming aligns with the net increase in incoming shortwave solar radiation (SWIN), likely due to changes in cloud cover and surface albedo. Arguments suggesting that the SWIN trend is merely a feedback response to GHG-induced warming are shown to be quantitatively insufficient.

This analysis concludes that approximately two-thirds of the observed global warming must be attributed to natural factors that increase incoming solar radiation, with only one-third attributable to rising GHG-concentrations. Taken together, these findings imply a much lower climate sensitivity than suggested by IPCC-endorsed Global Circulation Models (GCMs).

Introduction

On a global scale and over longer periods of time, the average surface temperature of our climate system reacts similarly to that of a thermal system such as a pot of water on a stove: when the incoming heat is steady and below boiling, the system stabilizes when the heat loss (via radiation and convection) equals the input. Analogously, Earth’s surface-atmosphere interface is the main absorber and emitter of heat. Reducing the “flame” (solar input) leads to cooling, regardless of the total heat already stored in the system. The system’s average temperature will drop as well, as soon as the heating stops. So, no sign of any “warming in the pipeline” for such a simple system.

The two transport mechanisms, air and ocean, operate on different timescales. Air has a low specific heat capacity, but high wind speeds make it a fast medium for heat transfer. Oceans, by contrast, have a high specific heat capacity but move more slowly. The Atlantic Meridional Overturning Circulation (AMOC), with the well-known Gulf Stream carrying warm water from south to north, can reach speeds up to about 3 m/s. But its warm current remains largely confined to surface layers due to limited solar radiation penetration and gravity-induced stratification. With a path length of up to 8,000 km and an average speed of 1.5 m/s, ocean heat takes approximately 2 months to travel from the Gulf of Mexico to the Arctic. This is comparable to the 1 to 2 months delay between solar input and temperature response in the annual cycle, suggesting that oceanic heat transport is part of the climate system’s normal operation. Climate adaptation times from anthropogenic influences are estimated at 3 to 5 years. If “warming in the pipeline” exists, it must be buried in the much colder, deeper ocean layers.
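The two-month transit estimate is easy to verify from the paper's own figures (8,000 km path, 1.5 m/s average speed):

```python
# Transit time for ocean heat, Gulf of Mexico to the Arctic
path_m = 8_000e3        # ~8,000 km path length (from the paper)
speed  = 1.5            # m/s, average current speed (from the paper)

days = path_m / speed / 86_400   # 86,400 seconds per day
print(round(days))               # ~62 days, i.e. roughly two months
```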

ARGO float data since 2004 show substantial annual increases in Ocean Heat Content (OHC), sometimes expressed in mind-boggling terms such as 10²² joules per year (see Fig.1). While this may sound alarming [1,2], when converted to flux, it represents less than 1 W/m², a mere 0.6% of the average 160 W/m² of absorbed solar energy at the surface. All the rest is via evaporation, convection and ultimately by radiation sent back to space after globally being redistributed by wind and currents.
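The flux conversion above can be checked directly. The ocean surface area used below is a standard reference value, not a number given in the excerpt; dividing by the whole Earth surface (~5.1e14 m²) instead would give roughly 0.6 W/m²:

```python
# Convert ~1e22 J/yr of ocean heat uptake into an average surface flux
ohc_joules_per_year = 1e22       # ARGO-era annual OHC increase (order of magnitude)
seconds_per_year = 3.156e7
ocean_area_m2 = 3.6e14           # standard value, ~71% of Earth's surface

flux = ohc_joules_per_year / seconds_per_year / ocean_area_m2
print(round(flux, 2))                 # under 1 W/m^2
print(round(100 * flux / 160, 1))     # a fraction of a percent of the
                                      # 160 W/m^2 absorbed at the surface
```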

Fig. 1. Ocean Heat Content (OHC) anomaly from 0–2000 meters over time, shown as 3-month and annual moving averages (CMAA), along with their time derivatives. Notable are the relatively large variations, likely reflecting the influence of El Niño events. The average radiative imbalance at the top of the atmosphere (TOA), estimated at 0.85 W/m², corresponds approximately to the midpoint of the time series (around 2015). Data: https://www.ncei.noaa.gov/access/global-ocean-heat-content/basin_heat_data.html [7].
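The “less than 1 W/m²” conversion can be reproduced with round numbers; the Earth-surface area and seconds-per-year below are standard values I am supplying, not figures from the paper:

```python
# Convert the ARGO OHC trend of order 1e22 J/year into an average flux
# over the whole Earth surface.
SECONDS_PER_YEAR = 3.156e7
EARTH_AREA_M2 = 5.1e14            # total surface area of the Earth
ohc_trend = 1e22                  # J/year, order of magnitude from Fig. 1

flux = ohc_trend / SECONDS_PER_YEAR / EARTH_AREA_M2
print(f"{flux:.2f} W/m^2")        # ~0.62 W/m^2, indeed below 1 W/m^2

# Relative to the 160 W/m^2 of absorbed solar energy, using the rounded 1 W/m^2:
print(f"{1 / 160:.3%}")           # 0.625%, the text's "mere 0.6%"
```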

This raises the question: Why would extra GHGs, which have only a limited effect on the 99.4% of the outgoing flux, have affected this 0.6% residue over a couple of decades in such a way that we should be scared of all that “warming in the pipeline” that Hansen et al. [2] warn us about? In the following sections, we examine data showing that observed trends in the radiation imbalance and OHC are better explained by the internal dynamics of the Earth’s thermal system and natural forcings, such as increasing solar radiation, rather than solely by GHG emissions.

Estimating our climate’s thermal capacity CCL

The rather fast responses of our climate indicate that the thermal capacity of our climate must be much less than the capacity of the entire Earth thermal system. This climate heat capacity CCL depends on how sunlight is absorbed, how that heat is transferred to the atmosphere, and which part of it is stored in either land or ocean.

Over continental land areas, sunlight is absorbed only at the very surface, where the generated heat is also in direct contact with the atmosphere. Seasonal temperature variations don’t penetrate more than 1 to 2 meters deep on average, and as a consequence, storage of heat is relatively small. Sunlight can penetrate pure water to several hundred meters deep, but in practice, penetration in the oceans is limited by scattering and absorption by organic and inorganic material. A good indication is the depth of the euphotic zone, home to algae and phytoplankton that need light to grow. In clear tropical waters, where most of the sunlight hits our planet, this zone is 80 to 100 m deep [12].

Another important factor in our climate’s heat capacity is how this ocean layer of absorbed heat is in contact with the atmosphere. Tides, wind, waves and convection continuously mix the top layer of our oceans, through which heat is easily exchanged with the atmosphere. This mixed layer is typically on the order of 25–100 m, depending on season, latitude and the definition of “well mixed” [13]. Below this ~100 m thick top layer, where hardly any light is absorbed and the mixing process has stopped, ocean temperatures drop quickly with depth. As the oceans’ vertical temperature gradient at that depth supports neither conductive nor convective heat flow upward, climate processes at the surface thus become isolated from the rest of the Earth’s thermal system.

Figure 4, with the change in Ocean Heat Content vs. depth over the period 2004–2020 obtained via the ARGO floats [6,14], offers a good indication of the average climate capacity CCL. It shows the top layer with a high surface temperature change, consistent with the observed global warming rate of about 0.015 K/year, and a steep cut-off at about 100 m depth, in line with the explanation above. Below the top layer, temperature effects are small and difficult to interpret, probably due to averaging over all kinds of temperature/depth profiles in the various oceans, ranging from tropical to polar regions.

In the case of a “perfect” equilibrium (N = 0, dTS/dt = 0), all of the sunlight absorbed down to about 100 m depth has to leave again through the ocean-atmosphere interface. However, the deep oceans are still very cold, with a stable, negative temperature gradient towards the bottom. This gradient will nevertheless push some of the absorbed heat downwards. Therefore, even at a climate equilibrium with dTS/dt = 0, we will observe N > 0. Given the large heat capacity of the total ocean volume, that situation will not change easily, as it takes about 500 years with today’s N ≈ +1 W/m2 to raise its average temperature just 1°C.
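The “about 500 years” figure is easy to verify with textbook values for the ocean’s mass and the specific heat of seawater; these constants are my assumptions, not given in the paper:

```python
# Time for a +1 W/m^2 imbalance over the ocean surface to warm the
# entire ocean volume by 1 K.
OCEAN_MASS_KG = 1.4e21       # total ocean mass, standard textbook value
CP_SEAWATER = 4.0e3          # J/(kg*K), approximate specific heat
OCEAN_AREA_M2 = 3.6e14       # ocean surface area
SECONDS_PER_YEAR = 3.156e7

energy_per_kelvin = OCEAN_MASS_KG * CP_SEAWATER            # ~5.6e24 J
energy_per_year = 1.0 * OCEAN_AREA_M2 * SECONDS_PER_YEAR   # J/year at N = 1 W/m^2
years = energy_per_kelvin / energy_per_year
print(f"{years:.0f} years")                                # ~493 years, close to 500
```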

The Earth’s climate system can thus be regarded as a subset of the total Earth’s thermal system (ETS) responding to different relaxation times. The climate relaxes to a new equilibrium within 3–5 years, while the deeper oceans operate on multidecadal or even longer timescales, related to their respective thermal capacities C for the ETS, and CCL for the climate system.

The (near) “steady state” character of current climate change

Despite the ongoing changes in climate, the current state can be considered a “near” steady-state. The GHG forcing trend has been pretty constant for decades. Other forcings, primarily in the SW channel, are also likely to change slowly and can be approximated as having constant trends over decadal timescales. Similarly, despite yearly fluctuations, the surface temperature trend has remained fairly stable since 2000.

This analysis strengthens the conclusion that the increase in both N(t) and N0(t) are not a direct consequence of greenhouse gas emissions, but rather of enhanced forcing in the SW-channel.

The preceding analysis highlights how the IPCC’s assumptions diverge significantly from observed reality. While the IPCC model components may collectively reproduce the observed warming trend, they fail to individually align with key observational data, in particular the Ocean Heat Content.

Figure 6 also illustrates that changes in cloudiness are more pronounced in the Northern Hemisphere, especially at mid-latitudes and over Western Europe. For example, the Dutch KNMI weather station at Cabauw (51.87°N, 4.93°E), where all ground-level radiation components are monitored every 10 minutes, recorded an increase in solar radiation of almost +0.5 W/m²/year since 2000 [26]. Applying the 0.43 net-CRE factor (conservative for this latitude), we estimate a local forcing trend dFSW/dt ≈ 0.2 W/m²/year. This is an order of magnitude larger than the GHG forcing (0.019–0.037 W/m²/year). Even with the IPCC values, GHGs can account for only about 16% of the warming at this station. The average temperature trend for this rural station, located in a polder largely covered by grassland, is ~ +0.043 K/year, almost 3x the global average. Neither this nor the other trends mentioned above can be adequately explained by the IPCC’s GHG-only model.
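The Cabauw numbers can be reproduced directly; all inputs below are the article’s own figures:

```python
# Local SW forcing trend at Cabauw and the implied GHG share of warming.
sw_trend = 0.5          # W/m^2/year measured rise in solar radiation
net_cre = 0.43          # net cloud radiative effect factor from the text
f_sw = round(sw_trend * net_cre, 1)
print(f"dF_SW/dt ~ {f_sw} W/m^2/yr")   # ~0.2, as quoted

ghg_trend = 0.037       # W/m^2/year, upper IPCC GHG forcing trend
share = ghg_trend / (f_sw + ghg_trend)
print(f"GHG share: {share:.0%}")       # ~16%, matching the text
```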

The IPCC places strong emphasis on the role of climate feedbacks in amplifying the warming effect of greenhouse gases (GHGs) [8]. These feedbacks are considered secondary consequences of Anthropogenic Global Warming, driven by the initial temperature increase from GHGs. Among them, Water-Vapor feedback is the most significant. A warmer atmosphere holds more water vapor (approximately +7%/K) and since water vapor is a potent GHG, even a small warming from CO2 can amplify itself through enhanced evaporation.

Other feedbacks recognized by the IPCC include Lapse Rate, Surface Albedo, and Cloud feedbacks [8], all of which are inherently tied to the presence and behavior of water in its various phases. Therefore, these feedbacks are natural responses to temperature changes, regardless of the original cause of warming, be it GHGs, incoming solar variability, or internal effects. They are not additive components to natural climate sensitivity, as treated by the IPCC, but rather integral parts of it [4].

This analysis reinforces a fundamental point: climate feedbacks are not external modifiers of climate sensitivity; rather, they are inherent to the system. Their combined effect is already embedded in the climate response function. The IPCC’s treatment of feedbacks as additive components used to “explain” high sensitivities in GCMs is conceptually flawed. Physically, Earth’s climate is governed by the mass balance of water in all its phases: ice, snow, liquid, vapor, and clouds. The dynamics between these phases are temperature-sensitive, and they constitute the feedback processes. Feedbacks aren’t just add-ons to the climate system, they are our climate.

Ocean Heat Content increase

In the introduction, the “heat in the pipeline” concept, the idea that heat stored in the deep, cold ocean layers could later resurface to significantly influence surface temperatures, was challenged. Without a substantial decrease in surface temperatures to reverse ocean stratification, this seems highly unlikely. Large and rapid temperature fluctuations during the pre-industrial era, with rates of up to plus, but also minus, 0.05 K/year over several decades as recorded in the Central England Temperature (CET) series [27], more than three times the rate observed today, further undermine the notion of a slow-release heat mechanism dominating surface temperature trends.

Ocean Heat Content must be related to solar energy, the prime source of energy heating the Earth thermal system. Almost 1 W/m2 of that 240 W/m2 solar flux entering the system on average is presently remaining in the oceans. This is an order of magnitude larger than the estimated 0.1 W/m2 of geothermal heat upwelling from the Earth’s interior [11]. Extra greenhouse gases don’t add energy to the system, but merely obstruct cooling. As shown in Section 5.3, this accounts for a radiation imbalance offset τ dFGHG/dt, equivalent to a contribution to dOHC/dt of only about 0.08 W/m2.
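The ~0.08 W/m² figure is consistent with τ·dFGHG/dt if τ is taken near the middle of the 3-to-5-year relaxation range quoted earlier; the τ = 4 years below is my assumption, used only to illustrate the arithmetic:

```python
# GHG contribution to dOHC/dt as tau * dF_GHG/dt.
tau_years = 4.0        # assumed: midpoint of the 3-5 year relaxation time
dF_ghg = 0.019         # W/m^2 per year, GHG forcing trend from the text
offset = tau_years * dF_ghg
print(f"{offset:.3f} W/m^2")   # 0.076, consistent with the quoted ~0.08
```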
As redistribution of “heat in the pipeline” will not change the total OHC, roughly 3/4 of the observed positive trend in OHC must, at a minimum, be attributed to rising solar input. The oceans act in this way as our climate system’s thermal buffer: they mitigate warming during periods of increased solar input and dampen cooling when solar input declines, underscoring their critical role in Earth’s climate stability.

The strong downward slope in the OHC before 1970 confirms the observation in Section 5.4, expressed by (12), that around the turning point t = ζ, the forcing trend in the SW-channel had to be negative. Moreover, the rather slowly increasing 700–2000 m OHC data in Fig.7 indicate that most of the fluctuations have occurred relatively close to the surface. Heat from, e.g., seafloor volcanism as “warming from below” would be expected to show up more prominently in this 700–2000 m OHC profile. Although we cannot rule out geothermal influences [29], this observation makes them less likely.

ERBE measurements of radiative imbalance.

As the OHC seems to be primarily coupled to SWIN, the most plausible cause would involve rapid changes in SW-forcing. A sudden drop in cloud cover might explain such changes, but no convincing observations could be found for the 1960–1980 period. Alternatively, changes in the latitudinal distribution of cloud cover, as illustrated by Fig.6, can result in similar radiative impacts due to the stark contrast between a positive radiation imbalance in the Tropics and a very negative imbalance at the Poles. The ENSO oscillations in the Pacific Ocean around the equator are a typical example of such influences, as also illustrated in Fig.3 [10]. Shifts in cloud distribution are linked to changes in wind patterns and/or ocean currents, reinforcing the idea, as indicated in Section 1, that even minor disruptions in horizontal heat transport can trigger major shifts in our climate’s equilibrium [29, 30]. Sharp shifts in Earth’s radiation imbalance like the one around 1970, as inferred from Fig.7, may even represent one of those alleged tipping points. But in this case, certainly not one triggered by GHGs. Ironically, some climate scientists in the early 1970s predicted an impending (Little) Ice Age [31].

While additional data (e.g. radiation measurements) are needed to draw firm conclusions, the available evidence already challenges the prevailing GHG-centric narrative. GHG emissions, with their near-constant forcing rate, cannot account for either the timing or the magnitude of historical OHC trends, contrary to what NOAA explicitly suggests [32]. Similarly, claims by KNMI that “accelerations” in radiation imbalance trends are GHG-driven [1] are not supported by data. And finally, the alarms around “heat in the pipeline” must be exaggerated, if not totally misplaced. Given the similarities between the radiation imbalance and GHG forcing rates around 1970 and today’s situation, we must conclude that this assumed heat apparently manifested itself at that time as “cooling in the pipeline”.

However, warnings of continued warming even if we immediately stop emitting GHGs are nevertheless absolutely justified. Only, it isn’t heat in the pipeline from historical emissions that will boost our temperatures. Warming will continue as long as natural forcings keep acting; these are already today’s dominant drivers behind global temperature trends. And unfortunately, they will not be affected by the illusion of stopping global warming created by implementing Net-Zero policies.

Summary and conclusions

This analysis demonstrates that a global warming scenario driven solely by greenhouse gases (GHGs) is inconsistent with more than 20 years of observations from space and of Ocean Heat Content. The standard anthropogenic global warming (AGW) hypothesis, which attributes all observed warming to rising GHG concentrations, particularly CO2, cannot explain the observed trends. Instead, natural factors, especially a long-term increase in incoming solar radiation, appear to play a significant and likely dominant role in global warming since the mid-1970s.

The observed increase in incoming solar radiation cannot be accounted for by the possible anthropogenic side effects of Albedo- and Cloud-feedback. All evidence points to the conclusion that this “natural” forcing, with a trend of about 0.035 W/m2/year, equals or even exceeds the greenhouse-gas-related forcing of about 0.019 W/m2/year. Based on these values, only one-third of the observed temperature trend can be of anthropogenic origin. The remaining two-thirds must stem from natural changes in our climate system, or more broadly, in our entire Earth’s thermal system.
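The one-third/two-thirds split follows from the two forcing trends just quoted:

```python
# Anthropogenic share implied by the two forcing trends in the summary.
ghg = 0.019       # W/m^2/year, GHG-related forcing trend
natural = 0.035   # W/m^2/year, "natural" SW forcing trend
share = ghg / (ghg + natural)
print(f"{share:.2f}")   # 0.35, i.e. roughly one-third anthropogenic
```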

Moreover, the observed increase in Earth’s radiation imbalance appears to be largely unrelated to GHGs. Instead, it correlates strongly with natural processes driving increased incoming solar radiation. Claims of “acceleration” in the radiation imbalance due to GHG emissions are not supported by the trend in accurately measured GHG concentrations. If any acceleration in global warming is occurring, it is almost certainly driven by the increasing flux of solar energy—an inherently natural phenomenon not induced by greenhouse gases.

In summary, this analysis challenges the notion that GHGs are the primary drivers of recent climate change. It underscores the importance of accounting for natural variability, especially in solar input, when interpreting warming trends and evaluating climate models.

Note: Dr. Ad Huijser, physicist and former CTO of Philips and director of the Philips Laboratories, describes himself as an “amateur climatologist”. However, his approach to climate physics is quite professional, I think.

See Also: 

Our Atmospheric Heat Engine


Texans, Don’t Mess With Emissions Reductions

Gregory Wrightstone writes at Lone Star Standard; Texans should stop spending on fake climate crisis.  Excerpts in italics with my bolds and added images.

Boasting that Texas “has built more wind power than any state and is a top contender for the most solar power,” a Texas Tribune article bemoans a decline in federal subsidies for such energy sources and a potential loss of “billions in investments and thousands of jobs.”

Interestingly, the writers focus on business interests of the climate industrial complex and ignore the stated reason for subsidies – to avoid supposed catastrophic global warming. Planetary health – purported to be threatened by industrial emissions of carbon dioxide (CO2) – was not even an afterthought in the handwringing over wind and solar financial fortunes.

Regardless, Texans face no such peril and the billions already spent on “green” obsessions in the Lone Star State are for naught. “There is no evidence of a climate crisis in Texas and none can be reasonably expected,” says a report, “Texas and Climate Change,” recently published by the CO2 Coalition, Fairfax, Virginia.

Both the Fifth National Climate Assessment (NCA5) and a Texas A&M University report predict harm to Texans from human-induced warming. Climate change is “putting us at risk from climate hazards that degrade our lands and waters, quality of life, health and well-being, and cultural interconnectedness,” according to NCA5.

In contradicting those findings, the CO2 Coalition analyzed data from the National Oceanic and Atmospheric Administration (NOAA), U.S. Environmental Protection Agency (EPA), NASA, U.S. Department of Agriculture, reports published in peer-reviewed journals and others.

“The temperature in Texas has shown no unprecedented or unusual warming, despite increasing atmospheric carbon dioxide,” says the CO2 Coalition report. “Recent temperatures in Texas are similar to those found more than 100 years ago.”

In fact, the annual number of 100-degree days in Texas has an overall decreasing trend.

While some have claimed a connection between climate change and July’s tragic flooding in central Texas, no scientific basis for such a link exists. Though extreme, the flooding was not a first.

According to Harris County meteorologist Jeff Lindner, the July 4th flood of the Guadalupe River at Kerrville peaked at 34.29 feet, making it the third-highest flood on record for the city. The 2025 flood crest trails the 39.0-foot flood crest from 1932 and the 37.72-foot flood crest from 1987.

“Over the last 28 years, flash floods, while varying greatly from year to year, have actually been in slight decline,” the CO2 Coalition report found.

Precipitation data from the U.S. Historical Climatology Network indicate that Texas has experienced a very slight increase (1 to 2 inches annually) in precipitation since 1895, which is contrary to the predictions of significant increases in rainfall from climate alarmists. If anything, the modest increase in Texas precipitation should have beneficial effects on the state’s agricultural yields.

As for drought – the primary scourge of crops throughout the world – government data show no discernible trend in the severity of arid spells in Texas, in direct contradiction to claims of increasing drought by both the Texas A&M report and NCA5.

Similarly rebutting the fearmongering of alarmists, the CO2 Coalition report found no increasing trends for wildfires, hurricanes and tornadoes.

With respect to tornadoes, the U.S., including Texas, has seen a decades-long decline in the most violent twisters. The likely reason is that a warming Earth – a natural phenomenon following the end of the Little Ice Age – reduces the temperature differentials between equatorial and higher-latitude regions that drive storms.

Like the rest of the world, Texas has experienced record-breaking growth in crop production over the last several decades. This is no coincidence, as research shows every increase of 1 part per million (ppm) in CO2 concentration boosts yields of corn, soybeans and wheat by 0.4%, 0.6% and 1%, respectively. Based on these rates, the 140-ppm increase in CO2 since the beginning of the Industrial Revolution has led to increases of 56%, 84% and 140% in corn, soybeans and wheat, respectively.
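The yield percentages are straightforward products of the per-ppm rates and the 140-ppm rise; the 0.6%/ppm soybean rate used below is inferred as consistent with the quoted 84% (84 / 140 = 0.6):

```python
# Crop yield increases from a 140 ppm rise in CO2 at the stated rates.
co2_rise_ppm = 140
rates_pct_per_ppm = {"corn": 0.4, "soybeans": 0.6, "wheat": 1.0}
for crop, rate in rates_pct_per_ppm.items():
    print(f"{crop}: {rate * co2_rise_ppm:.0f}%")
# corn: 56%, soybeans: 84%, wheat: 140%
```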

CO2 is necessary for life on Earth, and reducing emissions of the gas would be harmful to vegetation, including forests, grasslands and agricultural crops.

Even if Texas could stop emitting CO2, the amount of atmospheric warming averted would be only 0.0093 degrees and 0.0237 degrees by 2050 and 2100, respectively. These changes are negligible and cannot be felt or measured.

If the reason for spending on Texas climate policy were to enrich wind and solar developers, then, yes, lamentations over the demise of subsidies are understandable. However, there is no basis for spending a cent on a fake crisis – and certainly not on technologies that offer no benefit.

Anti-Tornado Tech Better Than Mitigation?

Gregory Wrightstone is a geologist; executive director of the CO2 Coalition, Fairfax, Va.; author of “Inconvenient Facts: The Science That Al Gore Doesn’t Want You to Know” and “A Very Convenient Warming: How modest warming and more CO2 are benefiting humanity.”

CO2 Coalition Texas Report is here.  My synopsis is:

No Climate Crisis in Texas

An Insider’s Story How Climatism Subverted Reason

Mark Keenan explains in his American Thinker article The Climate Creed: How Fear Replaced Science.  Excerpts in italics with my bolds and added images.

For decades, politicians and pundits have told us that “the science is settled.” Those four words have become a shield for power and a sword against dissent. But real science thrives on inquiry and investigation, not the suppression of it. What has emerged instead is not science at all, but a kind of secular faith — one that demands belief in man-made CO2-induced climate catastrophe and punishes heresy. Yet, many scientists, including scientists who have worked within the climate bureaucracy, know how fragile the claim that “climate change is caused by CO2” really is.

As a former scientist with the UK Department of Energy and Climate Change and later a technical expert for United Nations Environment, I saw firsthand how the modern climate narrative was shaped — not by evidence, but by politics. Uncertainty wasn’t treated as a question to investigate; it was treated as a threat to suppress. Entire careers and institutions came to depend on preserving a preordained conclusion: that carbon dioxide, the same gas that feeds plant life, is destroying the planet.

What began as environmental concern has hardened into climate orthodoxy — a moral creed enforced by bureaucrats, bankers, and media alike. It is a belief system that demands faith rather than understanding, obedience rather than inquiry. None of this means the climate isn’t changing. It means that the conversation about why and how has been systematically narrowed — not by discovery, but by decree.

The Rise of Climate Bureaucracy

By the 1990s, climate science had morphed from an academic discipline into a vast global bureaucracy. The Intergovernmental Panel on Climate Change (IPCC), founded in 1988, became the central authority — linking governments, corporations, and NGOs under a single mission: to define and manage “the problem.”

But the IPCC’s reports were never neutral. The “Summary for Policymakers” — the only section most journalists ever read — was often written before the science was finalized. Conclusions drove the evidence, not the other way around. Scientists who emphasized natural climate drivers such as solar cycles or ocean oscillations were quietly pushed aside. The institution that once claimed to study the climate became invested in proving a single narrative.

The Other Consensus

While the UN promotes its “consensus,” thousands of scientists disagree. In 2019, more than 2,000 experts signed the Climate Intelligence (CLINTEL) Declaration, stating bluntly:

“There is no [CO₂-induced] climate emergency. The geological record shows Earth’s climate has always varied naturally.”

CO2 is not pollution — it is plant food, essential for life and photosynthesis. Yet the UN’s focus on carbon rather than true pollutants such as heavy metals or industrial toxins has diverted environmentalism from its original mission into politics.

I witnessed this distortion firsthand while working within the UN system. My role involved servicing the Pollution Release and Transfer Register Protocol — a multinational agreement that monitors pollutants to air, land, and water. Real pollution exists, and it’s severe. But CO2 is not the problem. Confusing the two has served political and financial ends, not ecological ones.

When Science Becomes Statecraft

The line between scientific advice and political advocacy blurred long ago. Governments needed crisis to justify regulation and taxation. NGOs needed fear to justify funding. And so “consensus science” — a contradiction in terms — entered the lexicon and became the new norm.

Real science advances through dissent and enquiry; consensus is a political construct. But once the term took hold, it became a weapon. Questioning it marked one as a heretic. The language of faith — belief, denial, salvation — replaced the language of analysis. What began as environmental concern hardened into a kind of secular theology: the carbon creed.

Complexity was the enemy. Climate models showing alarming forecasts were amplified, while those showing uncertainty were ignored. What followed was the moralization of data: dissenters weren’t debated — they were denounced — and fear was rewarded over reason.

Scientists Who Broke Ranks

Many respected scientists have spoken out. Professor John R. Christy, Director of Atmospheric and Earth Sciences, University of Alabama, stated: “The established global warming theory significantly misrepresents the impact of extra greenhouse gases.” MIT’s Richard Lindzen observed, “In Earth’s long history, there’s been almost no correlation between climate and CO₂.” Dr. Nils-Axel Mörner, once with the IPCC, called the carbon narrative “a wonderful way to control taxation and people.” Greenpeace co-founder Patrick Moore declared the crisis “fake science” hijacked by ideology.

Such voices are rarely heard in mainstream media, not because their credentials lack merit, but because they challenge the most politically valuable story of the century.

The Money Behind the Mandate

Follow the money, and the picture becomes clearer. The financialization of carbon — through emissions trading, carbon credits, and “green investment” funds — transformed moral urgency into a trillion-dollar industry.

Governments pour billions into renewable subsidies, enriching banks and corporations far more than benefiting the planet. If the climate crisis were truly existential, would its management really be entrusted to those who profit from it?

In my book Climate CO₂ Hoax – How Bankers Hijacked the Environmental Movement, I detail how the 1992 UN Earth Summit in Rio marked the turning point — when financial elites effectively captured global environmental policy. Reports and whistleblower accounts later suggested that key policies adopted at the summit were drafted without open debate — policies that subordinated national sovereignty to global ‘sustainability’ goals.

Net Zero: The Mirage of Green Energy

The world’s economies are being restructured around “net zero,” but the irony is glaring. Building the infrastructure for so-called “green energy” — from solar panels to EV batteries — requires massive fossil-fuel use and destructive rare-earth mining.

Electric cars rely on lithium and cobalt extracted through environmentally devastating processes. The energy required to mine and refine these materials often exceeds what the vehicles save over their lifetimes.

In Germany, the green energy transition has turned a once-stable, low-cost energy grid into one of the most expensive in the industrial world. In Ireland, plans to close the coal-fired Moneypoint power station were reversed in 2022 as the government quietly converted it to burn oil instead — an unspoken admission that “renewables” can’t power modern economies.

Silencing Dissent

In this new orthodoxy, questioning the narrative is treated as blasphemy. Scientists who deviate from the CO2 script face censorship, ostracism, and blacklisting. The term “denier” — borrowed from the lexicon of moral condemnation — equates disagreement with depravity, and scepticism with sin.

Dr. Roger Pielke Jr. of the University of Colorado revealed how the IPCC relies on the RCP 8.5 scenario — one he described as “fantasy land,” completely detached from real-world data. Yet it remains the foundation of global policy, countless policy papers, and media headlines.

When truth becomes heresy, science itself collapses.

The Moralization of Carbon

CO2 has been transformed from a molecule into a moral symbol — the embodiment of human guilt. Citizens are told to measure their “carbon footprint” as if it were a sin ledger, redeemable only through “green” consumption. Yet many of these same products — from electric cars to solar infrastructure — depend on the same industrial extraction that environmentalism once opposed.

This framing serves a purpose. Instead of questioning the powerful institutions that profit from pollution and its supposed cure, individuals are encouraged to internalize blame. The message: You are the problem — not the system. It’s an old strategy of control — rule through guilt rather than force.

The Politics of Fear

No ideology survives without fear. Apocalyptic imagery — burning forests, flooded cities, “ticking clocks” — has replaced empirical evidence as the main instrument of persuasion. Yet forest fires and floods are as old as the Earth itself.

Children now grow up believing the planet will collapse before they reach adulthood. Politicians invoke “existential threat” rhetoric to justify sweeping economic and social controls. What was once a challenge to power has become a tool of it.

The New Creed

Modern climate orthodoxy is not science but ideology — a sociopolitical construct — a fusion of fear, money, and power that rewards conformity and punishes doubt. Science must never serve politics. When data becomes dogma, truth dies — and with it, freedom. If we truly wish to “save the planet,” we must first save science itself.

Mark Keenan is a former scientist at the UK Department of Energy and Climate Change and a former Environmental Affairs Officer with United Nations Environment. 

About Sea Surface Temperatures

Background from NOAA Climate.gov

Q:  What’s the temperature of water at the ocean’s surface?
A:  Colors on the map show the temperature of water right at the ocean’s surface. The darkest blue shows the coldest water: floating sea ice is usually present in these areas. Lighter shades of blue show temperatures of up to 80°F. White and orange areas show where surface temperatures are higher than 80°F, warm enough to fuel tropical cyclones or hurricanes.

Q:  Where do these measurements come from?
A:  Satellite instruments measure sea surface temperature—often abbreviated as SST—by checking how much energy comes off the ocean at different wavelengths. Computer programs merge sea surface temperatures from ships and buoys with the satellite data, and incorporate information from maps of sea ice. To produce the daily maps, programs invoke mathematical filters to combine and smooth data from all three sources.

Q:  Why do these data matter?
A:  While heat energy is stored and mixed throughout the depth of the ocean, the temperature of water right at the sea’s surface—where the ocean is in direct contact with the atmosphere—plays a significant role in weather and short-term climate. Where sea surface temperatures are high, relatively large amounts of heat energy and moisture enter the atmosphere, sometimes producing powerful, drenching storms downwind. Conversely, lower sea surface temperatures mean less evaporation. Global patterns of sea surface temperatures are an important factor for weather forecasts and climate outlooks.

Q:  How did you produce these snapshots?
A:  Data Snapshots are derivatives of existing data products: to meet the needs of a broad audience, we present the source data in a simplified visual style. NOAA’s Climate Data Records Program produces the Optimum Interpolation Sea Surface Temperature files. To produce our images, we run a set of scripts that access the source files, re-project them into desired projections at various sizes, and output them with a custom color bar.

With the federal government shutdown, dataset updates are uncertain, but OISST is current and shows the ocean presently cooling down from its 2024 high temperatures.

Note: Daily SST Ocean Temperature Graphic, 1982-2025

Use the options below to generate graphics of daily sea surface temperatures since 1982 using data from NOAA’s Optimum Interpolation Sea Surface Temperature (OISST) v2.1 dataset. [The chart above defines Global as 60°N to 60°S.]  These graphics will update daily, or as data becomes available on the Climate Reanalyzer website. Note the most recent two weeks of data are considered preliminary. Specific information about the data can be found here.

My Comment:

The chart shows 2025 tracking ~half a degree F cooler than 2024. That may not seem significant, except that the ocean covers 71% of the Earth’s surface, so any SST warming translates into heat content changes reported in zettajoules.  This is explained at the EPA website Climate Change Indicators: Ocean Heat:

The top 700 meters of the ocean contain 63% of the ocean’s heat content. The data show that this layer has warmed by about one zettajoule (1×10^22 joules) per year since 1990.

For reference, an increase of 1 unit on this graph (1 × 10^22 joules) is equal to approximately 17 times the total amount of energy used by all the people on Earth in a year (based on a total global energy supply of 606 exajoules in the year 2019, which equates to 6.06 × 10^20 joules).
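The EPA’s comparison is easy to verify; here is the arithmetic, using the figures quoted above (1×10^22 joules per chart unit, and a 2019 global energy supply of 606 exajoules):

```python
# Check the EPA's "approximately 17 times" comparison quoted above.
chart_unit_joules = 1e22          # one unit on the ocean-heat graph
global_supply_2019 = 606e18       # 606 exajoules = 6.06e20 joules
ratio = chart_unit_joules / global_supply_2019
print(round(ratio))  # → 17, matching the EPA's figure
```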

It appears we may presently be about to lose a comparable amount of energy through oceanic cooling.  And the sun could be to blame:

Scare du jour Marine Heat Waves

If you watch legacy media, you may be wondering about all the current headlines claiming Marine Heat Waves are raising the ocean to its boiling point.

Ocean heatwaves are breaking Earth’s hidden climate engine, Science Daily

The Pacific Ocean is overheated, making fall feel like summer, CBC

The ‘blob’ is back — except this time it stretches across the entire north Pacific, CNN

Record marine heatwaves may signal a permanent shift in the oceans, New Scientist

Global warming drives a threefold increase in persistence and 1 °C rise in intensity of marine heatwaves, PNAS

Etc., etc. etc.

The last one, Marcos et al. 2025, is the paper driving this recent clamor over ocean SSTs. From the abstract:

We determine that global warming is responsible for nearly half of these extreme events and that, on a global average, it has led to a three-fold increase in the number of days per year that the oceans experience extreme surface heat conditions. We also show that global warming is responsible for an increase of 1 °C in the maximum intensity of the events. Our findings highlight the detrimental role that human-induced global warming plays on marine heatwaves. This study supports the need for mitigation and adaptation strategies to address these threats to marine ecosystems.

The coordination of these media reports is exposed by the fact that all of them contain virtually the same claim:

As climate change causes our planet to warm, marine heatwaves are
becoming more frequent, more intense, and longer lasting. 

Animation shows locations of moderate to severe MHWs at mid-month, January through October 2025. A marine heatwave is defined as a period when the measured temperature stays above the threshold (the 90th percentile of observed values) for at least 5 consecutive days. Intensity is gauged against the difference between the climatological mean and that 90th-percentile threshold: an anomaly between 1 and 2 times this difference corresponds to a heatwave of moderate category; between 2 and 3 times, strong; between 3 and 4 times, severe; and greater than 4 times, extreme.
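The category scheme just described can be sketched as a small function. The climatological mean and 90th-percentile threshold below are hypothetical values for one grid cell, and the separate requirement of at least 5 consecutive days is omitted for brevity.

```python
# Minimal sketch of the MHW category scheme described above. Real analyses
# derive the climatology and 90th-percentile threshold from decades of
# daily data; the numbers used here are hypothetical.

def mhw_category(sst, clim_mean, threshold_p90):
    """Return the marine-heatwave category for one day's SST, or None."""
    if sst <= threshold_p90:
        return None                       # below the 90th-percentile threshold
    diff = threshold_p90 - clim_mean      # climatology-to-threshold gap
    ratio = (sst - clim_mean) / diff      # anomaly in units of that gap
    if ratio < 2:
        return "moderate"
    elif ratio < 3:
        return "strong"
    elif ratio < 4:
        return "severe"
    return "extreme"

# Hypothetical grid cell: climatological mean 14.0 C, 90th percentile 15.0 C
print(mhw_category(15.5, 14.0, 15.0))   # ratio 1.5 → "moderate"
print(mhw_category(18.5, 14.0, 15.0))   # ratio 4.5 → "extreme"
```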

First some background context on the phenomena (in italics with my bolds).

Background from perplexity.ai How Do Warm and Cool Ocean Blobs Circulate?

Warm and cool ocean blobs circulate through distinct oceanic and atmospheric processes, often linked to major currents and atmospheric patterns.

Warm ocean blobs, such as the “warm blob” in the northeast Pacific, form due to atmospheric circulation changes triggered by factors like Arctic warming. This leads to a high-pressure system over the region, weakening westerly winds and reducing ocean heat loss, causing surface waters to warm and creating persistent warm anomalies. The formation of these warm blobs involves a feedback loop between weakened winds, reduced ocean-atmosphere heat exchange, and ocean circulation, which retains heat in the mixed layer of the ocean.

Cool ocean blobs, like the North Atlantic “cold blob,” are influenced by weakening of the Atlantic Meridional Overturning Circulation (AMOC). This circulation moves warm, salty water northward, which cools, sinks, and then the cooler deep water travels southward in a conveyor-belt style flow. The cold blob forms when excess freshwater from ice melt dilutes the salty water, reducing its density and sinking ability, weakening this circulation and causing cooler surface water to persist. This cooling also affects the atmosphere by reducing water vapor, which decreases greenhouse effect locally and amplifies the cold anomaly, creating a coupled ocean-atmosphere feedback loop.

In summary, warm and cool ocean blobs circulate through a combination of ocean current dynamics and atmospheric interactions. Warm blobs form where atmospheric changes reduce ocean heat loss and circulation shifts retain heat, while cool blobs occur where circulation weakens, allowing cooler, less dense waters to persist and affect atmospheric conditions as well.

Then a summary of the issues undermining the alarmists’ claim.

From perplexity.ai What are reasons to doubt climate change is increasing marine heatwaves?

There are several reasons to doubt that climate change is definitively increasing the frequency, intensity, duration, and spatial extent of marine heatwaves, based on some ongoing scientific debates and uncertainties.

Natural Variability and Other Factors

♦  Marine heatwaves are influenced by natural climate variability, such as El Niño, Pacific Decadal Oscillation (PDO), and other oceanic and atmospheric processes. These phenomena can cause fluctuations in sea surface temperatures independent of long-term climate change, leading to periods of warmer ocean conditions that may be mistaken for climate-driven trends.

♦  Some studies emphasize the role of internal ocean variability, which can cause significant short-term temperature anomalies without requiring a direct link to anthropogenic climate change.

Complexity of Attribution

♦  The attribution of marine heatwave trends specifically to climate change involves complex modeling and statistical analysis, which can have uncertainties. Certain models suggest that long-term temperature increases are the primary driver, but the contribution of natural variability remains significant and sometimes difficult to separate clearly from climate signals.

♦  Regional differences and localized oceanic processes can obscure the global patterns, leading some scientists to argue that not all observed phenomena are directly attributable to climate change, particularly in areas with strong natural variability.

Limitations of Climate Models

♦  Climate models predicting future marine heatwave conditions depend heavily on assumptions about greenhouse gas emissions and other factors. These models often have limitations in resolution and in capturing small-scale processes, which could lead to overestimations or underestimations of climate change impacts.

Data Gaps and Uncertainties

♦  Although current observations show increasing trends in marine heatwaves, data gaps exist, especially in remote or deep-sea regions, making comprehensive global assessments challenging. These gaps contribute to uncertainty regarding the full extent and causality of observed changes.

♦  The precise long-term ecological impacts and possible adaptation or resilience mechanisms of marine ecosystems also remain uncertain, complicating the understanding of climate change’s role versus natural variability.

Summary

While a considerable body of evidence supports the role of climate change in increasing marine heatwaves, skepticism persists due to the influence of natural variability, model limitations, regional differences, and data gaps. These factors suggest that attribution is complex, and ongoing research continues to refine our understanding of the relative contributions of human influences and natural climate fluctuations.

Finally, a discussion of a specific example revealing flawed methods supposedly connecting CO2 emissions to marine heatwaves.

Much Ado About Marine Heat Waves

The promotion of this scare was published in 2022 at Nature by Barkhordarian et al. Recent marine heatwaves in the North Pacific warming pool can be attributed to rising atmospheric levels of greenhouse gases.  This post will unpack the reasons to distrust this paper and its claims.  First the Abstract of the subject and their declared findings in italics with my bolds.

Abstract

Over the last decade, the northeast Pacific experienced marine heatwaves that caused devastating marine ecological impacts with socioeconomic implications. Here we use two different attribution methods and show that forcing by elevated greenhouse gases levels has virtually certainly caused the multi-year persistent 2019–2021 marine heatwave. There is less than 1% chance that the 2019–2021 event with ~3 years duration and 1.6 ∘C intensity could have happened in the absence of greenhouse gases forcing. We further discover that the recent marine heatwaves are co-located with a systematically-forced outstanding warming pool, which we attribute to forcing by elevated greenhouse gases levels and the recent industrial aerosol-load decrease. The here-detected Pacific long-term warming pool is associated with a strengthening ridge of high-pressure system, which has recently emerged from the natural variability of climate system, indicating that they will provide favorable conditions over the northeast Pacific for even more severe marine heatwave events in the future.

Background on Ocean Warm Pools

Wang and Enfield study is The Tropical Western Hemisphere Warm Pool Abstract in italics with my bolds.

Abstract

The Western Hemisphere warm pool (WHWP) of water warmer than 28.5°C extends from the eastern North Pacific to the Gulf of Mexico and the Caribbean, and at its peak, overlaps with the tropical North Atlantic. It has a large seasonal cycle and its interannual fluctuations of area and intensity are significant. Surface heat fluxes warm the WHWP through the boreal spring to an annual maximum of SST and areal extent in the late summer/early fall, associated with eastern North Pacific and Atlantic hurricane activities and rainfall from northern South America to the southern tier of the United States. SST and area anomalies occur at high temperatures where small changes can have a large impact on tropical convection. Observations suggest that a positive ocean-atmosphere feedback operating through longwave radiation and associated cloudiness is responsible for the WHWP SST anomalies. Associated with an increase in SST anomalies is a decrease in atmospheric sea level pressure.

Chou and Chou published On the Regulation of the Pacific Warm Pool Temperature:

Abstract

Analyses of data on clouds, winds, and surface heat fluxes show that the transient behavior of basin-wide large-scale circulation has a significant influence on the warm pool sea surface temperature (SST). Trade winds converge to regions of the highest SST in the equatorial western Pacific. The reduced evaporative cooling due to weakened winds exceeds the reduced solar heating due to enhanced cloudiness. The result is a maximum surface heating in the strong convective and high SST regions. The maximum surface heating in strong convective regions is interrupted by transient atmospheric and oceanic circulation. Regions of high SST and low-level convergence follow the Sun. As the Sun moves away from a convective region, the strong trade winds set in, and the evaporative cooling enhances, resulting in a net cooling of the surface. We conclude that the evaporative cooling associated with the seasonal and interannual variations of trade winds is one of the major factors that modulate the SST distribution of the Pacific warm pool.

Comment:

So these are but two examples of oceanographic studies describing natural factors driving the rise and fall of Pacific warm pools.  Yet the Nature paper claims rising CO2 from fossil fuels is the causal factor, waving away natural processes.  Skeptical responses were already lodged upon the first occurrence of the North Pacific marine heat wave, the “Blob” much discussed by US west coast meteorologists.  One of the most outspoken critics of the global warming attributionists has been Cliff Mass of the University of Washington in Seattle.  Writing in 2014 and 2015, he observed the rise and fall of the warming blob and then posted a critique of attribution attempts at his blog.  For example, Media Miscommunication about the Blob.  Excerpts in italics with my bolds.

Blob Media Misinformation

One of the most depressing things for scientists is to see the media misinform the public about an important issue.

During the past few days, an unfortunate example occurred regarding the warm water pool that formed over a year ago in the middle of the north Pacific, a.k.a. the blob. Let me show how this communication failure occurred, with various media outlets messing things up in various ways.

The stimulant for the nationwide coverage of the Blob was a very nice paper published by Nick Bond (UW scientist and State Climatologist), Meghan Cronin, Howard Freeland, and Nathan Mantua in Geophysical Research Letters.

This publication described the origin of the Blob, showing that it was the result of persistent ridging (high pressure) over the Pacific. The high pressure, and associated light winds, resulted in less vertical mixing of the upper layer of the ocean; with less mixing of subsurface cold water to the surface. Furthermore, the high pressure reduced horizontal movement of colder water from the north. Straightforward and convincing work.

The inaccurate press release then led to a media frenzy, with the story going viral. And unfortunately, many of the media got it wrong.

There were two failure modes. In one, the headline was wrong, but the internal story was correct. . . In the second failure mode, the story itself was essentially flawed, with most claiming that the Blob off of western North America was the cause of the anomalous circulation (big ridge over West Coast, trough over the eastern U.S.). (The truth: the Blob was the RESULT of the anomalous circulations.) Some stories even claimed that the Blob CAUSED the California drought or the cold wave in the eastern U.S. These deceptive stories were found in major outlets around the country, including the Washington Post, NBC News, and others.

Blob Returns,  Attribution Misinformation

When the Blob returned in 2020-2021, Cliff Mass had cause to again lament how the public is misled.  This time the misdirection was instigated by activist scientists using flawed methods, as he explains in his post Miscommunication in Recent Climate Attribution Studies.  Excerpts in italics with my bolds.

This attribution report, and most media stories that covered it, suggested a central role for global warming for the heatwave. As demonstrated in my previous blog, their narrative simply does not hold up to careful examination.

This blog will explain why their basic framing and approach is problematic, leading readers (and most of the media) to incorrect conclusions.

For the heatwave, the attribution folks only examine the statistics of temperatures hitting the record highs (108F in Seattle), but avoid looking at the statistics of temperature exceeding 100F, or even the record highs (like 103F in Seattle). There is a reason they don’t do that. It would tell a dramatically different (and less persuasive) story.

In the attribution studies, the main technology for determining changed odds of extreme weather is to use global climate models. First, they run the models with greenhouse gas forcing (which produces more extreme precipitation and temperature), and then they run the models again without increased greenhouse gases concentrations. By comparing the statistics of the two sets of simulations, they attempt to determine how the odds of extreme precipitation or temperature change.

Unfortunately, there are serious flaws in their approach: climate models fail to produce sufficient natural variability (they underplay the black swans) and their global climate models don’t have enough resolution to correctly simulate critical intense, local precipitation features (from mountain enhancement to thunderstorms). On top of that, they generally use unrealistic greenhouse gas emissions in their models (too much, often using the RCP8.5 extreme emissions scenario). And there is more, but you get the message. (I am a weather/climate modeler, by the way, and know the model deficiencies intimately.)
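The with-and-without-forcing comparison Mass describes boils down to a ratio of exceedance probabilities across two model ensembles. A toy sketch with synthetic numbers (not real model output) shows the mechanics; Mass’s point is that if models understate natural variability, the “natural-only” probability is biased low and the ratio inflated.

```python
# Toy sketch of attribution-style "risk ratio" arithmetic. The two
# "ensembles" here are just Gaussian random draws, not climate model runs:
# the forced ensemble is shifted warm by a hypothetical 1 C.
import random

random.seed(42)
natural = [random.gauss(20.0, 1.5) for _ in range(10000)]  # no added forcing
forced  = [random.gauss(21.0, 1.5) for _ in range(10000)]  # +1 C mean shift

def exceedance(sample, threshold):
    """Fraction of ensemble members exceeding the threshold."""
    return sum(x > threshold for x in sample) / len(sample)

threshold = 24.0
p_nat = exceedance(natural, threshold)
p_forced = exceedance(forced, threshold)
print(f"risk ratio at {threshold} C: {p_forced / max(p_nat, 1e-9):.1f}")
```

If the natural ensemble's spread (here 1.5) is set too narrow relative to the real climate, extreme thresholds look nearly impossible without forcing, and the computed ratio balloons; that is the variability deficiency Mass identifies.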

Vaunted Fingerprinting Attribution Is Statistically Unsound

From Barkhordarian et al.

Unlike previous studies which have focused on linking the SST patterns in the North Pacific to changes in the oceanic circulation and the extratropical/tropical teleconnections2,12,17,18,20,24,26, we here perform two different statistical attribution methodologies in order to identify the human fingerprint in Northeast Pacific SST changes both on multidecadal timescale (changes of mean SST) and on extreme SST events on daily timescale (Marine Heatwaves). Evidence that anthropogenic forcing has altered the base state (long-term changes of mean SST) over the northeast Pacific, which is characterized by strong low-frequency SST fluctuations, would increase confidence in the attribution of MHWs27, since rising mean SST is the dominant driver of increasing MHW frequency and intensity, outweighing changes due to temperature variability1,2.

In this study, we provide a quantitative assessment of whether GHG forcing, the main component of anthropogenic forcings, was necessary for the North Pacific high-impact MHWs (the Blob-like SST anomalies) to occur, and whether it is a sufficient cause for such events to continue to repeatedly occur in the future. With these purposes, we use two high-resolution observed SST datasets, along with harnessing two initial-condition large ensembles of coupled general circulation models (CESM1-LE28,29 with 35 members, and MPI-GE30 with 100 members). These large ensembles can provide better estimates of an individual model’s internal variability and response to external forcing31,32, and facilitate the explicit consideration of stochastic uncertainty in attribution results33. We also use multiple single-forcing experiments from the Detection and Attribution Model Intercomparision Project (DAMIP34) component of Coupled Model Intercomparison Project phase 6 (CMIP635).

From Barkhordarian et al. References

The IPCC’s attribution methodology is fundamentally flawed

The central paper underpinning the attribution analysis was assessed and found unreliable in statistician Ross McKitrick’s published evaluation. Excerpts in italics with my bolds.

One day after the IPCC released the AR6 I published a paper in Climate Dynamics showing that their “Optimal Fingerprinting” methodology on which they have long relied for attributing climate change to greenhouse gases is seriously flawed and its results are unreliable and largely meaningless. Some of the errors would be obvious to anyone trained in regression analysis, and the fact that they went unnoticed for 20 years despite the method being so heavily used does not reflect well on climatology as an empirical discipline.

My paper is a critique of “Checking for model consistency in optimal fingerprinting” by Myles Allen and Simon Tett, which was published in Climate Dynamics in 1999 and to which I refer as AT99. Their attribution methodology was instantly embraced and promoted by the IPCC in the 2001 Third Assessment Report (coincident with their embrace and promotion of the Mann hockey stick). The IPCC promotion continues today: see AR6 Section 3.2.1. It has been used in dozens and possibly hundreds of studies over the years. Wherever you begin in the Optimal Fingerprinting literature (example), all paths lead back to AT99, often via Allen and Stott (2003). So its errors and deficiencies matter acutely.

Abstract

Allen and Tett (1999, herein AT99) introduced a Generalized Least Squares (GLS) regression methodology for decomposing patterns of climate change for attribution purposes and proposed the “Residual Consistency Test” (RCT) to check the GLS specification. Their methodology has been widely used and highly influential ever since, in part because subsequent authors have relied upon their claim that their GLS model satisfies the conditions of the Gauss-Markov (GM) Theorem, thereby yielding unbiased and efficient estimators.

But AT99:

  • stated the GM Theorem incorrectly, omitting a critical condition altogether;
  • proposed a GLS method that cannot satisfy the GM conditions; and
  • constructed a variance estimator that is inconsistent by construction.

Additionally, they:

  • did not formally state the null hypothesis of the RCT;
  • did not identify which of the GM conditions it tests; and
  • did not prove its distribution and critical values, rendering it uninformative as a specification test.

The continuing influence of AT99 two decades later means these issues should be corrected. I identify 6 conditions needing to be shown for the AT99 method to be valid.
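For readers unfamiliar with the machinery, the optimal-fingerprinting estimator is a GLS regression. Below is a minimal numpy sketch with synthetic data. McKitrick’s objection concerns the covariance matrix C: in AT99-style studies it is estimated from climate-model runs, so the weights already embed the models’ own assumptions about forced responses.

```python
# Minimal sketch of the GLS estimator at the heart of optimal fingerprinting:
#   beta_hat = (X' C^-1 X)^-1 X' C^-1 y
# where X holds the "fingerprint" patterns and C is the noise covariance.
# All data here are synthetic; in AT99-style studies C comes from climate
# model control runs, which is the crux of McKitrick's critique.
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), np.linspace(0, 1, n)])  # fingerprint patterns
beta_true = np.array([0.2, 1.5])                         # synthetic scaling factors
C = np.diag(np.linspace(0.5, 2.0, n))                    # assumed noise covariance
y = X @ beta_true + rng.multivariate_normal(np.zeros(n), C)

Cinv = np.linalg.inv(C)
beta_hat = np.linalg.solve(X.T @ Cinv @ X, X.T @ Cinv @ y)
print(beta_hat)  # estimates should land near beta_true
```

The estimator itself is textbook; the dispute is whether the Gauss-Markov conditions that justify it actually hold when C is an estimate produced by the very models under test.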

In Conclusion,  McKitrick:

One point I make is that the assumption that an estimator of C provides a valid estimate of the error covariances means the AT99 method cannot be used to test a null hypothesis that greenhouse gases have no effect on the climate. Why not? Because an elementary principle of hypothesis testing is that the distribution of a test statistic under the assumption that the null hypothesis is true cannot be conditional on the null hypothesis being false. The use of a climate model to generate the homoscedasticity weights requires the researcher to assume the weights are a true representation of climate processes and dynamics.

The climate model embeds the assumption that
greenhouse gases have a significant climate impact.

Or, equivalently, that natural processes alone cannot generate a large class of observed events in the climate, whereas greenhouse gases can. It is therefore not possible to use the climate model-generated weights to construct a test of the assumption that natural processes alone could generate the class of observed events in the climate.

October Arctic Ice Grows After Pope’s Blessing

Last Wednesday Pope Leo spoke before a slowly melting chunk of glacial ice in Vatican City in his first address on climate change.  The pontiff addressed a crowd of roughly 1,000 attendees and called on people all over the world to demand action on climate from their governments. This post presents evidence the Arctic is already heeding his call, growing by leaps and bounds. /sarc

The graph above shows Sept./Oct. daily ice extents for 2025 compared to 19-year averages, plus some years of note. Day 260 has on average been the lowest daily ice extent over the last 19 years. Note how in just the last five days, Arctic ice extent has grown by half a wadham, or ~0.5M km2 (one wadham = 1M km2 of sea ice extent)!

Why is this important?  All the claims of a global climate emergency depend on dangerously higher temperatures, lower sea ice, and rising sea levels.  The lack of additional warming prior to the 2023 El Nino, which is now receding, is documented in the post Tropics UAH Temps Cooler August 2025.

The lack of acceleration in sea levels along coastlines has been discussed also.  See Observed vs. Imagined Sea Levels 2023 Update.

Also, a longer term perspective is informative:

[Post-glacial sea level rise chart]

The table below shows the distribution of sea ice on day 278 across the Arctic regions, on average, this year and in 2007. At this point in the year, the Bering and Okhotsk seas are open water and thus dropped from the table. NH ice extent has grown to 5.64M km2, and the overall surplus to average is 447k km2 (9%). The 2025 ice extent exceeds 2007 by a full wadham.

Region  2025 (day 278)  Day 278 ave.  2025-Ave.  2007 (day 278)  2025-2007
 (0) Northern_Hemisphere 5643927 5196640 447286 4560836 1083091
 (1) Beaufort_Sea 781758 582635 199123 590267 191490
 (2) Chukchi_Sea 474277 232765 241512 25934 448343
 (3) East_Siberian_Sea 558888 329424 229465 311 558577
 (4) Laptev_Sea 299904 208865 91039 305220 -5316
 (5) Kara_Sea 1026 45918 -44892 22717 -21691
 (6) Barents_Sea 0 17669 -17669 3580 -3580
 (7) Greenland_Sea 175128 271377 -96248 404376 -229248
 (8) Baffin_Bay_Gulf_of_St._Lawrence 81997 63374 18623 72162 9835
 (9) Canadian_Archipelago 355462 410626 -55164 349687 5775
 (10) Hudson_Bay 1172 2333 -1161 1936 -764
 (11) Central_Arctic 2912747 3030507 -117760 2783370 129376

[Bathymetric map of the Arctic Ocean]

Illustration by Eleanor Lutz shows Earth’s seasonal climate changes. If played in full screen, the four corners present views from top, bottom and sides. It is a visual representation of scientific datasets measuring ice and snow extents.