Scottish Wind Power from Diesel Generators

John Gideon Hartnett writes at Spectator Australia Another climate myth busted, explaining how the Scottish public was scammed about their virtuous green wind power by the public authority. Excerpts in italics with my bolds and added images, from a post by John Ray at his blog.

What I like to call ‘climate cult’ wind farms expose the myth that wind can replace hydrocarbon fuels for power generation. The following story is typical of the problems associated with using wind turbines to generate electricity in a cold environment.

Apparently, diesel-fuelled generators are being used to power some wind turbines as a way of de-icing them in cold weather, that is, to keep them rotating. Also, it appears that the wind turbines have been drawing electric power directly from the grid instead of supplying it to the grid.

Scotland’s wind turbines have been secretly using fossil fuels.

The revelation is now fueling environmental, health and safety concerns, especially since the diesel-powered turbines were running for up to six hours a day.

Scottish Power said the company was forced to hook up 71 windmills to the fossil fuel supply after a fault on its grid. The move was an attempt to keep the turbines warm and working during the cold month of December.

South Scotland Labour MSP Colin Smyth said regardless of the reasons, using diesel to de-ice faulty turbines is “environmental madness”.  Source: Straight Arrow News

Charging system for Teslas at Davos WEF meeting.

Nevertheless, those pushing these technologies are so blind to the physical realities of the world that they are prepared to ignore failures while pretending to generate electricity efficiently. I say ‘failures’ because wind energy was put out of service the day the Industrial Revolution was fired up (pun intended) with carbon-based fuels from petroleum and coal. And in the case of modern wind turbines, they do not always generate electricity; they sometimes consume it from the grid.

Green energy needs the hydrocarbon-based fuel it claims to replace.

Hydrocarbon-based fuels were provided providentially by the Creator of this planet for our use. That includes coal, which has been demonised in the Western press as some sort of evil. But those who run that line must have forgotten to tell China, because they build two coal-fired power stations every other week. No other source of non-nuclear power is as reliable for baseload generation.

How will wind turbines work in a globally cooling climate as Earth heads into a grand solar minimum and temperatures plummet? This case from Scotland may give us a hint. As cloud cover increases with cooler weather, and more precipitation occurs, how will solar perform? It won’t.
The two worst choices for electricity generation in cold, wet, and stormy environments are solar and wind. Solar is obvious. No sun means no power generation. But you might think wind is a much better choice under those conditions.

However, wind turbine rotors have to be shut down if the wind becomes too strong and/or rapidly changes in strength. They are shut down when too much ice forms or when there is insufficient wind. And now we have learned that in Scotland they simply turn on the diesel generators when that happens, or draw power directly from the grid.

Where are all the real engineers? Were they fired?

As for wind turbines going forward, once their presence in the market has destroyed all the coal or natural gas electricity generators, how are they going to keep the rotors turning and the lights on?

These devices are based on a rotating shaft with a massive main bearing that suffers large frictional forces. Only a high-quality, heavy-duty oil can lubricate such a system, and I am sure it would need to be regularly replaced.

Wind turbine gearbox steps up the blade rotor speed (left) to an output of up to 1800 rpm at the electrical generator (right)
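A quick sense of the mechanics (illustrative numbers of my own, not from the post): a large turbine rotor typically turns at something like 10–20 rpm, so reaching the roughly 1800 rpm a conventional generator wants implies a step-up ratio on the order of

$$ i \;=\; \frac{n_{\text{generator}}}{n_{\text{rotor}}} \;\approx\; \frac{1800\ \text{rpm}}{15\ \text{rpm}} \;=\; 120, $$

usually achieved with a multi-stage planetary and helical gearbox, which is where the heavy lubrication demand discussed below comes from.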

Massive amounts of carbon-based oil are needed for the lubrication of all gears and bearings in a wind turbine system, which is mechanical in its nature. In 2019, wind turbine applications were estimated to consume around 80 per cent of the total supply of synthetic lubricants. Synthetic lubricants are manufactured using chemically modified petroleum components rather than whole crude oil. These are used in the wind turbine gearboxes, generator bearings, and open gear systems such as pitch and yaw gears.

Now we also know that icing causes the rotors to stop turning, so diesel power has to be used to keep the bearings warm during cold weather. The diesel generator is needed to get the blades turning on start-up, to overcome the limiting friction of the bearing, or when the speed of the rotor drops too low.

In this case in Scotland, 71 windmills on the farm were supplied with diesel power. Each windmill has its own diesel generator. Just think of that.

What about the manufacturing of these windmills?

The blades are made from tons of fibreglass. Manufacturing fibreglass requires the mining of silica sand, limestone, kaolin clay, dolomite, and other minerals, which requires diesel-driven machines. These minerals are melted in a furnace at high temperatures (around 1,400°C) to produce the glass. Where does that heat come from? Not solar or wind power, that is for sure. The resin in the fibreglass comes from alcohol or petroleum-based manufacturing processes.

The metal structure is made from steel that requires tons of coking coal (carbon) essential to make pig iron, which is made from iron ore in a blast furnace at temperatures up to 2,200°C. The coal and iron ore are mined from the ground with giant diesel-powered machines and trucks. The steel is made with pig iron and added carbon in another furnace powered by massive electric currents. Carbon is a critical element in steel making, as it reacts with iron to form the desired steel alloy. None of this comes from wind and solar power.

Wind turbine power generation is inherently intermittent and unreliable.
It can hardly be called green as the wind turbines require enormous
amounts of hydrocarbons in their manufacture and continued operation.

 

Biden Climate Policies the Greatest Financial Risk

Will Hild writes at Real Clear Policy The Biden Administration Proves Itself Wrong on ‘Climate Risk’.  His article shows how the feds’ own numbers prove their net zero policies pose a far greater financial risk than the climate itself. Excerpts in italics with my bolds and added images.

Environmental activists and left-leaning political bodies have long argued that climate risk is a form of financial risk, constantly pressuring blue states and the Biden Administration to make the issue a central part of their agendas. Recently, a group of those states has started suing oil and gas companies directly in their state courts. California, for example, claims that oil companies have colluded for decades to keep clean energy unavailable. Such lawsuits are ludicrous, and my organization, Consumers’ Research, filed an amicus brief at the Supreme Court supporting an effort to stop this litigation abuse that drives up consumer costs.

These harmful suits are driven in part by the idea that climate risks are inherently financial risks. However, evidence from the Biden Administration’s own study on climate risk shows that these risks are grossly exaggerated and immaterial. In an attempt to justify its radical and onerous climate policies, the Administration inadvertently exposed the fraud behind the environmental movement.

Soon after taking office, President Biden issued an executive order asking executive agencies to assess “the climate-related financial risk, including both physical and transition risks, to … the stability of the U.S. financial system.” Agency officials quickly responded to the order. Treasury Secretary Yellen announced that “climate change is an emerging and increasing threat to U.S. financial stability.” The FDIC declared that climate risk endangered the banking system. The SEC issued controversial climate risk disclosure rules, which impose massive regulatory burdens on Americans.

The problem with these new rules is that the Administration lacked
sufficient evidence to show that “climate-related financial risk” existed.

The SEC and Secretary Yellen relied on a Biden Administration report issued in 2021 by the Financial Stability Oversight Council. However, the report itself admitted that there were “gaps” in the evidence needed to support its speculative assertion that climate change would “likely” present shocks to the financial system. 

John H. Cochrane, a respected Stanford professor, slammed the Biden Administration’s “climate-related financial risk” assertions and highlighted that the Administration has not shown any serious threat to the financial system. “Financial regulators may only act if they think financial stability is at risk,” but “there is absolutely nothing in even the most extreme scientific speculations” to support the type of risk that would allow financial regulators to intervene. [See my synopsis Financial Systems Have Little Risk from Climate]

In response to criticism, the Biden Administration came up with an idea to manufacture its own evidence.  If the Federal Reserve created scenarios in which banks must simulate extreme “physical” and “transition” climate risks, these custom-designed scenarios could show a large impact on the financial system, just like federal “stress tests” for banks.

To ensure that the “stresses” were sufficiently severe,
the Biden Administration manipulated the scenarios
to generate as much stress as possible.

For example, for “physical risk,” major banks had to simulate the effect of a storm-of-two-centuries-sized hurricane smashing into the heavily populated Northeast United States with no insurance coverage available to pay for the damage.  For “transition risk,” the government demanded a simulation in which “stringent climate policies are introduced immediately,” without any chance for banks to prepare for such policies, along with rapidly rising carbon prices. 

Despite these attempts to make the climate risk as extreme as possible, the tests utterly failed to demonstrate any significant effect.  The Administration’s study demonstrated that even under some of the most extreme climate scenarios imaginable, the probability of default on loans only increased by half a percentage point or less.   In contrast, federal bank stress tests involving true financial stresses, such as a severe recession, have resulted in probabilities of default jumping by 20 to 40 times that amount or more, leading to hundreds of billions in losses. 

The climate analyses also revealed the expected costs
of the Biden Administration’s quixotic net zero quest. 

The Biden Administration employed scenarios from the Network of Central Banks and Supervisors for Greening the Financial System (NGFS). Those climate scenarios envision the cost of carbon emissions steadily rising and reaching over $400 per ton by 2050.  Given that the average American emits about 16 tons of carbon a year, the Biden Administration’s hand-picked climate scenarios would cost the average American around $125k between now and 2050 in government mandated carbon fees.
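A rough back-of-the-envelope check of that figure (a sketch under assumptions of my own; the article does not spell out the exact price path):

```python
# Back-of-the-envelope check of the per-person carbon-fee figure.
# Assumptions (mine, not the article's): a hypothetical carbon price rising
# linearly from $100/ton in 2025 to $400/ton in 2050, and a constant
# 16 tons of CO2 emitted per American per year (the article's number).

TONS_PER_YEAR = 16                     # article's per-capita emissions figure
START_YEAR, END_YEAR = 2025, 2050
START_PRICE, END_PRICE = 100.0, 400.0  # $/ton, assumed linear ramp

total_cost = 0.0
for year in range(START_YEAR, END_YEAR + 1):
    frac = (year - START_YEAR) / (END_YEAR - START_YEAR)
    price = START_PRICE + frac * (END_PRICE - START_PRICE)
    total_cost += price * TONS_PER_YEAR

print(f"Cumulative fee per person: ${total_cost:,.0f}")
# -> about $104,000 with these assumptions, i.e. the same order of magnitude
#    as the article's ~$125k estimate.
```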

Thus, the Biden Administration’s own bank stress test proved that climate risk is not a material financial risk, and that the biggest financial risk at issue is that the Administration’s net-zero policies would result in massive financial losses for everyday Americans. The Biden Administration should stop using lies to support its burdensome policies, and blue states should drop their punitive lawsuits against oil and gas companies.  Otherwise, the result of both efforts will be to inflict high costs on everyday Americans without any benefit.

 

 

Ruinous Folly of CO2 Pricing

Dr. Lars Schernikau is an energy economist explaining why CO2 pricing (also falsely called “carbon pricing”) is a terrible idea fit only for discarding.  His blog article is The Dilemma of Pricing CO2.  Excerpts below in italics with my bolds and added images.

1. Understanding CO₂ Pricing
2. Economic and Environmental Impacts
3. Global Economic Implications
4. Alternative Approaches
5. Conclusion
6. Additional comments and notes
7. References

Introduction

As an energy economist I am confronted daily with questions about the “energy transition” away from conventional fuels. As we know, the discussion about the “energy transition” stems from concerns about climatic changes.

The source of climatic changes is a widely discussed issue, with numerous policies and measures proposed to mitigate its impact. One such measure is the current and future pricing of carbon dioxide (CO₂) emissions. The logic followed is that if human CO₂ emissions are reduced, future global temperatures will be measurably lower, extreme weather events will be reduced, and sea levels will rise less or stop rising altogether.

Although intended to reduce greenhouse gases, this approach has sparked considerable debate. In this blog post I discuss the controversial topic of CO₂ pricing, examining its economic and environmental ramifications.

However, this article is not about the causes of climatic changes, nor is it about the negative or positive effects of a warming planet and higher atmospheric CO₂ concentrations. It is also not about the scientifically undisputed fact that we don’t know how much warming CO₂ causes (a list of recent academic research on CO₂’s climate sensitivity can be found at the end of this blog).

Nor do I unpack the undisputed and IPCC-confirmed fact that each additional ton of CO₂ in the atmosphere has less warming effect than the previous ton, since the climate sensitivity of CO₂ is a logarithmic function, irrespective of us not knowing what that climate sensitivity is. I also don’t discuss the NASA-satellite-confirmed greening of the world over the past decades, partially driven by higher atmospheric CO₂ concentrations (see sources incl. Chen et al. 2024).
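For readers who want the standard form of that logarithmic relationship, the simplified expression commonly used in the literature (my addition, e.g. Myhre et al. 1998; it is not a formula from Schernikau’s post) is

$$ \Delta F \;\approx\; 5.35\,\ln\!\frac{C}{C_0}\ \ \mathrm{W\,m^{-2}}, \qquad \Delta T \;\approx\; S\,\log_2\!\frac{C}{C_0}, $$

where C/C₀ is the ratio of CO₂ concentrations, ΔF the radiative forcing, and S the (disputed) equilibrium climate sensitivity per doubling. Because the forcing grows only with the logarithm of concentration, each additional ton contributes less warming than the one before.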

Instead, this blog post is about the environmental and economic “sense”, or lack thereof, of pricing CO₂ emissions as currently practiced in most OECD countries and increasingly seen in developing nations. It is about the “non-sense” of measuring practically all human activity with a “CO₂ footprint”, often mistakenly called “carbon footprint”, and of having nearly every organization make claims of current or future “Net-Zero” (Figure 1).

1. Understanding CO₂ Pricing

CO₂ pricing aims to internalize the external costs of CO₂ emissions, thereby encouraging businesses and individuals to reduce their “carbon footprint”. The concept is straightforward: by assigning a cost to CO₂ emissions, it becomes financially advantageous to emit less CO₂.

However, this simplistic view overlooks
significant complexities and unintended consequences.

Our entire existence is based on drawing from nature (“renewable” or not), so the “Net-Zero” discussion ignores a fundamental requirement for our survival. I agree that it should be our aim to reduce the environmental footprint as much as possible but only if our lives, health, and wealth don’t deteriorate as a result.

Now, I am sure, some readers and many “activists” may disagree, which I respect but, at a global level, find unrealistic. However, I would assume that most agree that no-one’s life ought to be harmed or shortened for the sake of reducing the environmental impact made. Otherwise, there is little room for a conversation.

BloombergNEF’s “New Energy Outlook” from May 2024 should possibly be called “CO₂ Outlook”, as there is little to be found about energy and its economics but rather all about CO₂ emissions and the so-called “Net-Zero” (Figure 1), which is in line with media, government, and educational focus primarily on carbon dioxide emissions.

2. Economic and Environmental Impacts

One of the primary criticisms of CO₂ pricing is that it addresses only one environmental externality while ignoring others. This narrow focus can lead to economic distortions, as it fails to account for the full spectrum of environmental and social impacts. For instance, while CO₂ pricing might reduce emissions, it can also drive up energy costs, disproportionately affecting lower-income populations and hindering economic development in less-developed countries.

It is by now undisputed amongst energy economists that large-scale “Net-Zero” intermittent and unpredictable wind and solar power generation increases the total or “full” cost of electricity, primarily because of their low energy density, intermittency, inherent net energy and raw material inefficiency, mounting integration costs for power grids, and the need for a drastically overbuilt installation system plus an overbuilt backup/storage system to cover that intermittency.

CO₂ pricing can also result in environmental trade-offs. For example, the shift towards “renewable” energy sources like wind and solar, incentivized by CO₂ pricing, has its own set of environmental impacts, including land use, resource extraction, energy footprint, and energy storage challenges.

When BloombergNEF (Figure 1) displays
how clean power and electrification
will directly reduce CO₂ emissions to zero,
they are clearly mistaken.

My native country Germany provides a notable example of the complexities involved in transitioning to “renewable” energy. The country has invested heavily in wind and solar power, leading to the highest electricity costs among larger nations. Germany’s installed wind and solar capacity is now twice the total peak power demand. This variable “renewable” wind and solar power capacity now produces about a third of the country’s electricity and contributes about 6% to Germany’s primary energy supply (Figure 2).

Sources: Schernikau based on Fraunhofer, Agora, AG Energiebilanzen. See also www.unpopular-truth.com/graphs.
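An illustrative back-of-the-envelope sketch of how those two statements fit together (all numbers below are assumed round figures of my own, not taken from the post): low capacity factors are what reconcile a nameplate capacity of roughly twice peak demand with only about a third of the electricity actually supplied.

```python
# Illustrative sketch with assumed round numbers (not the post's data):
# why wind+solar capacity of ~2x peak demand can still supply only about
# a third of annual electricity, given low capacity factors.

installed_capacity_gw = 160     # assumed combined wind + solar nameplate capacity
peak_demand_gw = 80             # assumed peak power demand
annual_demand_twh = 500         # assumed annual electricity consumption
blended_capacity_factor = 0.13  # assumed average output as a share of nameplate

hours_per_year = 8760
annual_output_twh = installed_capacity_gw * blended_capacity_factor * hours_per_year / 1000
share_of_demand = annual_output_twh / annual_demand_twh

print(f"Capacity relative to peak demand: {installed_capacity_gw / peak_demand_gw:.1f}x")
print(f"Wind+solar share of annual electricity: {share_of_demand:.0%}")
# -> 2.0x peak demand, yet only ~36% of annual electricity with these
#    assumptions, broadly matching the post's "about a third".
```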

3. Global Economic Implications

Higher energy costs, obviously and undisputedly, hurt less affluent people and stifle the development of poorer nations (Figure 3). Thus, a move to more expensive wind and solar energy has “human externalities”. The less fortunate will be “starved of” energy as they wouldn’t be able to afford it, leading to a literal reduction in life expectancy.

Source: Eschenbach 2017; Figure 38 in Book “The Unpopular Truth… about Electricity and the Future of Energy”

CO₂ pricing typically focuses only on emissions during operation,
neglecting significant environmental and economic costs
incurred during other stages or by the entire system.

For instance, the production of solar panels involves substantial energy and raw material inputs. Today there is not one single solar panel that is produced without coal. Similarly, the manufacturing and transportation processes of wind turbines and electric vehicles are energy-intensive and environmentally impactful. These stages are rarely accounted for in CO₂ pricing schemes, leading to a distorted view of their true environmental footprint. Also not accounted for are:

a) the required overbuild,
b) short and long duration energy storage,
c) backup facilities, or
d) larger network integration and transmission infrastructure.

Source: Schernikau, adapted from Figure 39 in Book “The Unpopular Truth… about Electricity and the Future of Energy“

Figure 4 illustrates how virtually all CO₂ pricing or taxation happens only at the stage of “operation” or combustion. How else could a “Net-Zero” label be assigned to a solar panel produced from coal and minerals extracted in Africa with diesel-run equipment, transported to China on a vessel powered by fuel-oil, and processed with heat and electricity from coal- or gas-fired power, partially using forced labour? All this energy-intensive activity and not a single kilogram of CO₂ is taxed (see my recent article on this subject here). The same applies to wind turbines, hydro power, biofuel, or electric vehicles.

It turns out, CO₂ tax is basically just a means to redistribute wealth, with the collecting agency (government) deciding where the funds go. Yes, a CO₂ tax does incentivize industry to reduce CO₂ emissions at their taxed operations only, but this comes at a cost to economies, the environment, and often people. . . Any economist will confirm that pricing one externality but not others leads to economic distortions and, many would say, worse environmental impacts.

4. Alternative Approaches

Distortion, in this case, is just another word for unintended consequences to the environment, our economies, and the people. Pricing CO₂ only during combustion but failing to price methane, raw material and recycling, inefficiency, or embodied energy, or energy shortages, or land requirement, or greening from CO₂… will cause undesirable outcomes. The world will be worse off economically and environmentally.

Protest if you must, but let me offer a simple example. The leaders of the Western world seem to have united around abandoning coal immediately, because it is the highest CO₂ emitter during combustion (UN 2019). Instead, demanding reliable and affordable energy, Bangladesh, Pakistan, Germany, and so many more nations have embraced liquefied natural gas (LNG) as a “bridge” fuel to replace coal. This “switch” is taking place despite questions about LNG’s impact on the environment, including the “climate”. This policy, supported by almost all large consultancies, indirectly caused blackouts affecting over 150 million people in Bangladesh in October 2022 (Reuters and Bloomberg).

So, the world is embarking on an expensive venture
to replace as much coal as possible with
more expensive liquefied natural gas (LNG).

On top of that, wind and solar are given preference. For example, the IEA recently confirmed that 2024 marks the first year in which investments in solar outstrip the combined investments in all other power generation technologies. As a result, energy costs go up, dependencies increase, lights go off, and, as per the UN’s IPCC, the “climate gets worse.”

Now imagine what would happen if we truly took into account all environmental and human impacts, both negative and positive, along the entire value chain of energy production, transportation, processing, generation, consumption, and disposal… we would all be surprised! You would look at fossil fuels and certainly nuclear through different eyes. Instead we should simply incentivize resource and energy efficiency, which would truly make a positive difference!

From Schernikau et al. 2022.

5. Conclusion

No matter what your view on climate change is, pricing CO₂ is harmful… why?
Answer: … because pricing one externality but not others leads to economic and environmental distortions…causing human suffering.

That is why, even considering the entire value chain, I do not support any CO₂ pricing. That is why I fight for environmental and economic justice so we can, by avoiding energy starvation and resulting poverty, make a truly positive difference not only for ourselves but also for future generations to come. We need INvestment in, not DIvestment from, 80% of our energy supply to rationalize our energy systems and to allow people and the planet to flourish.

I strongly support increasing adaptation efforts, which have already been successful in drastically reducing the death rate and GDP-adjusted financial damage from natural disasters during the past 100 years (OurWorldInData, Pielke 2022, Economist).


Experimental Proof Nil Warming from GHGs

Thomas Allmendinger is a Swiss physicist educated at ETH Zurich whose practical experience is in the fields of radiology and elementary particle physics.  His complete biography is here.

His independent research and experimental analyses of greenhouse gas (GHG) theory over the last decade led to several published studies, including the latest summation The Real Origin of Climate Change and the Feasibilities of Its Mitigation, 2023, at the Atmospheric and Climate Sciences journal. The paper is a thorough and detailed discussion, of which I provide here a synopsis of his methods, findings and conclusions. Excerpts are in italics with my bolds and added images.

Abstract

The present treatise represents a synopsis of six important previous contributions of the author, concerning atmospheric physics and climate change. Since this issue is influenced by politics like no other, and since the greenhouse doctrine with CO2 as the culprit in climate change is predominant, the respective theory has to be outlined, revealing its flaws and inconsistencies.

But beyond that, the author’s own contributions are the focus and are discussed in depth. The most eminent one concerns the discovery of the absorption of thermal radiation by gases, leading to warming-up, and implying a thermal radiation of gases which depends on their pressure. This delivers the final evidence that trace gases such as CO2 don’t have any influence on the behaviour of the atmosphere, and thus on climate.

But the most useful contribution concerns the method which enables the determination of the solar absorption coefficient βs of coloured opaque plates. It delivers the foundations for modifying materials with respect to their capability of climate mitigation. Thereby, the main influence is due to the colouring, in particular of roofs, which should be painted, preferably light-brown (not white, for aesthetic reasons).

It must be clear that such a drive for brightening-up the World would be the only chance of mitigating the climate, whereas the greenhouse doctrine, related to CO2, has to be abandoned. However, a global climate model with forecasts cannot be aspired to since this problem is too complex, and since several climate zones exist.

Background

The alleged proof for the correctness of this theory was delivered 25 years later by an article in the Scientific American of the year 1982 [4]. Therein, the measurements of C.D. Keeling were reported which had been made at two remote locations, namely at the South Pole and in Hawaii, and according to which a continuous rise of the atmospheric CO2-concentration from 316 to 336 ppm had been detected between the years 1958 and 1978 (cf. Figure 1), suggesting coherence between the CO2 concentration and the average global temperature.

But apart from the fact that these CO2-concentrations are quite minor (400 ppm = 0.04%), and that a constant proportion between the atmospheric CO2-concentration and the average global temperature could not be asserted over a longer period, it should be borne in mind that this conclusion was an analogous one, and not a causal one, since solely a temporal coincidence existed. Rather, other influences could have been effective which happened simultaneously, in particular the increasing urbanisation, influencing the structure and the coloration of large parts of Earth surface.

However, this contingency was, and still is, categorically excluded. Solely the two possibilities are considered as explanation of the climate change: either the anthropogenic influence due to CO2-production, or a natural one which cannot be influenced. A third influence, the one suggested here, namely the one of colours, is a priori excluded, even though nobody denies the influence of colouring on the surface temperature of Earth and the existence of urban heat islands, and although an increase of winds and storms cannot be explained by the greenhouse theory.

However, already well in advance institutions were founded which aimed at mitigating climate change through political measures. Thereby, climate change was equated with the industrial CO2 production, although physical evidence for such a relation was not given. It was just a matter of belief. In this regard, in 1992 the UNFCCC (United Nations Framework Convention on Climate Change) was established, supported by the IPCC (Intergovernmental Panel on Climate Change). Subsequently, side by side with the UN, numerous so-called COPs (Conferences of the Parties) were held: the first one in 1995 in Berlin, the most popular one in 1997 in Kyoto, and the most important one in 2015 in Paris, leading to a climate agreement which was signed by representatives of 195 nations. Thereby, numerous documents were compiled, altogether more than 40,000. But actually these documents didn’t fulfil the standards of scientific publications since they were not peer-reviewed.

Subsequently, intensive research activities emerged, accompanied by a flood of publications, and culminating in several text books. Several climate models were presented with different scenarios and diverging long-term forecasts. Thereby, the fact was disregarded that indeed no global climate exists but solely a plurality of climates, or rather of micro-climates and at best of climate-zones, and that the Latin word “clima” (as well as the English word “clime”) means “region”. Moreover, an average global temperature is not really defined and thus not measurable because the temperature-differences are immense, for instance with regard to the geographic latitude, the altitude, the distinct conditions over sea and over land, and not least between the seasons and between day and night. Moreover, the term “climate” implicates rain and snow as well as winds and storms which, in the long-term, are not foreseeable. In particular, it should be realized that atmospheric processes are energetically determined, whereto the temperature contributes only a part.

2. The Historical Inducement for the Greenhouse Theory and Its Flaws

The scientific literature about the greenhouse theory is so extensive that it is difficult to find a clearly outlined and consistent description. Nevertheless, the publications of James E. Hansen [5] and of V. Ramanathan et al. [6] may be considered as authoritative. Moreover, the textbooks [7] [8] and [9] are worth mentioning. Therein it is assumed that Earth surface, which is heated up by sun irradiation, emits thermal radiation into the atmosphere, warming it up due to heat absorption by “greenhouse gases” such as CO2 and CH4. Thereby, counter-radiation occurs which induces a so-called radiative transfer. This aspect involved the rise of numerous theories (e.g. [10] [11] [12]). But the co-existence of theories is in contrast to the scientific principle that for each phenomenon solely one explanation or theory is admissible.

Already simple thoughts may lead one to question this theory. For instance: Supposing the present CO2-concentration of approx. 400 ppm (parts per million) = 0.04%, one should wonder how the temperature of the atmosphere can depend on such an extremely low gas amount, and why this component can be the predominant or even the sole cause for the atmospheric temperature. This would actually mean that the temperature would be situated near the absolute zero of −273˚C if the air would contain no CO2 or other greenhouse gases.

Indeed, no special physical knowledge is needed in order to realize that this theory cannot be correct. However, the fact that it has settled in the public mind, becoming an important political issue, requires a more detailed investigation of the measuring methods and their results which delivered the foundations of this theory, and why misinterpretations arose. Thereto, the two subsequent points have to be particularly considered: The first point concerns the photometrical measurements on gases in the electromagnetic range of thermal radiation which initially Tyndall had carried out in the 1860s [13], and which had been expanded to IR-measurements evaluated by Plass about ninety years later [14]. The second point concerns the application of the Stefan/Boltzmann-law on the Earth-atmosphere system firstly made by Arrhenius in 1896 [2], and more or less adopted by modern atmospheric physics. Both approaches are deficient and would question the greenhouse theory without requiring the author’s own approaches.

2.1 The Photometric and IR-Measurement Methods for CO2

By variation of the wave length and measuring the respective absorption, the spectrum of a substance can be evaluated. This IR-spectroscopic method is widely used in order to characterize organic chemical substances and chemical bonds, usually in solution. But even there this method is not suited for quantitative measurements, i.e. the absorption of the IR-active substance is not proportional to its concentration as the Beer-Lambert law predicts. This is probably even less the case in the gaseous phase and, all the more, at the high pressures which were applied in order to imitate the large distances in the atmosphere in the range of several (up to 10) kilometres. Thereby it is disregarded that the pressure of the atmosphere depends on the altitude above sea level, which prohibits the assumption of a linear progression.

Moreover, it is disregarded that in IR-spectrographs the effective radiation intensity is not known, and that in the atmosphere a gas mixture exists in which CO2 is present only in small amounts, whereas for the spectroscopic measurements pure CO2 was used. Nevertheless, in the textbooks for atmospheric physics the Beer-Lambert law is frequently mentioned, however without delivering concrete numerical results about the absorbed radiation.
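For reference, the Beer-Lambert law referred to above is conventionally written as (my addition; the excerpts mention the law without stating it)

$$ I \;=\; I_0\,10^{-\varepsilon c \ell}, \qquad A \;=\; \log_{10}\!\frac{I_0}{I} \;=\; \varepsilon\,c\,\ell, $$

where I₀ and I are the incident and transmitted intensities, ε the molar absorptivity, c the concentration, and ℓ the path length. The author’s objection is precisely that measured gas-phase absorption does not in fact scale linearly with concentration under the conditions of interest.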

In both cases solely the absorption degree of the radiation was determined, i.e. the decrease of the radiation intensity as it passes through a gas, but never its heating-up, that is, its temperature increase. Instead, it was assumed that a gas is necessarily warmed up when it absorbs thermal radiation. According to this assumption, pure air, or rather a 4:1 mixture of nitrogen and oxygen, is expected not to be warmed up when it is thermally irradiated since it is IR-spectroscopically inactive, in contrast to pure CO2.

However, no physical formula exists which would allow such an effect to be calculated, and no respective empirical evidence was given so far. Rather, the measurements which were recently performed by the author delivered converse, surprising results.

2.2. The Impact of Solar Radiation onto the Earth Surface and Its Reflexion

Besides, a further error is implicated in the usual greenhouse theory. It results from the fact that the atmosphere is only partly warmed up by direct solar radiation. In addition, it is warmed up indirectly, namely via Earth surface which is warmed up due to solar irradiation, and which transmits the absorbed heat to the atmosphere either by thermal conduction or by thermal radiation. Moreover, air convection contributes a considerable part. This process is called Anthropogenic Heat Flux (AHF). It has recently been discussed by Lindgren [16]. However, herewith a more fundamental view is outlined.

The thermal radiation corresponds to the radiative emission of a so-called “black body”. Such a body is defined as a body which entirely absorbs electromagnetic radiation in the range from IR to UV light. Likewise, it emits electromagnetic radiation all the more as its temperature grows. Its radiative behaviour is formulated by the law of Stefan and Boltzmann. . . According to this law, the radiation wattage Φ of a black body is proportional to the fourth power of its absolute temperature. Usually, this wattage is related to the area, exhibiting the dimension W/m2.

This formula does not allow making a statement about the wave-length or the frequency of the emitted light. This is only possible by means of Max Planck’s formula which was published in 1900. According to that, the frequencies of the emitted light tend to be higher the higher the temperature is. At low temperatures, only heat is emitted, i.e. IR-radiation. At higher temperatures the body begins to glow: first in red, and later in white, a mixture of different colours. Finally, UV-radiation emerges. The emission spectrum of the sun is in quite good accordance with Planck’s emission spectrum for approx. 6000 K.
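For reference, the two radiation laws referred to above are conventionally written as (my addition; the excerpt describes them only in words): the Stefan-Boltzmann law

$$ \Phi \;=\; \sigma\,T^4, \qquad \sigma \approx 5.67\times10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}, $$

and Planck’s law for the spectral radiance

$$ B_\lambda(T) \;=\; \frac{2hc^2}{\lambda^5}\,\frac{1}{e^{hc/(\lambda k_B T)} - 1}. $$

Wien’s displacement law, λ_max = b/T with b ≈ 2898 μm·K, then places the solar emission peak (T ≈ 6000 K) near 0.5 μm in the visible and the terrestrial peak (T ≈ 288 K) near 10 μm in the infrared.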

Black CO2 absorption lines are not to scale.

This model can be applied on Earth surface considering it as a coloured opaque body: On one side, with respect to its thermal emission, it behaves like a black body fulfilling the Stefan/Boltzmann-law. On the other side, it absorbs only a part βs of the incident solar light, converting it into heat, whereas the complementary part is reflected. However, the intensity of the incident solar light on Earth surface, Φ_surface, is not identically equal with its extra-terrestrial intensity beyond the atmosphere, but depends on the altitude above sea level since the atmosphere absorbs a part of the sunlight. Remarkably, the atmosphere behaves like a black body, too, but solely with respect to the emission: On one side, it radiates inwards to the Earth surface, and on the other side, it radiates outwards in the direction of the rest of the atmosphere.

However, this method implies three considerable snags:
•  Firstly, T_Earth means the constant limiting temperature of the Earth surface which would be attained if the sun shone constantly onto the same parcel and with the same intensity. But this is never the case, except at thin plates which are thermally insulated at the bottom and at the sides, since the position of the sun changes permanently.

•  Secondly, this formula does not allow making a statement about the rate of the warming up-process, which depends on the heat capacity of the involved plate, too. This is solely possible using the author’s approach (see Chapter 3). Nevertheless, it is often attempted (e.g. in [35]), not least within radiative transfer approaches.

•  Thirdly, it is principally impossible to determine the absolute values of the solar reflection coefficient αs with an Albedometer or a similar apparatus, because the intensity of the incident solar light is independent of the distance to the surface, whereas the intensity of the reflected light depends on it. Thus, the herewith obtained values depend on the distance from Earth surface where the apparatus is positioned. So they are not unambiguous but only relative.

In the modern approach of Hansen et al. [5] the Earth is apprehended as a coherent black body, disregarding its segmentation into a solid and a gaseous part, and thus disregarding the contact area between Earth surface and the atmosphere where the reflexion of the sunlight takes place. As a consequence, in Equation (4b) the expression with T_air disappears, whereas a total Earth temperature appears which is not definable and not determinable. This approach has been widely adopted in the textbooks, even though it is wrong (see also [15]).

Altogether, it was neglected that the proportionality of the radiation intensity to the fourth power of the absolute temperature is valid only if a constant equilibrium is attained. In contrast, the subsequently described method enables the direct detection of the colour-dependent solar absorption coefficient βs = 1 − αs using well-defined plates. Furthermore, the time/temperature-courses are mathematically modelled up to the limiting temperatures. Finally, relative field measurements are possible based on these results.

3. The Measurement of Solar Absorption-Coefficients with Coloured Plates

In the lab-like method described here and published in [20], not the reflected but the absorbed solar radiation was determined, namely by measuring the temperature courses of coloured square plates (10 × 10 × 2 cm) when sunlight of known intensity came vertically onto these plates. The temperatures of the plates were determined by mercury thermometers, while the intensity of the sunlight was measured by an electronic “Solarmeter” (KIMO SL 100). The plates were embedded in Styrofoam and covered with a thin transparent foil acting as an outer window in order to minimize erratic cooling by atmospheric turbulence (Figure 5). Their heat capacities were taken from literature values. The colours as well as the plate material were varied. Aluminium was used as a reference material, being favourable due to its high heat capacity which entails a low heating rate and a homogeneous heat distribution. For comparison, additional measurements were made with wooden plates, bricks and natural stones. To enable a permanently optimal orientation towards the sun, six plate-modules were positioned on an adjustable panel (Figure 6).

The evaluation of the curves of Figure 7 yielded the colour specific solar absorption-coefficients βs rendered in Figure 9. They were independent of the plate material. Remarkably, the value for green was relatively high.

Figure 7. Warming-up of aluminium plates at 1040 W/m2 [20].

If the sunlight irradiation, and thus the warming-up process, were continued, constant limiting temperatures would finally be attained. However, when 20 mm thick aluminium plates are used, the time needed would be too long, exceeding the constantly available sunshine period during a day. Instead, separate cooling-down experiments were made, allowing a mathematical modelling of the whole process including the determination of the limiting temperatures.

Figure 10. Cooling-down of different materials (in brackets: ambient temperature) [20]. al = aluminium 20 mm; st = stone 20.5 mm; br = brick 14.5 mm; wo = wood 17.5 mm.

These limiting temperature values are in good accordance with the empirical values reported in [24] and with the Stefan/Boltzmann-values. As obvious from the respective diagrams in Figure 11 and Figure 12, the limiting temperatures are independent of the plate-materials, whereas the heating rates strongly depend on them.  In principle, it is also possible to model combined heating-up and cooling-down processes [20]. However, this presumes constant environmental conditions which normally do not exist.
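A minimal sketch, in my own notation rather than the paper’s exact equations, of the kind of lumped energy balance this describes: the plate warms until the absorbed solar power equals the power lost to the surroundings. Linearizing the losses with an overall heat-transfer coefficient B (the author quotes B ≈ 9 W·m−2·K−1 in the absence of wind convection, see the Conclusions), a plate of heat capacity C and irradiated area A obeys roughly

$$ C\,\frac{dT}{dt} \;=\; \beta_s\,\Phi_s\,A \;-\; B\,A\,\bigl(T - T_{\mathrm{amb}}\bigr), \qquad T_{\mathrm{lim}} \;=\; T_{\mathrm{amb}} + \frac{\beta_s\,\Phi_s}{B}, $$

where Φ_s is the incident solar intensity. On this reading the limiting temperature depends only on the colour (through βs) and the irradiance, while the heat capacity C sets how quickly it is approached, which matches the observation that the limiting temperatures were material-independent but the heating rates were not.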

4. Thermal Gas Absorption Measurements

If the warming-up behaviour of gases is to be determined by temperature measurements, interference by the walls of the gas vessel must be taken into account, since they exhibit a significantly higher heat capacity than the gas does, which implies a slower warming-up rate. Since solid materials absorb thermal radiation more strongly than gases do, the risk exists that the walls of the vessel are directly warmed up by the radiation, and that they subsequently transfer the heat to the gas. And finally, even the thin glass walls of the thermometers may disturb the measurements by absorbing thermal radiation.

For these reasons, square tubes with a relatively large cross-section (20 cm) were used, which consisted of 3 cm thick Styrofoam plates and which were covered at the ends by thin plastic foils. In order to measure the temperature course along the tube, mercury thermometers were mounted at three positions (beneath, in the middle, and atop) whose tips were covered with aluminium foils. The test gases were supplied from steel cylinders equipped with reducing valves. They were introduced through a connector over approx. one hour, because the tube was not gastight and not robust enough for evacuation. The filling process was monitored by means of a hygrometer since the air, which had to be replaced, was slightly humid. Afterwards, the tube was optimized by attaching adhesive foils and thin aluminium foils (see Figure 13). The equipment and the results are reported in [21].

Figure 13. Solar-tube, adjustable to the sun [21].

The initial measurements were made outdoors with twin tubes in the presence of solar light. One tube was filled with air, and the other one with carbon dioxide. Thereby, the temperature increased within a few minutes by approx. ten degrees till constant limiting temperatures were attained, namely simultaneously at all positions. Surprisingly, this was the case in both tubes, thus also in the tube which was filled with ambient air. Already this result delivered the proof that the greenhouse theory cannot be true. Moreover, it gave rise to investigating the phenomenon more thoroughly by means of artificial, better-defined light.

Figure 14. Heat-radiation tube with IR-spot [21].

Accordingly, the subsequent experiments were made using IR-spots with wattages of 50 W, 100 W and 150 W which are normally employed for terraria (Figure 14). Particularly the IR-spot with 150 W led to a considerably higher temperature increase of the included gas than was the case when sunlight was applied, since its proportion of thermal radiation was higher. Thereby, variable impacts such as the nature of the gas could be evaluated.

Due to the results with IR-spots at different gases (air, carbon-dioxide, the noble gases argon, neon and helium), essential knowledge could be gained. In each case, the irradiated gas warmed up until a stable limiting temperature was attained. Analogously to the case of irradiated coloured solid plates, the temperature increased until the equilibrium state was attained where the heat absorption rate was identically equal with the heat emission rate.

Figure 15. Time/temperature-curves for different gases [21] (150 W-spot, medium thermometer-position).

As evident from the diagram in Figure 15, the initial observation made with sunlight was confirmed: pure carbon dioxide was warmed up almost to the same degree as air was (whereby ambient air scarcely differs from a 4:1 mixture of nitrogen and oxygen). Moreover, noble gases absorb thermal radiation, too. As subsequently outlined, a theoretical explanation for this could be found.

Interpretation of the Results

Comparison of the results obtained with the IR-spots, on the one hand, and those obtained with solar radiation, on the other hand, corroborated the conclusion that comparatively short-wave IR-radiation was involved (namely between 0.9 and 1.9 μm). However, subsequent measurements with a hotplate (<90˚C), placed at the bottom of the heat-radiation tube ([15], Figure 16), yielded that long-wave thermal radiation (which is expected from bodies with lower temperatures such as Earth surface) also induces a temperature increase of air and of carbon dioxide, cf. Figure 17.

Thus, the herewith discovered absorption effect in gases proceeds over a relatively wide wave-length range, in contrast to the IR-spectroscopic measurements where only narrow absorption bands appear. This effect is not exceptional, i.e. it occurs in all gases, including noble gases, and leads to a significant temperature increase, even though it is spectroscopically not detectable. This temperature increase overlays any temperature increase due to the specific IR-absorption, since the intensity ratio of the latter is very small.

This may be explained as follows: In any case, an oscillation of particles, induced by thermal radiation, plays a part. But whereas in the case of the specific IR-absorption the nuclei inside the molecules are oscillating along the chemical bond (which must be polar), in the relevant case here the electronic shell inside the atoms, or rather the electron orbit, is oscillating, implying oscillation energy. Obviously, this oscillation energy can be converted into kinetic translation energy of the entire atoms, which correlates to the gas temperature, and vice versa.

5. The Altitude-Paradox of the Atmospheric Temperature

The statement that it’s colder in the mountains than in the lowlands is trivial. Not trivial is the attempt to explain this phenomenon since the reason is not readily evident. The usual explanation is given by the fact that rising air cools down since it expands due to the decreasing air-pressure. However, this cannot be true in the case of plateaus, far away from hillsides which engender ascending air streams. It appears virtually paradoxical in view of the fact that the intensity of the sun irradiation is much greater in the mountains than in the lowlands, in particular with respect to its UV-amount. Thereby, the intensity decrease is due to the scattering and the absorption of sunlight within the atmosphere, not only within the IR-range but also in the whole remaining spectral area. If such an absorption, named Rayleigh-scattering, didn’t occur, the sky would not be blue but black.

However, the direct absorption of sunlight is not the only factor which determines the temperature of the atmosphere. Its warming-up via Earth surface, which is warmed up due to absorbed sun-irradiation, is even more important. Thereby, the heat transfer occurs partly by heat conduction and air convection, and partly by thermal radiation. But there is an additional factor which has to be regarded: namely the thermal radiation of the atmosphere. It runs on the one hand towards Earth (as counter-radiation), and on the other hand towards Space. Thus the situation becomes quite complicated, all the more as the formal treatment based on the Stefan/Boltzmann-relation would require limiting, equilibrated temperature conditions. But in particular, that relation does not reveal an influence of the atmospheric pressure, which obviously plays a considerable part.

In order to study the dependency on the atmospheric pressure, it would be desirable to vary solely the pressure while the other terms remain constant. In practice, however, the pressure is varied by varying the altitude of the measuring station above sea level, which implies a variation of the intensity of the sunlight and of the ambient atmosphere temperature, too. The here reported measurements were made at two locations in Switzerland, namely at Glattbrugg (close to Zürich), 430 m above sea level, and at the top of the Furka pass, 2430 m above sea level. Using the barometric height formula, the respective atmospheric pressures were approx. 0.948 and 0.748 bar.  At each position, two measurements were made over the same period of time.
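For reference, the barometric height formula used here is conventionally written (my addition, assuming a standard isothermal scale height; the paper’s exact inputs are not quoted)

$$ p(h) \;=\; p_0\,e^{-h/H}, \qquad H \;=\; \frac{R\,T}{M\,g} \;\approx\; 8.4\ \mathrm{km}\ \ (T \approx 288\ \mathrm{K}), $$

so with p₀ ≈ 1 bar, h = 430 m gives p ≈ 0.95 bar and h = 2430 m gives p ≈ 0.75 bar, consistent with the values quoted above.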

Figure 18. Comparison of the temperature courses during two measurements [24] (continuous lines: Glattbrugg; dotted lines: Furka).

Figure 18 renders the data of one measurement pair. Obviously, the limiting temperatures were not ideally attained within 90 minutes. Moreover, the evaluation of the data didn’t provide strictly invariant values for A. But this is reasonable in view of the fact that the sunlight intensity was not entirely constant during that period, and that its spectrum depends on the altitude over sea level. Nevertheless, for the atmospheric emission constant A an approximate value of 22 W·m−2·bar−1·K−0.5 could be found.

These findings indeed confirm that in a way a greenhouse-effect occurs, since the atmosphere thermally radiates back to Earth surface. But this radiation has  nothing to do with trace gases such as CO2. It rather depends on the atmospheric pressure which diminishes at higher altitudes.
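The units of A, together with the statement in the Conclusions that the emission intensity depends on the pressure and on the square root of the absolute temperature, imply a relation of the form (my reconstruction, not an equation quoted from the paper)

$$ \Phi_{\mathrm{atm}} \;\approx\; A\,p\,\sqrt{T}, $$

which for Glattbrugg-like conditions (p ≈ 0.95 bar, T ≈ 288 K) gives Φ_atm ≈ 22 × 0.95 × 17.0 ≈ 355 W·m−2, and for the Furka conditions (p ≈ 0.75 bar, T ≈ 278 K) about 275 W·m−2; in this picture the back-radiation falls with altitude, as the author argues.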

If the oxygen content of the air were considerably reduced, a general reduction of the atmospheric pressure and, as a consequence, of the temperature would follow. This may be an explanation for the appearance of glacial periods. However, other explanations are possible, in particular a temporary decrease of solar activity.

Overall, it can be stated that climate change cannot be explained by ominous greenhouse gases such as CO2, but mainly by artificial alterations of Earth surface, particularly in urban areas by darkening and by enlargement of the surface (so-called roughness). These urban alterations are not least due to the enormous global population growth, but also to the character of modern buildings tending to get higher and higher, and employing alternative materials such as concrete and glass. As a consequence, the respective measures have to be the focus, first recalling the previous work, and then applying the method presented here.

7. Conclusions

The herewith summarized work of the author concerns atmospheric physics with respect to climate change, comprising three specific and interrelated points based on several previous publications: The first one consists in a critical discussion and refutation of the customary greenhouse theory; the second one outlines the method for measuring the thermal-radiative behaviour of gases; and the third one describes a lab-like method for the characterization of the solar-reflective behaviour of solid opaque bodies, in particular for the determination of the colour-specific solar absorption coefficients.

As to the first point, three main flaws were revealed:

•  Firstly, the insufficiency of photometric methods in order to determine the heating-up of gases in the presence of thermal radiation;
•  Secondly, the lack of a causal relationship between the CO2-concentration in the atmosphere and the average global temperature, the flaw being the reasoning that the empirical simultaneous increase of its concentration and of the global temperature would prove a causal relationship rather than a merely analogous one; and
•  Thirdly, the inadmissible application of the Stefan/Boltzmann-law to the entire Earth (including the atmosphere) versus Space, instead of the application onto the boundary between the Earth surface and the atmosphere.

As to the second point, the discovery has to be taken into account according to which every gas is warmed up when it is thermally irradiated, even noble gases, attaining a limiting temperature where the absorption of radiation is in equilibrium with the emitted radiation. In particular, pure CO2 behaves similarly to pure air. Applying kinetic gas theory, a dependency of the emission intensity on the pressure, on the square root of the absolute temperature, and on the particle size could be found and theoretically explained by oscillation of the electron shell.

As to the third point, not only a lab-like measuring method for the colour-dependent solar absorption coefficient βs was developed, but also a mathematical modelling of the time/temperature-course where coloured opaque plates are irradiated by sunlight. Thereby, the (colour-dependent) warming-up and the (colour-independent) cooling-down are detected separately. Likewise, a limiting temperature occurs where the intensity of the absorbed solar light is identically equal to the intensity of the emitted thermal radiation. In the absence of wind-convection, the so-called heat transfer coefficient B is invariant. Its value was empirically evaluated, amounting to approx. 9 W·m−2·K−1.

Finally, the theoretically suggested dependency of the atmospheric thermal radiation intensity on the atmospheric pressure could be empirically verified by measurements at different altitudes, namely in Glattbrugg (430 m above sea level) and on the top of the Furka pass (2430 m above sea level), both in Switzerland, delivering a so-called atmospheric emission constant A ≈ 22 W·m−2·bar−1·K−0.5. It explained the altitude-paradox of the atmospheric temperature and delivered the definitive evidence that the atmospheric behaviour, and thus the climate, does not depend on trace gases such as CO2. However, the atmosphere does indeed reradiate thermally, leading to something similar to a greenhouse effect. But this effect is solely due to the atmospheric pressure.

Therefore, and also considering the results of Seim and Olsen [23], the customary greenhouse doctrine assuming CO2 as the culprit in climate change has to be abandoned and instead replaced by the here recommended concept of improving the albedo by brightening parts of the Earth surface, particularly in cities, lest fatal consequences be hazarded.

Figure 24. Up-winds induced by an urban heat island.

Polar Bears, Dead Coral and Other Climate Fictions

Bjorn Lomborg calls out climate alarmist nonsense in his WSJ article Polar Bears, Dead Coral and Other Climate Fictions.  Excerpts in italics with my bolds and added images.

Activists’ tales of doom never pan out,
but they leave us poorly informed and feed bad policy.

Whatever happened to polar bears? They used to be all climate campaigners could talk about, but now they’re essentially absent from headlines. Over the past 20 years, climate activists have elevated various stories of climate catastrophe, then quietly dropped them without apology when the opposing evidence becomes overwhelming. The only constant is the scare tactics.

Protesters used to dress up as polar bears. Al Gore’s 2006 film, “An Inconvenient Truth,” depicted a sad cartoon polar bear floating away to its death. The Washington Post warned in 2004 that the species could face extinction, and the World Wildlife Fund’s chief scientist claimed some polar bear populations would be unable to reproduce by 2012.

Then in the 2010s, campaigners stopped talking about them.

After years of misrepresentation, it finally became impossible to ignore the mountain of evidence showing that the global polar-bear population has increased substantially. Whatever negative effect climate change had was swamped by the reduction in hunting of polar bears. The population has risen from around 12,000 in the 1960s to about 26,000.

The same thing has happened with activists’ outcry about Australia’s Great Barrier Reef. For years, they shouted that the reef was being killed off by rising sea temperatures. After a hurricane extensively damaged the reef in 2009, official Australian estimates of the percent of reef covered in coral reached a record low in 2012. The media overflowed with stories about the great reef catastrophe, and scientists predicted the coral cover would be reduced by another half by 2022. The Guardian even published an obituary in 2014.

The percentage of coral cover in the northern and central Great Barrier Reef has increased.(Supplied: Australian Institute of Marine Science)

The latest official statistics show a completely different picture. For the past three years the Great Barrier Reef has had more coral cover than at any point since records began in 1986, with 2024 setting a new record. This good news gets a fraction of the coverage that the panicked predictions did.

More recently, green campaigners were warning that small Pacific islands would drown as sea levels rose. In 2019 United Nations Secretary-General António Guterres flew all the way to Tuvalu, in the South Pacific, for a Time magazine cover shot. Wearing a suit, he stood up to his thighs in the water behind the headline “Our Sinking Planet.” The accompanying article warned the island—and others like it—would be struck “off the map entirely” by rising sea levels.

Hundreds of Pacific Islands are growing, not shrinking. No habitable island got smaller.

About a month ago, the New York Times finally shared what it called “surprising” climate news: Almost all atoll islands are stable or increasing in size. In fact, scientific literature has documented this for more than a decade. While rising sea levels do erode land, additional sand from old coral is washed up on low-lying shores. Extensive studies have long shown this accretion is stronger than climate-caused erosion, meaning the land area of Tuvalu and many other small islands is increasing.

Today, killer heat waves are the new climate horror story. In July President Biden claimed “extreme heat is the No. 1 weather-related killer in the United States.”

He is wrong by a factor of 25. While extreme heat kills nearly 6,000 Americans each year, cold kills 152,000, of which 12,000 die from extreme cold. Even including deaths from moderate heat, the toll comes to less than 10,000. Despite rising temperatures, age-standardized extreme-heat deaths have actually declined in the U.S. by almost 10% a decade and globally by even more, largely because the world is growing more prosperous. That allows more people to afford air-conditioners and other technology that protects them from the heat.

My Mind is Made Up, Don’t Confuse Me with the Facts. H/T Bjorn Lomborg, WUWT

The petrified tone of heat-wave coverage twists policy illogically. Whether from heat or cold, the most sensible way to save people from temperature-related deaths would be to ensure access to cheap, reliable electricity. That way, it wouldn’t be only the rich who could afford to keep safe from blistering or frigid weather. Unfortunately, 

Activists do the world a massive disservice by refusing to acknowledge
facts that challenge their intensely doom-ridden worldview.

There is ample evidence that man-made emissions cause changes in climate, and climate economics generally finds that the costs of these effects outweigh the benefits. But the net result is nowhere near catastrophic. The costs of all the extreme policies campaigners push for are much worse. All told, politicians across the world are now spending more than $2 trillion annually—far more than the estimated cost from climate change that these policies prevent each year.

Yes, those are Trillions of US$ they are projecting to spend.

Scare tactics leave everyone—especially young people—distressed and despondent. Fear leads to poor policy choices that further frustrate the public. And the ever-changing narrative of disasters erodes public trust.

Telling half-truths while piously pretending to “follow the science” benefits activists with their fundraising, generates clicks for media outlets, and helps climate-concerned politicians rally their bases. But it leaves all of us poorly informed and worse off.

Mr. Lomborg is president of the Copenhagen Consensus, a visiting fellow at Stanford University’s Hoover Institution and author of “Best Things First: The 12 Most Efficient Solutions for the World’s Poorest and our Global SDG Promises.”

See Also:

You Won’t Survive “Sustainability” Agenda 2024

Propaganda Machine is Failing, Beware the Jackboots

Matthias Desmet previously alerted us to the appearance of mass formation supporting totalitarian Covid controls.  Now he writes a warning concerning popular rejection of the elite propaganda machine. His Brownstone article is The Propaganda Model Has Limits.  Excerpts in italics with my bolds and added images.  H/T Tyler Durden.

Normally, I let my pen rest during the summer months, but for some things, you set aside your habits. What has been happening in the context of the US presidential elections over the past few weeks is, to say the least, remarkable. We are witnessing a social system that – to use a term from complex dynamic systems theory – is heading toward a catastrophe.

And the essence of the tipping point we are approaching is this:
the propaganda model is beginning to fail.

It started a few weeks ago like this: Trump, the presidential candidate who must not win, is up against Biden, the presidential candidate who must win. After the first debate, it was immediately clear: Trump will win against Biden. The big problem: Biden and Jill are about the only ones who don’t realize this.

The media then turned against Biden. That, in itself, is a revolution. They had praised President Biden to the skies for four years, turning a blind eye to the fact that the man either seemed hardly aware of what he was saying or was giving speeches that could only be described as having the characteristics of a fascist’s discourse.

In any case, the media’s love for Biden was suddenly over when it became clear that he could not possibly win the election, not even with a little help from the media. If you want to know how that ‘little help’ worked in 2020, look at one of the most important interviews of the past year, where Mike Benz – former director of the cyber portfolio of the US government – explains to Tucker Carlson in detail how information flows on the internet were manipulated during the 2020 elections (and the Covid crisis).

After the first debate, the media realized that even they could not help Biden win the election. They changed their approach. Biden was quickly stripped of his saintly status. The Veil of Appearances was pulled away, and he suddenly stood naked and vulnerable in the eye of the mainstream – a man in the autumn of his life, mentally confused, addicted to power, and arrogant. Some journalists even started attributing traits of the Great Narcissistic Monster Trump to him.

Then things took another turn, a turn so predictable that one is astonished that it actually happened. An overaged teenager calmly climbed onto a roof with a sniper rifle, under the watchful eyes of the security services, and nearly shot Trump in the head. The security services, which initially did not respond for minutes when people tried to draw attention to the overaged teenager with an assault rifle, suddenly reacted decisively: they shot the overaged teenager dead seconds after the assassination attempt.

What happened there? There are many reasons to have reservations about Trump, but one thing we cannot help but say: if Trump becomes president, the war in Ukraine will be over. Anyone who does not attribute any weight to that should subject themselves to a conscience examination. And no, Trump will not have to give half of Europe to Putin for that. My cautious estimate, for what it’s worth: It will suffice for NATO to stop and partially reverse its eastward expansion, for Russia to retain access to the Black Sea via Crimea (something everyone with historical awareness knows that denying would mean the death blow to Russia as a great power and thus a direct declaration of war), and for the population of the Russian-speaking part of Ukraine to choose in a referendum whether to belong to Russia or Ukraine.

So, I repeat my point: with Trump, the provocation of Russia stops, and the war in Ukraine ends. Presidents who threaten to end wars are sometimes shot at by lone gunmen. And those lone gunmen are, in turn, shot dead. And the archives about that remarkable act of lone gunmen sometimes remain sealed for a remarkably long time, much longer than they usually do.

The news reaction was predictable. We are used to the media. Some journalists even suggested that Trump had been shot with a paintball, others thought the most accurate way to report was that someone ‘wounded Trump on the ear.’

In any case, after the assassination attempt, the situation became even more dire for the mainstream: the presidential candidate who must not win is now even more popular, and his victory in a race with Biden is almost inevitable.

Then the next chapter begins. Biden suddenly changes his mind: he has come to his senses and drops out of the race. He announces this – of all things – in a letter with a signature that, even for his shaky condition, looked quite clumsy. Then he stayed out of the public eye for a few days. We are curious about what exactly happened there.

But the media are compliant again. Biden has now been sanctified again. Just like Kamala Harris, of course. They are already mentioning polls showing she will beat Trump. With a little help from the media, of course. Curious how this will continue, but I would be surprised if the rest of the campaign will be a walk in the park. Trump is not safe after the first attempt, that’s for sure. And to Kamala Harris, I say this: when totalitarian systems go into a chaotic phase, they become monsters that devour their own children.

Case in Point 1: Orbán: The West Sees Immigration As A “Way Of Getting Rid Of The Ethnic Homogeneity That Is The Basis Of The Nation-State”

 

From Zerohedge:  In Hungarian Prime Minister Viktor Orbán’s speech at Tusványos in Romania, he focuses on the intractable differences developing between the East and West of Europe, with immigration one of the key divisions. He not only rejects the Western view on immigration, but sees it as an agenda with a very specific ideology behind it, which is designed to erode the nation-state entirely.

“But Westerners, quite differently, believe that nation-states no longer exist. They therefore deny that there is a common culture and a public morality based on (the nation-state). There is no public morality, if you watched the Olympic opening yesterday, you saw it. So, they also think differently about migration. They believe that migration is not a threat or a problem, but in fact a way of getting rid of the ethnic homogeneity that is the basis of a nation. This is the essence of the progressive liberal international concept. That is why the absurdity does not occur to them, or they do not see it as absurd,” he said.

This dramatic ideological cleavage is not a “secret,” according to Orbán. He said that the documents and policy papers coming out of the EU show that the “clear aim is to transcend the nation.”

“But the point is that the powers, the sovereignty, should be transferred from the nation-states to Brussels. This is the logic behind all major measures. In their minds, the nation is a historical, or transitional, formation of the 18th and 19th centuries — as it came, so it may go. They are already in a post-national state in the Western half of Europe. It’s not just a politically different situation, but what I’m trying to talk about here is that it’s a new mental space.

And finally, the last element of reality is that this post-national situation that we see in the West has a serious, I would say dramatic, political consequence that is shaking democracy. Societies are increasingly resistant to migration, gender, war and globalism. And this creates the political problem of elites and the people, elitism and populism. This is a dominant phenomenon in Western politics today…This means that the elites condemn the people for drifting to the right. The feelings and ideas of the people are labeled xenophobia, homophobia and nationalism. The people, meanwhile, in response, accuse the elite of not caring about what is important to them, but of sinking into some kind of mindless globalism.

Case in Point 2: UK Prime Minister Calls For Crackdown On Angry Brits – Violent Migrants Get A Pass

 

From Zerohedge: Beyond the obvious Cloward-Piven agenda in play throughout most of Europe and the US, open border policies accomplish much more than simply erasing western culture with third-world migrants. The introduction of violent peoples from violent countries and ideologies is a perfect way to generate public hostility and get people to react in anger. When a government refuses to represent the interests of actual citizens who are under attack by foreign elements, the only avenue left to that populace is self-defense.

Establishment elites understand very well that their malicious activities are going to generate a vengeful response. Their first measure is to shame the public with accusations of “extremism” when the public fights back. When that doesn’t work, the next measure is to use popular riots as an excuse to impose authoritarian controls “in the name of safety.”

UK police and political authorities specifically avoid keeping a record of the migrant status of most perpetrators, making it difficult to directly prove who is responsible for the crime spike.  Correlation is not necessarily causation, but there were no other dramatic changes to UK society during that time period that would account for the crime spike.  The only thing that changed was the type of migrants the UK government was accepting and the amount of people they were allowing in.

Remove the migrants and watch crime numbers plummet; it’s that simple.
The UK public knows it and those in power choose to ignore it.
This has led to predictable conflict.

Keep in mind, leftist protests and riots have been ongoing in the UK for the past year in the name of Gaza, among other causes.  The law enforcement response to these situations has been minimal.  It is clear that there is a grotesque double standard when it comes to conservative or nationalist protests – If you aren’t on Team Progressive, then you aren’t allowed a redress of grievances.  The left is allowed to riot, conservatives are not even allowed to take to the streets.  This is what happened after January 6th in the US and it’s happening right now in the UK.

If the goal is a dystopian nightmare state then the UK is well on its way.  The public has made clear that they will no longer be abused.  The question is, who will blink first – the populace under threat of Orwellian surveillance and lockdowns, or the establishment fearing that the civil unrest they so hoped to trigger might grow out of their control?

Arctic Ice Slight Deficit July 31, 2024

 

The graph above shows July daily ice extents for 2024 compared to 18-year averages, and some years of note.

The black line shows that, on average, Arctic ice extent during July declines by 2.7M km2, down to 6.9M km2 by day 213.  2024 tracked a little higher than the 18-year average early in July, then slipped into deficit over the last 10 days.  SII was close to MASIE early in July, then diverged mid-month, running up to 666k km2 lower, and ended July ~300k km2 below MASIE.  2023 was higher than average, while 2007 ended ~540k km2 in deficit to the average.  2020 ice ended nearly 1 Wadham (1M km2) in deficit.

Why is this important?  All the claims of global climate emergency depend on dangerously higher temperatures, lower sea ice, and rising sea levels.  The lack of additional warming prior to the 2023 El Nino is documented in the post UAH June 2024: Oceans Lead Cool Down.

The lack of acceleration in sea levels along coastlines has been discussed also.  See Observed vs. Imagined Sea Levels 2023 Update.

Also, a longer term perspective is informative:

Post-glacial sea level chart.

The table below shows the distribution of sea ice on day 213 across the Arctic regions: the average, this year, and 2007. At this point in the year, the Bering and Okhotsk seas are open water and are thus dropped from the table.

Region                               2024 Day 213   Day 213 Ave.   2024-Ave.   2007 Day 213   2024-2007
(0) Northern_Hemisphere                   6634637        6882380     -247743        6344860      289777
(1) Beaufort_Sea                           717847         791600      -73754         760576      -42729
(2) Chukchi_Sea                            702720         534093      168628         382350      320370
(3) East_Siberian_Sea                      760894         740772       20122         445385      315509
(4) Laptev_Sea                             223615         368247     -144631         314382      -90767
(5) Kara_Sea                               136159         163171      -27012         239232     -103073
(6) Barents_Sea                               454          31406      -30952          23703      -23249
(7) Greenland_Sea                          237168         294526      -57358         324737      -87570
(8) Baffin_Bay_Gulf_of_St._Lawrence        170245         145062       25183          94179       76066
(9) Canadian_Archipelago                   454695         540106      -85411         510063      -55368
(10) Hudson_Bay                            129434         133138       -3704          93655       35780
(11) Central_Arctic                       3098283        3138628      -40345        3154837      -56554

The overall deficit to average is 248k km2 (about 4%).  The major deficits are in Laptev, Beaufort and CAA (Canadian Archipelago), while Chukchi is the only region with a large surplus.
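For anyone wanting to reproduce these figures, here is a minimal sketch that recomputes the anomalies and the overall deficit directly from the table above (extents in km2; only a few regions are listed, the rest follow the same pattern). It yields the 247,743 km2 Northern Hemisphere deficit, about 3.6% of the average, rounded to 4% above.

```python
# Minimal sketch: recompute the day-213 anomalies and the overall deficit
# from the table above (sea ice extents in km^2, values as listed).
regions = {
    # name: (2024 day 213, day 213 average, 2007 day 213)
    "Northern_Hemisphere": (6634637, 6882380, 6344860),
    "Beaufort_Sea":        (717847,  791600,  760576),
    "Chukchi_Sea":         (702720,  534093,  382350),
    "Laptev_Sea":          (223615,  368247,  314382),
    # ... remaining regions follow the same (2024, average, 2007) layout
}

for name, (y2024, average, y2007) in regions.items():
    print(f"{name}: 2024-Ave = {y2024 - average:+,}   2024-2007 = {y2024 - y2007:+,}")

nh_2024, nh_ave, _ = regions["Northern_Hemisphere"]
deficit = nh_ave - nh_2024
print(f"Overall deficit to average: {deficit:,} km^2 ({100 * deficit / nh_ave:.1f}%)")
```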

Bathymetric map of the Arctic Ocean.

Illustration by Eleanor Lutz shows Earth’s seasonal climate changes. If played in full screen, the four corners present views from top, bottom and sides. It is a visual representation of scientific datasets measuring ice and snow extents.

There is no charge for content on this site, nor for subscribers to receive email notifications of postings.

 

Roots of Climate Change Distortions

Roger Pielke Jr. explains at his blog Why Climate Misinformation Persists.  Excerpts in italics with my bolds and added images. H/T John Ray

Noble Lies, Conventional Wisdom, and Luxury Beliefs

In 2001, I participated in a roundtable discussion hosted at the headquarters of the National Academy of Sciences (NAS) with a group of U.S. Senators, the Secretary of Treasury, and about a half-dozen other researchers. The event was organized by Idaho Senator Larry Craig (R-ID) following the release of a short NAS report on climate to help the then-new administration of George W. Bush get up to speed on climate change.

At the time I was a 32-year-old, fresh-faced researcher about to leave the National Center for Atmospheric Research for a faculty position across town at the University of Colorado. I had never testified before Congress or really had any high-level policy engagement.

When the roundtable was announced, I experienced something completely new in my professional career — several of my much more senior colleagues contacted me to lobby me to downplay or even to misrepresent my research on the roles of climate and society in the economic impacts of extreme weather. I had become fairly well known in the atmospheric sciences research community back then for our work showing that increasing U.S. hurricane damage could be explained entirely by more people and more wealth.

One colleague explained to me that my research, even though scientifically accurate, might distract from efforts to advocate for emissions reductions:

“I think we have a professional (or moral?) obligation to be very careful what we say and how we say it when the stakes are so high.”

At the time, I wrote that the message I heard was that the “ends justify means or, in other words, doing the “right thing” for the wrong reasons is OK” — even if that meant downplaying or even misrepresenting my own research.

I have thought about that experience over the past few weeks as I have received many comments on the first four installments of the THB series Climate Fueled Extreme Weather (Part 1, Part 2, Part 3, Part 4). One of the most common questions I’ve received asks why it is that the scientific assessments of the Intergovernmental Panel on Climate Change (IPCC) are so different than what is reported in the media, proclaimed in policy, and promoted by the most public-facing climate experts. And, why can’t that gap be closed?

Over the past 23 years, I have wondered a lot myself about this question — not just how misinformation arises in policy discourse (we know a lot about that), but why it is that the expert climate community has been unable or unwilling to correct rampant misinformation about extreme weather, with some even promoting that misinformation.

Obviously, I don’t have good answers, but I will propose three inter-related explanations that help me to make sense of these dynamics — the noble lie, conventional wisdom, and luxury beliefs.

The Noble Lie

The most important explanation is that many in the climate community — like my senior colleague back in 2001 — appear to believe that achieving emissions reductions is so very important that its attainment trumps scientific integrity. The ends justify the means. They also believe that by hyping extreme weather, they will make emissions reductions more likely (I disagree, but that is a subject for another post).

I explained this as a “fear factor” in The Climate Fix:

Typically, the battle over climate-change science focused on convincing (or, rather, defeating) those skeptical has meant advocacy focused on increasing alarm. As one Australian academic put it at a conference at Oxford University in the fall of 2009: “The situation is so serious that, although people are afraid, they are not fearful enough given the science. Personally I cannot see any alternative to ramping up the fear factor.” Similarly, when asked how to motivate action on climate change, economist Thomas Schelling replied, “It’s a tough sell. And probably you have to find ways to exaggerate the threat. . . [P]art of me sympathizes with the case for disingenuousness. . . I sometimes wish that we could have, over the next five or ten years, a lot of horrid things happening—you know, like tornadoes in the Midwest and so forth—that would get people very concerned about climate change. But I don’t think that’s going to happen.” From the opening ceremony of the Copenhagen climate negotiations to Al Gore’s documentary, An Inconvenient Truth, to public comments from leading climate scientists, to the echo-chambers of the blogosphere, fear and alarm have been central to advocacy for action on climate change.

It’s just a short path from climate fueled extreme weather
to the noble lie at the heart of fear-based campaigns.

Conventional Wisdom

The phrase was popularized by John Kenneth Galbraith in his 1958 book, The Affluent Society, where he explained:

It will be convenient to have a name for the ideas which are esteemed at any time for their acceptability, and it should be a term that emphasizes this predictability. I shall refer to these ideas henceforth as the conventional wisdom.

The key point here is “acceptability” regardless of an idea’s conformance with truth. Conventional wisdom is that which everyone knows to be true whether actually true or not. Examples of beliefs that at one time or another were/are conventional wisdom include — Covid-19 did not come from a lab, Sudafed helps with hay fever, spinach is high in iron, and climate change is fueling extreme weather.

As the noble lie of climate fueled extreme weather has taken hold as conventional wisdom, few have been willing to offer correctives — Though there are important exceptions out in plain sight, like the IPCC Working Group 1 and the NOAA GFDL Hurricanes and Global Warming page.

Actively challenging conventional wisdom has professional and social costs. For instance, those who suggested that Covid-19 may have had a research-related origin were labeled conspiracy theorists and racists, those who suggested that President Biden was too old to serve another term were called Trump enablers, and those who accurately represent the science of climate and extreme weather are tarred as climate deniers and worse.

A few years ago I wrote an op-ed for a U.S. national newspaper and included a line about how hurricane landfalls had not increased in the U.S. in frequency or in their intensity since at least 1900. The editor removed the sentence and told me that while the statement was correct, most readers wouldn’t believe it and that would compromise the whole piece.

When noble lies become conventional wisdom,
they become harder to correct.

Luxury Beliefs

Yascha Mounk offers a useful definition:

Luxury beliefs are ideas professed by people who would be much less likely to hold them if they were not insulated from, and had therefore failed seriously to consider, their negative effects.

After all, what is the real harm if people believe in climate fueled extreme weather? If people believe that the extreme weather outside their window or on their feeds makes aggressive climate policies more compelling, then that’s a good thing, right?

Surely there can only be positive outcomes that result from promoting misunderstandings of climate and extreme weather at odds with evidence and scientific assessments?

Well, no.

Last year I attended a forum at Lloyd’s of London — the event was held under the Chatham House rule but, as later reported by the Financial Times, the company made a surprising admission contrary to the conventional wisdom (emphasis added):

Lloyd’s of London has warned insurers that the full impact of climate change has yet to translate into claims data despite annual natural catastrophe losses borne by the sector topping $100bn.

Insurance prices are surging as companies look to repair their margins after years of significant losses from severe weather to insured properties, exacerbated by inflation in rebuild costs. A warming planet has been identified by insurance experts and campaigners alike as a key factor.

But at a private event last month, one executive at the corporation that oversees the market told underwriters that it has not yet seen clear evidence that a warming climate is a major driver in claims costs.

It turns out that misunderstandings of the science of climate and extreme weather actually lead to unnecessary costs for people whose jobs involve making decisions about climate and extreme weather. Those unnecessary costs often trickle down to ordinary people. Kudos to Lloyd’s for calling things straight, even if only in a private forum — That is of course their professional responsibility.

Conclusion

In a classic paper in 2012, the late Steve Rayner explained that some forms of social ignorance are created and maintained on purpose:

To make sense of the complexity of the world so that they can act, individuals and institutions need to develop simplified, self-consistent versions of that world. The process of doing so means that much of what is known about the world needs to be excluded from those versions, and in particular that knowledge which is in tension or outright contradiction with those versions must be expunged. This is ‘uncomfortable knowledge’.

Climate fueled extreme weather is perhaps the canonical example
of dysfunctional socially constructed ignorance.  It may be
uncorrectable — a falsehood that persists permanently.

Here at THB, I am an optimist that science and policy are self-correcting, even if that takes a while. That means that the series on climate fueled extreme weather will keep going — Have a great weekend!

Power Density Physics Trump Energy Politics

A plethora of insane energy policy proposals are touted by clueless politicians, including the apparent Democrat candidate for US President.  So all talking heads need reminding of some basics of immutable energy physics.  This post is in service of restoring understanding of fundamentals that cannot be waved away.

The Key to Energy IQ

This brief video provides a key concept in order to think rationally about calls to change society’s energy platform.  Below is a transcript from the closed captions along with some of the video images and others added.

We know what the future of American energy will look like. Solar panels, drawing limitless energy from the sun. Wind turbines harnessing the bounty of nature to power our homes and businesses.  A nation effortlessly meeting all of its energy needs with minimal impact on the environment. We have the motivation, we have the technology. There’s only one problem: the physics.

The history of America is, in many ways, the history of energy. The steam power that revolutionized travel and the shipping of goods. The coal that fueled the railroads and the industrial revolution. The petroleum that helped birth the age of the automobile. And now, if we only have the will, a new era of renewable energy.

Except … it’s a little more complicated than that. It’s not really a matter of will, at least not primarily. There are powerful scientific and economic constraints on where we get our power from. An energy source has to be reliable; you have to know that the lights will go on when you flip the switch. An energy source needs to be affordable–because when energy is expensive…everything else gets more expensive too. And, if you want something to be society’s dominant energy source, it needs to be scalable, able to provide enough power for a whole nation.

Those are all incredibly important considerations, which is one of the reasons it’s so weird that one of the most important concepts we have for judging them … is a thing that most people have never heard of. Ladies and gentlemen, welcome to the exciting world of…power density.

Look, no one said scientists were gonna be great at branding. Put simply, power density is just how much stuff it takes to get your energy; how much land or other physical resources. And we measure it by how many watts you can get per square meter, or liter, or kilogram – which, if you’re like us…probably means nothing to you.

So let’s put this in tangible terms. Just about the worst energy source America has by the standards of power density are biofuels, things like corn-based ethanol. Biofuels only provide less than 3% of America’s energy needs–and yet, because of the amount of corn that has to be grown to produce it … they require more land than every other energy source in the country combined. Lots of resources going in, not much energy coming out–which means they’re never going to be able to be a serious fuel source.
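To see how the arithmetic works, here is a minimal sketch of the power-density calculation: average power delivered divided by the land used to deliver it. The numbers in the example are illustrative placeholders chosen only to show the spread of magnitudes, not figures from the video.

```python
# Minimal sketch of the power-density arithmetic described above: average
# power delivered divided by the land area used to deliver it. The sample
# figures below are illustrative placeholders, not data from the video.

def power_density_w_per_m2(avg_power_mw: float, land_km2: float) -> float:
    """Watts of average output per square metre of land."""
    return (avg_power_mw * 1e6) / (land_km2 * 1e6)

examples = {
    # name: (average output in MW, land footprint in km^2) -- assumed values
    "corn ethanol (assumed)":      (100.0, 2000.0),
    "onshore wind farm (assumed)": (300.0,  150.0),
    "gas-fired plant (assumed)":   (800.0,    1.5),
}

for name, (mw, km2) in examples.items():
    print(f"{name}: ~{power_density_w_per_m2(mw, km2):.2f} W/m^2")
```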

Now, that’s an extreme example, but once you start to see the world in these terms, you start to realize why our choice of energy sources isn’t arbitrary. Coal, for example, is still America’s second largest source of electricity, despite the fact that it’s the dirtiest and most carbon-intensive way to produce it. Why do we still use so much of it? Well, because it’s significantly more affordable…in part because it’s way less resource-intensive.

An energy source like offshore wind, for example, is so dependent on materials like copper and zinc that it would require six times as many mineral resources to produce the same amount of power as coal. And by the way, getting all those minerals out of the ground…itself requires lots and lots of energy.

Now, the good news is that America has actually been cutting way down on its use of coal in recent years, thanks largely to technological breakthroughs that brought us cheap natural gas as a replacement. And because natural gas emits way less carbon than coal, that reduced our carbon emissions from electricity generation by more than 30%.

In fact, the government reports that switching over to natural gas did more than twice as much to cut carbon emissions as renewables did in recent years. Why did natural gas progress so much faster than renewables? It wasn’t an accident.

Energy is a little like money: You’ve gotta spend it to make it. To get usable natural gas, for example, you’ve first gotta drill a well, process and transport the gas, build a power plant, and generate the electricity. But the question is how much energy are you getting back for your investment? With natural gas, you get about 30 times as much power out of the system as you put into creating it.  By contrast, with something like solar power, you only get about 3 1/2 times as much power back.

Replacing the now-closed Indian Point nuclear power plant would require covering all of Albany County, NY, with windmills.

Hard to fuel an entire country that way. And everywhere you look, you see similarly eye-popping numbers. To replace the energy produced by just one oil well in the Permian Basin of Texas–and there are thousands of those–you’d need to build 10 windmills, each about 330 feet high. To meet just 10% of the country’s electricity needs, you’d have to build a wind farm the size of the state of New Hampshire. To get the same amount of power produced by one typical nuclear reactor, you’d need over three million solar panels. None of which means, by the way, that we shouldn’t be using renewables as a part of our energy future.

But it does mean that the dream of using only renewables is going to remain a dream,
at least given the constraints of current technology. We simply don’t know how
to do it while still providing the amount of energy that everyday life requires.

No energy source is ever going to painlessly solve all our problems. It’s always a compromise – which is why it’s so important for us to focus on the best outcomes that are achievable, because otherwise, New Hampshire’s gonna look like this.
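The “wind farm the size of New Hampshire” figure above can be sanity-checked with a back-of-envelope calculation. The annual US generation figure, the ~2 W/m2 farm-level wind power density, and the state area used below are rough assumptions for illustration, not numbers taken from the video.

```python
# Back-of-envelope check of the "wind farm the size of New Hampshire" figure.
# The US generation number and the ~2 W/m^2 wind power density are rough
# assumptions for illustration, not values taken from the video.

US_ELECTRICITY_TWH_PER_YEAR = 4000.0   # assumed annual US generation, TWh
WIND_POWER_DENSITY_W_PER_M2 = 2.0      # assumed farm-level wind power density
NH_AREA_KM2 = 24000.0                  # approximate area of New Hampshire

HOURS_PER_YEAR = 8760.0
target_avg_power_w = 0.10 * US_ELECTRICITY_TWH_PER_YEAR * 1e12 / HOURS_PER_YEAR

land_km2 = (target_avg_power_w / WIND_POWER_DENSITY_W_PER_M2) / 1e6

print(f"Average power for 10% of US electricity: ~{target_avg_power_w / 1e9:.0f} GW")
print(f"Land needed at {WIND_POWER_DENSITY_W_PER_M2} W/m^2: ~{land_km2:,.0f} km^2 "
      f"(~{land_km2 / NH_AREA_KM2:.1f}x New Hampshire)")
```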

Addendum from Michael J. Kelly

Energy return on investment (EROI)

The debate over decarbonization has focussed on technical feasibility and economics. There is one emerging measure that comes closely back to the engineering and the thermodynamics of energy production. The energy return on (energy) investment is a measure of the useful energy produced by a particular power plant divided by the energy needed to build, operate, maintain, and decommission the plant. This is a concept that owes its origin to animal ecology: a cheetah must get more energy from consuming his prey than expended on catching it, otherwise it will die. If the animal is to breed and nurture the next generation then the ratio of energy obtained from energy expended has to be higher, depending on the details of energy expenditure on these other activities. Weißbach et al. have analysed the EROI for a number of forms of energy production and their principal conclusion is that nuclear, hydro-, and gas- and coal-fired power stations have an EROI that is much greater than wind, solar photovoltaic (PV), concentrated solar power in a desert or cultivated biomass: see Fig. 2.

In human terms, with an EROI of 1, we can mine fuel and look at it—we have no energy left over. To get a society that can feed itself and provide a basic educational system we need an EROI of our base-load fuel to be in excess of 5, and for a society with international travel and high culture we need EROI greater than 10. The new renewable energies do not reach this last level when the extra energy costs of overcoming intermittency are added in. In energy terms the current generation of renewable energy technologies alone will not enable a civilized modern society to continue!
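As a minimal sketch of the EROI bookkeeping Kelly describes: divide the useful energy a plant delivers over its life by the energy invested in building, operating, maintaining and decommissioning it, then compare against the thresholds quoted above (roughly 5 for a basic society, roughly 10 for one with high culture). The example ratios echo the approximate 30:1 gas and 3.5:1 solar figures mentioned earlier in this post and are illustrative, not Weißbach et al.’s published values.

```python
# Minimal sketch of the EROI bookkeeping described above:
#   EROI = useful energy delivered over a plant's life / energy invested
#          (build + operate + maintain + decommission).
# The thresholds (~5 for a basic society, ~10 for high culture) come from the
# text; the example ratios echo the rough 30:1 gas and 3.5:1 solar figures
# quoted earlier in this post and are illustrative, not published values.

def eroi(energy_delivered: float, energy_invested: float) -> float:
    return energy_delivered / energy_invested

def societal_verdict(ratio: float) -> str:
    if ratio >= 10:
        return "can support an advanced industrial society"
    if ratio >= 5:
        return "can support only a basic society"
    return "below the viability threshold"

examples = {
    "natural gas (illustrative)": (30.0, 1.0),
    "solar PV (illustrative)":    (3.5, 1.0),
}

for name, (out_energy, in_energy) in examples.items():
    r = eroi(out_energy, in_energy)
    print(f"{name}: EROI ~ {r:.1f} -> {societal_verdict(r)}")
```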

On Energy Transitions

Postscript

Fantasies of Clever Climate Policies

Chris Kenny writes at The Australian Facts at a premium in blustery climate debate. Excerpts in italics from text provided by John Ray at his blog, Greenie Watch.  My bolds and added images.

Collective Idiocy From Intellectual Vanity

We think we are so clever. The conceit of contemporary humankind is often unbearable.  Yet this modern self-regard has generated a collective idiocy, an inane confusion between feelings and facts, and an inability to distinguish between noble aims and hard reality.

This preference for virtue signalling over practical action can be explained only by intellectual vanity, a smugness that over-estimates humankind’s ability to shape the world it inhabits.

As a result we have a tendency to believe we are masters of the universe, that we can control the climate and regulate natural disasters. Too lazy or spoiled to weigh facts and think things through, we are more susceptible than ever to mass delusion.

We have seen this tendency play out in deeply worrying ways, such as the irrational belief in the communal benefits of Covid vaccination despite the distinct lack of scientific evidence. Too many people just wanted to believe the vaccine had this thing beaten.

Still, there is no area of public debate where rational thought is more readily cast aside than in the climate and energy debate. This is where alarmists demand that people “follow the science” while they deploy rhetoric, scare campaigns and policies that turn reality and science on their heads.

This nonsense is so widespread and amplified by so many authoritative figures that we have become inured to it. Teachers and children break from school to draw attention to what the UN calls a “climate emergency” as the world lives through its most populous and prosperous period in history, when people are shielded from the ill-effects of weather events better than they ever have been previously.

Politicians tell us in the same breath that producing clean energy is the most urgent and important task for the planet and reject nuclear energy, the only reliable form of emissions-free energy. The activists argue that reducing emissions is so imperative it is worth lowering living standards, alienating farmland, scarring forests and destroying industries, but it is not worth the challenge of boiling water to create energy-generating steam by using the tried and tested technology of nuclear fission.

Our acceptance of idiocy, unchecked and unchallenged, struck me in one interview this week given by teal MP Zali Steggall. In many ways it was an unexceptional interview; there are politicians and activists saying this sort of thing every day somewhere, usually unchallenged.

Steggall was preoccupied with Australia’s emissions reduction targets. “If we are going to be aligned to a science-based target and keep temperatures as close to 1.5 degrees as we can, we must have a minimum reduction of 75 per cent by 2035 as an interim target,” she said.

Steggall then patronised her audience by comparing meeting emissions targets to paying down a mortgage. The claim about controlling global temperatures is hard to take seriously, but to be fair it is merely aping the lines of the UN, which argues the increase in global average temperatures can be held to 1.5 degrees with emissions reductions of that size – globally.

We could talk all day about the imprecise nature of these calculations, the contested scientific debate about the role of other natural variabilities in climate, and the presumption that humankind, through policy imposed by a supranational authority, can control global climate as if with a thermostat. The simplistic relaying of this agenda as central to Australian policy decisions was not the worst aspect of Steggall’s presentation.

“The Coalition has no policy, so let’s be really clear, they are taking Australia out of the Paris Agreement if they fail to nominate an improvement with a 2035 target,” Steggall lectured, disingenuously.

This was Steggall promulgating the central lie of the national climate debate: that Australia’s emissions reduction policies can alter the climate. It is a fallacy embraced and advocated by Labor, the Greens and the teals, and which the Coalition is loath to challenge for fear of being tagged into a “climate denialism” argument.

It is arrant nonsense to suggest our policies can have any discernible effect on the climate or “climate risk”. Any politician suggesting so, directly or by implication, is part of a contemporary, fake-news-driven dumbing down of the public square, and is injecting an urgency into our policy considerations that is already hurting citizens with high electricity prices, diminished reliability and a damaged economy.

Steggall went on to claim we were feeling the consequences of global warming already. “And for people wondering ‘How does that affect me?’, just look at your insurance premiums, our insurance premiums around Australia are going through the roof,” she extrapolated, claiming insurance costs were keeping people out of home ownership. “This is not a problem for the future,” Steggall stressed, “it is a problem for now.”

It is a problem all right – it is unmitigated garbage masquerading as a policy debate. Taking it to its logical conclusion, Steggall claims if Australia reduced its emissions further we would lower the risk of natural disasters, leading to lower insurance premiums and improved housing affordability – it is surprising that world peace did not get a mention.

Mind you, these activists do like to talk about global warming as a security issue. They will say anything that heightens fears, escalates the problem and supports their push for more radical deindustrialisation.

Our national contribution to global emissions
is now just over 1 per cent and shrinking.

Australia’s annual emissions total less than 400 megatonnes while China’s are rising by more than that total each year and are now at 10,700Mt or about 30 times Australia’s. While our emissions reduce, global emissions are increasing. We could shut down our country, eliminating our emissions completely, and China’s increase would replace ours in less than a year.

So, whatever we are doing, it is not changing and cannot change the global climate. Our national chief scientist, Alan Finkel, clearly admitted this point in 2018, even though he was embarrassed by its implications in the political debate. Yet the pretence continues.

And before critics suggest I am arguing for inaction, I am not. But clearly, the logical and sensible baseline for our policy consideration should be a recognition that our national actions cannot change the weather. Therefore we should carefully consider adaptation to measured and verified climate change, while we involve ourselves as a responsible nation in global negotiations and action.

Obviously, we should not be leading that action but acting cautiously to protect our own interests and prosperity.

It is madness for us to undermine our cheap energy advantage to embark on a renewables-plus-storage experiment that no other country has dared to even try, when we know it cannot shift the global climate one iota. It is all pain for no gain.

Yet that is what this nation has done. So my question today is what has happened to our media, academia, political class and wider population so that it allows this debate and policy response to occur in a manner that is so divorced from reality?

Are we so complacent and overindulged that we accept post-rational debate to address our post-material concerns? Even when it is delivering material hardship to so many Australians and jeopardising our long-term economic security?

Should public debate accept absurd baseline propositions such as the idea that our energy transition sacrifice will improve the weather and reduce natural disasters, simply because they are being argued by major political groupings or the UN? Or should we not try to impose a dose of reality and stick to the facts?

This feebleness of our public debate has telling ramifications – there is no way this country could have embarked on the risky, expensive and doomed renewables-plus-storage experiment if policies and prognostications had been subject to proper scrutiny and debate.

Our media is now so polarised that the climate activists of Labor, the Greens and the teals are able to ensure their nonsensical advocacy is never challenged, and the green-left media, led by the publicly funded ABC, leads the charge in spreading misinformation.

Clearly, we are not as clever as we think. Our children need us to wise up.