Put Climate Insanity Behind Us

Conrad Black writes at National Post, "Time for the climate insanity to stop." Excerpts in italics with my bolds and added images.

We have been racing to destroy our standard of living
to avert a crisis that never materialized

We must by now be getting reasonably close to the point where there is a consensus for re-examining the issue of climate change and related subjects. For decades, those of us who had our doubts were effectively shut down by the endless deafening repetition, as if from the massed choir of an operatic catechism school, of the alleged truism: “98 per cent of scientists agree …” (that the world is coming to an end in a few years if we don’t abolish the combustion engine). Decades have gone by in which the polar bears were supposed to become extinct because of the vanishing polar ice cap, the glaciers were supposed to have melted in the rising heat and the impact of melting ice would raise ocean levels to the point that Pacific islands, such as former U.S. vice-president Al Gore’s oratorical dreamworld, the island state of Tuvalu, would only be accessible to snorkelers. There has been no progress toward any of this. Ocean levels have not risen appreciably, nothing has been submerged and the polar bear population has risen substantially.

A large part of the problem has been the fanaticism of the alarmist forces. This has not been one of those issues where people may equably disagree. There was a spontaneous campaign to denigrate those of us who were opposed to taking drastic and extremely expensive economic steps to reduce carbon emissions on the basis of existing evidence: we could not be tolerated as potentially sensible doubters; we were labelled “deniers,” a reference to Holocaust-deniers who would sweep evidence of horrible atrocities under the rug. For our own corrupt or perverse motives, we were promoting the destruction of the world and unimaginable human misery. There has been climate hysteria like other panics in history, such as those recounted in Charles Mackay’s “Extraordinary Popular Delusions and the Madness of Crowds,” particularly the 1630s tulip mania, in which a single tulip bulb briefly sold for the current equivalent of $25,000.

In western Europe, and particularly in the United States, where the full panic of climate change prevailed, the agrarian and working echelons of society have rebelled against the onerous financial penalties of the war on carbon emissions. There have been movements in some countries to suppress the population of cows because of the impact of their flatulence on the composition of the atmosphere. This has created an alliance of convenience between the environmental extremists and the dietary authoritarians as they take dead aim at the joint targets of carbon emissions and obesity. Germany, which should be the most powerful and exemplary of Europe’s nations, has blundered headlong into the climate crisis by conceding political power to militant Greens. It has shut down its advanced and completely safe nuclear power program, the ultimate efficient fuel, and has flirted with abolishing leisure automobile drives on the weekends.

Claims that tropical storms have become more frequent are rebutted by meticulously recorded statistics. Claims that forest fires are more frequent and extensive have also been shown not to be true. My own analysis, which is based on observations and makes no pretense to scientific research, as I have had occasion to express here before, is that the honourable, if often tiresome, conservation movement, the zealots of Greenpeace and the Sierra Club, were suddenly displaced as organizers and leaders of the environmental movement by the international left, which was routed in the Cold War. Their western sympathizers demonstrated a genius for improvisation that none of us who knew them in the Cold War would have imagined that they possessed, and they took over the environmental bandwagon and converted it into a battering ram against capitalism in the name of saving the planet.

Everyone dislikes pollution and wants the cleanest air and water possible. All conscientious people want the cleanest environment that’s economically feasible. We should also aspire to the highest attainable level of accurate information before we embark on, or go any further with, drastic and hideously expensive methods of replacing fossil fuels. Large-scale disruptions to our ways of life at immense cost to consumers and taxpayers, mainly borne by those who can least easily afford it, are a mistake. We can all excuse zeal in a sincerely embraced cause, but it is time to de-escalate this discussion from its long intemperate nature of hurling thunderbolts back and forth, and instead focus on serious research that will furnish a genuine consensus. I think this was essentially what former prime minister Stephen Harper and former environment minister John Baird were advocating in what they called a “Canadian solution” to the climate question. Since then, our policy has been fabricated by fanatics, including the prime minister, who do not wish to be confused by the facts. The inconvenient truth is now the truth that inconveniences them.

Western Europe has effectively abandoned its net-zero carbon emission goals; the world is not deteriorating remotely as quickly as Al Gore, King Charles, Tony Blair and the Liberal Party of Canada predicted. Some of the largest polluters — China, India and Russia — do not seem to care about any of this. Canada should lead the world toward a rational consensus with intensified research aiming at finding an appropriate response to the challenge. What we have had is faddishness and public frenzy. Historians will wonder why the West made war on its own standard of living in pursuit of a wild fantasy, and no immediate chance of accomplishing anything useful. We have been cheered on by the under-developed world because they seek reparations from the advanced countries, although some of them are among the worst climate offenders. It is insane. Canada should help lead the patient back to sanity.

Postscript:

So to be more constructive, let’s consider what should be proposed by political leaders regarding climate, energy and the environment.  IMO these should be the pillars:

♦  Climate change is real, but not an emergency.

♦  We must use our time to adapt to future climate extremes.

♦  We must transition to a diversified energy platform.

♦  We must safeguard our air and water from industrial pollutants.

A Rational Climate Policy

This is your brain on climate alarm. Just say NO!

Scottish Wind Power from Diesel Generators

John Gideon Hartnett writes at Spectator Australia, "Another climate myth busted," explaining how the Scottish public was scammed about their virtuous green wind power by the public authority. Excerpts in italics with my bolds and added images from a post by John Ray at his blog.

What I like to call ‘climate cult’ wind farms expose the myth that wind can replace hydrocarbon fuels for power generation. The following story is typical of the problems associated with using wind turbines to generate electricity in a cold environment.

Apparently, diesel-fuelled generators are being used to power some wind turbines as a way of de-icing them in cold weather, that is, to keep them rotating. Also, it appears that the wind turbines have been drawing electric power directly from the grid instead of supplying it to the grid.

Scotland’s wind turbines have been secretly using fossil fuels.

The revelation is now fueling environmental, health and safety concerns, especially since the diesel-generated turbines were running for up to six hours a day.

Scottish Power said the company was forced to hook up 71 windmills to the fossil fuel supply after a fault on its grid. The move was an attempt to keep the turbines warm and working during the cold month of December.

South Scotland Labor MSP Colin Smyth said regardless of the reasons, using diesel to deice faulty turbines is “environmental madness”.  Source: Straight Arrow News

Charging system for Teslas at Davos WEF meeting.

Nevertheless, those pushing these technologies are so blind to the physical realities of the world that they are prepared to ignore failures while pretending to efficiently generate electricity. I say ‘failures’ because wind energy was put out of service the day the Industrial Revolution was fired up (pun intended) with carbon-based fuels from petroleum and coal. And modern wind turbines do not always generate electricity; they sometimes consume it from the grid.

Green energy needs the hydrocarbon-based fuel it claims to replace.

Hydrocarbon-based fuels were provided providentially by the Creator of this planet for our use. That includes coal, which has been demonised in the Western press as some sort of evil. But those who run that line must have forgotten to tell China, because they build two coal-fired power stations every other week. No other source of non-nuclear power is as reliable for baseload generation.

How will wind turbines work in a globally cooling climate as Earth heads into a grand solar minimum and temperatures plummet? This case from Scotland may give us a hint. As cloud cover increases with cooler weather, and more precipitation occurs, how will solar perform? It won’t.

The two worst choices for electricity generation in cold, wet, and stormy environments are solar and wind. Solar is obvious. No sun means no power generation. But you might think wind is a much better choice under those conditions.

However, wind turbine rotors have to be shut down if the wind becomes too strong and/or changes rapidly in strength. They are shut down when too much ice forms or when there is insufficient wind. And now we have learned that in Scotland they just turn on the diesel generators when that happens, or they draw power directly from the grid.

Where are all the real engineers? Were they fired?

Regarding wind turbines going forward: once their presence in the market has destroyed all the coal and natural gas electricity generators, how are they going to keep the rotors turning and the lights on?

These devices are based on a rotating shaft with a massive bearing that suffers large frictional forces. Only a high-quality heavy-duty oil can lubricate such a system, and I am sure it would need to be regularly replaced.

Wind turbine gearbox accelerates the blade rotor (left) up to 1800 rpm output to the electrical generator (right)

Massive amounts of carbon-based oil are needed for the lubrication of all gears and bearings in a wind turbine system, which is mechanical in its nature. In 2019, wind turbine applications were estimated to consume around 80 per cent of the total supply of synthetic lubricants. Synthetic lubricants are manufactured using chemically modified petroleum components rather than whole crude oil. These are used in the wind turbine gearboxes, generator bearings, and open gear systems such as pitch and yaw gears.

Now we also know that icing causes the rotors to stop turning so diesel power has to be used to keep the bearings warm during cold weather. The diesel generator is needed to get the blades turning on start-up to overcome the limiting friction of the bearing or when the speed of the rotor drops too low.

In this case in Scotland, 71 windmills on the farm were supplied with diesel power. Each windmill has its own diesel generator. Just think of that.

What about the manufacturing of these windmills?

The blades are made from tons of fibreglass. Manufacturing fibreglass requires the mining of silica sand, limestone, kaolin clay, dolomite, and other minerals, which requires diesel-driven machines. These minerals are melted in a furnace at high temperatures (around 1,400°C) to produce the glass. Where does that heat come from? Not solar or wind power, that is for sure. The resin in the fibreglass comes from alcohol- or petroleum-based manufacturing processes.

The metal structure is made from steel, which requires tons of coking coal (carbon) essential to make pig iron; pig iron is made from iron ore in a blast furnace at temperatures up to 2,200°C. The coal and iron ore are mined from the ground with giant diesel-powered machines and trucks. The steel is made from pig iron with added carbon in another furnace powered by massive electric currents. Carbon is a critical element in steelmaking, as it reacts with iron to form the desired steel alloy. None of this comes from wind and solar power.
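For readers who want the chemistry behind the claim that carbon is essential here, these are the standard blast-furnace reduction reactions (textbook chemistry, not taken from Hartnett's post):

```latex
% Blast-furnace reduction of iron ore: coke burns to CO2, which is
% reduced back to CO by more coke, and the CO reduces the ore to iron.
\begin{align}
  \mathrm{C} + \mathrm{O_2} &\rightarrow \mathrm{CO_2} \\
  \mathrm{CO_2} + \mathrm{C} &\rightarrow 2\,\mathrm{CO} \\
  \mathrm{Fe_2O_3} + 3\,\mathrm{CO} &\rightarrow 2\,\mathrm{Fe} + 3\,\mathrm{CO_2}
\end{align}
```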

Wind turbine power generation is inherently intermittent and unreliable.
It can hardly be called green, as the wind turbines require enormous
amounts of hydrocarbons in their manufacture and continued operation.

Biden Climate Policies the Greatest Financial Risk

Will Hild writes at Real Clear Policy, "The Biden Administration Proves Itself Wrong on ‘Climate Risk’." The report shows how the feds’ own numbers prove their net zero policies pose a far greater financial risk than the climate itself. Excerpts in italics with my bolds and added images.

Environmental activists and left-leaning political bodies have long argued that climate risk is a form of financial risk, constantly pressuring blue states and the Biden Administration to make the issue a central part of their agendas. Recently, a group of those states has started suing oil and gas companies directly in their state courts. California, for example, claims that oil companies have colluded for decades to keep clean energy unavailable. Such lawsuits are ludicrous, and my organization, Consumers’ Research, filed an amicus brief at the Supreme Court supporting an effort to stop this litigation abuse that drives up consumer costs.

These harmful suits are driven in part by the idea that climate risks are inherently financial risks. However, evidence from the Biden Administration’s own study on climate risk shows that these risks are grossly exaggerated and immaterial. In an attempt to justify its radical and onerous climate policies, the Administration inadvertently exposed the fraud behind the environmental movement.

Soon after taking office, President Biden issued an executive order asking executive agencies to assess “the climate-related financial risk, including both physical and transition risks, to … the stability of the U.S. financial system.” Agency officials quickly responded to the order. Treasury Secretary Yellen announced that “climate change is an emerging and increasing threat to U.S. financial stability.” The FDIC declared that climate risk endangered the banking system. The SEC issued controversial climate risk disclosure rules, which impose massive regulatory burdens on Americans.

The problem with these new rules is that the Administration lacked
sufficient evidence to show that “climate-related financial risk” existed.

The SEC and Secretary Yellen relied on a Biden Administration report issued in 2021 by the Financial Stability Oversight Council. However, the report itself admitted that there were “gaps” in the evidence needed to support its speculative assertion that climate change would “likely” present shocks to the financial system. 

John H. Cochrane, a respected Stanford professor, slammed the Biden Administration’s “climate-related financial risk” assertions and highlighted that the Administration has not shown any serious threat to the financial system. “Financial regulators may only act if they think financial stability is at risk,” but “there is absolutely nothing in even the most extreme scientific speculations” to support the type of risk that would allow financial regulators to intervene. [See my synopsis Financial Systems Have Little Risk from Climate]

In response to criticism, the Biden Administration came up with an idea to manufacture its own evidence: if the Federal Reserve created scenarios in which banks must simulate extreme “physical” and “transition” climate risks, these custom-designed scenarios could show a large impact on the financial system, just like federal “stress tests” for banks.

To ensure that the “stresses” were sufficiently severe,
the Biden Administration designed the scenarios
to produce as much stress as possible. 

For example, for “physical risk,” major banks had to simulate the effect of a storm-of-two-centuries-sized hurricane smashing into the heavily populated Northeast United States with no insurance coverage available to pay for the damage.  For “transition risk,” the government demanded a simulation in which “stringent climate policies are introduced immediately,” without any chance for banks to prepare for such policies, along with rapidly rising carbon prices. 

Despite these attempts to make the climate risk as extreme as possible, the tests utterly failed to demonstrate any significant effect.  The Administration’s study demonstrated that even under some of the most extreme climate scenarios imaginable, the probability of default on loans only increased by half a percentage point or less.   In contrast, federal bank stress tests involving true financial stresses, such as a severe recession, have resulted in probabilities of default jumping by 20 to 40 times that amount or more, leading to hundreds of billions in losses. 
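To see the scale gap in those figures, here is an illustrative calculation. The $10 trillion loan book and 40 per cent loss-given-default are my round-number assumptions, not figures from the Fed report; only the probability-of-default uplifts come from the article.

```python
# Illustrative arithmetic only: scales the article's probability-of-default
# (PD) uplifts against a hypothetical loan book. LOAN_BOOK and LGD are
# assumed round numbers, not figures from the report.
LOAN_BOOK = 10e12   # hypothetical aggregate bank loan exposure, dollars
LGD = 0.40          # assumed loss-given-default fraction

scenarios = {
    "climate scenario": 0.005,        # "half a percentage point or less"
    "severe recession": 0.005 * 30,   # "20 to 40 times that amount" (midpoint)
}

for name, pd_uplift in scenarios.items():
    extra_losses = LOAN_BOOK * pd_uplift * LGD
    print(f"{name}: extra expected losses ~ ${extra_losses / 1e9:,.0f} billion")
```

Under these assumptions the climate scenario adds on the order of $20 billion in expected losses, while the recession scenario adds roughly $600 billion, consistent with the "hundreds of billions" the article cites.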

The climate analyses also revealed the expected costs
of the Biden Administration’s quixotic net zero quest. 

The Biden Administration employed scenarios from the Network of Central Banks and Supervisors for Greening the Financial System (NGFS). Those climate scenarios envision the cost of carbon emissions steadily rising and reaching over $400 per ton by 2050.  Given that the average American emits about 16 tons of carbon a year, the Biden Administration’s hand-picked climate scenarios would cost the average American around $125k between now and 2050 in government-mandated carbon fees.
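A back-of-envelope check on that figure: the NGFS paths are not linear, so the sketch below approximates them as a linear ramp from an assumed $100/ton today to the article's $400/ton endpoint; both the start price and the linear shape are my simplifications, so treat the output as an order-of-magnitude estimate.

```python
# Back-of-envelope check of the ~$125k claim under an assumed linear
# carbon-price ramp. START_PRICE and the linear path are assumptions;
# the 2050 endpoint and per-capita emissions come from the article.
TONS_PER_YEAR = 16                       # average American's emissions
START_YEAR, END_YEAR = 2024, 2050
START_PRICE, END_PRICE = 100.0, 400.0    # $/ton

total = 0.0
for year in range(START_YEAR, END_YEAR + 1):
    frac = (year - START_YEAR) / (END_YEAR - START_YEAR)
    price = START_PRICE + frac * (END_PRICE - START_PRICE)
    total += price * TONS_PER_YEAR

print(f"Cumulative carbon fees per person: ${total:,.0f}")
```

With these assumptions the total comes to roughly $108,000, the same ballpark as the article's ~$125k; steeper early price rises in the actual NGFS scenarios would push the figure higher.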

Thus, the Biden Administration’s own bank stress test proved that climate risk is not a material financial risk, and that the biggest financial risk at issue is that the Administration’s net-zero policies would result in massive financial losses for everyday Americans. The Biden Administration should stop using lies to support their burdensome policies, and blue states should drop their punitive lawsuits against oil and gas companies.  Otherwise, the result of both efforts will be to inflict high costs on everyday Americans without any benefit.

Ruinous Folly of CO2 Pricing

Dr. Lars Schernikau is an energy economist explaining why CO2 pricing (also falsely called “carbon pricing”) is a terrible idea fit only for discarding. His blog article is "The Dilemma of Pricing CO2." Excerpts below in italics with my bolds and added images.

1. Understanding CO₂ pricing
2. Economic and environmental impact
3. Global economic impact
4. Alternative solutions
5. Conclusion
6. Additional comments and notes
7. References

Introduction

As an energy economist I am confronted daily with questions about the “energy transition” away from conventional fuels. As we know, the discussion about the “energy transition” stems from concerns about climatic changes.

The source of climatic changes is a widely discussed issue, with numerous policies and measures proposed to mitigate its impact. One such measure is the current and future pricing of carbon dioxide (CO₂) emissions. The logic followed is that if human CO₂ emissions are reduced, future global temperatures will be measurably lower, extreme weather events will be reduced, and sea levels will rise less or stop rising altogether.

Although intended to reduce greenhouse gases, this approach has sparked considerable debate. In this blog post I discuss the controversial topic of CO₂ pricing, examining its economic and environmental ramifications.

However, this article is not about the causes of climatic changes, nor is it about the negative or positive effects of a warming planet and higher atmospheric CO₂ concentrations. It is also not about the scientifically undisputed fact that we don’t know how much warming CO₂ causes (a list of recent academic research on CO₂’s climate sensitivity can be found at the end of this blog).

Nor do I unpack the undisputed and IPCC-confirmed fact that each additional ton of CO₂ in the atmosphere has less warming effect than the previous ton, since the climate sensitivity of CO₂ is a logarithmic function, irrespective of us not knowing what that climate sensitivity is. I also don’t discuss the NASA-satellite-confirmed greening of the world over the past decades, partially driven by higher atmospheric CO₂ concentrations (see sources including Chen et al. 2024).

Instead, this blog post is about the environmental and economic “sense”, or lack thereof, of pricing CO₂ emissions as currently practiced in most OECD countries and increasingly seen in developing nations. It is about the “non-sense” of measuring practically all human activity with a “CO₂ footprint”, often mistakenly called “carbon footprint”, and having nearly every organization set claims for current or future “Net-Zero” (Figure 1).

1. Understanding CO₂ Pricing

CO₂ pricing aims to internalize the external costs of CO₂ emissions, thereby encouraging businesses and individuals to reduce their “carbon footprint”. The concept is straightforward: by assigning a cost to CO₂ emissions, it becomes financially advantageous to emit less CO₂.

However, this simplistic view overlooks
significant complexities and unintended consequences.

Our entire existence is based on drawing from nature (“renewable” or not), so the “Net-Zero” discussion ignores a fundamental requirement for our survival. I agree that it should be our aim to reduce the environmental footprint as much as possible but only if our lives, health, and wealth don’t deteriorate as a result.

Now, I am sure some readers and many “activists” may disagree, which I respect but, at a global level, find unrealistic. However, I would assume that most agree that no one’s life ought to be harmed or shortened for the sake of reducing the environmental impact made. Otherwise, there is little room for a conversation.

BloombergNEF’s “New Energy Outlook” from May 2024 should perhaps be called “CO₂ Outlook”, as there is little to be found about energy and its economics; rather, it is all about CO₂ emissions and the so-called “Net-Zero” (Figure 1), in line with the media, government, and educational focus primarily on carbon dioxide emissions.

2. Economic and Environmental Impacts

One of the primary criticisms of CO₂ pricing is that it addresses only one environmental externality while ignoring others. This narrow focus can lead to economic distortions, as it fails to account for the full spectrum of environmental and social impacts. For instance, while CO₂ pricing might reduce emissions, it can also drive up energy costs, disproportionately affecting lower-income populations and hindering economic development in lesser developed countries.

It is by now undisputed amongst energy economists that large-scale “Net-Zero” intermittent and unpredictable wind and solar power generation increases the total or “full” cost of electricity, primarily because of low energy density, intermittency, inherent net energy and raw material inefficiency, mounting integration costs for power grids, and the need for a drastically overbuilt installation system plus an overbuilt backup/storage system to cover the intermittency.
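To make the structure of that "full cost" argument concrete, here is a toy cost stack. Every number is a made-up placeholder chosen only to show how plant-gate cost and system cost can diverge; none of them comes from Schernikau's post.

```python
# Toy illustration of "full cost" accounting for intermittent generation:
# system cost = plant-gate cost scaled by overbuild, plus backup/storage
# and grid integration. All values are hypothetical placeholders.
lcoe_plant_gate = 40.0    # $/MWh at the plant (hypothetical)
overbuild_factor = 2.0    # extra capacity to cover low-output periods
backup_storage = 25.0     # $/MWh equivalent for backup/storage (hypothetical)
grid_integration = 15.0   # $/MWh for added transmission (hypothetical)

full_cost = lcoe_plant_gate * overbuild_factor + backup_storage + grid_integration
print(f"Plant-gate: ${lcoe_plant_gate:.0f}/MWh -> full system: ${full_cost:.0f}/MWh")
```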

CO₂ pricing can also result in environmental trade-offs. For example, the shift towards “renewable” energy sources like wind and solar, incentivized by CO₂ pricing, has its own set of environmental impacts, including land use, resource extraction, energy footprint, and energy storage challenges.

When BloombergNEF (Figure 1) displays
how clean power and electrification
will directly reduce CO₂ emissions to zero,
then they are clearly mistaken.

My native country, Germany, provides a notable example of the complexities involved in transitioning to “renewable” energy. The country has invested heavily in wind and solar power, leading to the highest electricity costs among larger nations. Germany’s installed wind and solar capacity is now twice the total peak power demand. This variable “renewable” wind and solar power capacity now produces about a third of the country’s electricity and contributes about 6% to Germany’s primary energy supply (Figure 2).

Sources: Schernikau based on Fraunhofer, Agora, AG Energiebilanzen. See also www.unpopular-truth.com/graphs.
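As a quick plausibility check on those three Germany figures, the sketch below derives the implied combined capacity factor. Peak demand and total annual generation are my assumed round numbers, not figures from the post; the other inputs come straight from the text.

```python
# Consistency check on the Germany figures above. peak_demand_gw and
# annual_generation_twh are assumed round numbers, not from the post.
peak_demand_gw = 80.0                    # assumed German peak power demand
installed_ws_gw = 2.0 * peak_demand_gw   # "twice the total peak power demand"
annual_generation_twh = 500.0            # assumed total German generation
ws_share = 1.0 / 3.0                     # "about a third of the country's electricity"

ws_output_twh = annual_generation_twh * ws_share
potential_twh = installed_ws_gw * 8760.0 / 1000.0  # GW x hours/year -> TWh
print(f"Implied wind+solar capacity factor: {ws_output_twh / potential_twh:.0%}")
```

Under these assumptions the implied combined capacity factor is only about 12%, which is why capacity must be so heavily overbuilt relative to demand.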

3. Global Economic Implications

Higher energy costs, obviously and undisputedly, hurt less affluent people and stifle the development of poorer nations (Figure 3). Thus, a move to more expensive wind and solar energy has “human externalities”. The less fortunate will be “starved of” energy they cannot afford, leading to a literal reduction in life expectancy.

Source: Eschenbach 2017; Figure 38 in Book “The Unpopular Truth… about Electricity and the Future of Energy”

CO₂ pricing typically focuses only on emissions during operation,
neglecting significant environmental and economic costs
incurred during other stages or by the entire system.

For instance, the production of solar panels involves substantial energy and raw material inputs. Today there is not one single solar panel that is produced without coal. Similarly, the manufacturing and transportation processes of wind turbines and electric vehicles are energy-intensive and environmentally impactful. These stages are rarely accounted for in CO₂ pricing schemes, leading to a distorted view of their true environmental footprint. Also not accounted for are:

a) the required overbuild,
b) short and long duration energy storage,
c) backup facilities, or
d) larger network integration and transmission infrastructure.

Source: Schernikau, adapted from Figure 39 in Book “The Unpopular Truth… about Electricity and the Future of Energy“

Figure 4 illustrates how virtually all CO₂ pricing or taxation happens only at the stage of “operation” or combustion. How else could a “Net-Zero” label be assigned to a solar panel produced from coal and minerals extracted in Africa with diesel-run equipment, transported to China on a vessel powered by fuel-oil, and processed with heat and electricity from coal- or gas-fired power partially using forced labour? All this energy-intensive activity and not a single kilogram of CO₂ is taxed (see my recent article on this subject here). The same applies to wind turbines, hydro power, biofuel, or electric vehicles.

It turns out that a CO₂ tax is basically just a means to redistribute wealth, with the collecting agency (government) deciding where the funds go. Yes, a CO₂ tax does incentivize industry to reduce CO₂ emissions at their taxed operations only, but this comes at a cost to economies, the environment, and often people. . . Any economist will confirm that pricing one externality but not others leads to economic distortions and, many would say, worse environmental impacts.

4. Alternative Approaches

Distortion, in this case, is just another word for unintended consequences to the environment, our economies, and the people. Pricing CO₂ only during combustion but failing to price methane, raw material and recycling, inefficiency, or embodied energy, or energy shortages, or land requirement, or greening from CO₂… will cause undesirable outcomes. The world will be worse off economically and environmentally.

Protest if you must, but let me offer a simple example. The leaders of the Western world seem to have united around abandoning coal immediately, because it is the highest CO₂ emitter during combustion (UN 2019). Instead, demanding reliable and affordable energy, Bangladesh, Pakistan, Germany, and so many more nations have embraced liquefied natural gas (LNG) as a “bridge” fuel to replace coal. This “switch” is taking place despite questions about LNG’s impact on the environment, including the “climate”. This policy, supported by almost all large consultancies, indirectly caused blackouts affecting over 150 million people in Bangladesh in October 2022 (Reuters and Bloomberg).

So, the world is embarking on an expensive venture
to replace as much coal as possible with
more expensive liquefied natural gas (LNG).

On top of that, wind and solar are given preference. For example, the IEA recently confirmed that 2024 marks the first year in which investments in solar outstrip the combined investments in all other power generation technologies. As a result, energy costs go up, dependencies increase, lights go off, and, as per the UN’s IPCC, the “climate gets worse.”

Now imagine what would happen if we truly took into account all environmental and human impacts, both negative and positive, along the entire value chain of energy production, transportation, processing, generation, consumption, and disposal… we would all be surprised! You would look at fossil fuels, and certainly nuclear, through different eyes. Instead we should simply incentivize resource and energy efficiency, which would truly make a positive difference!

From Schernikau et al. 2022.

5. Conclusion

No matter what your view on climate change is, pricing CO₂ is harmful… why?
Answer: … because pricing one externality but not others leads to economic and environmental distortions…causing human suffering.

That is why, even considering the entire value chain, I do not support any CO₂ pricing. That is why I fight for environmental and economic justice so we can, by avoiding energy starvation and resulting poverty, make a truly positive difference not only for ourselves but also for future generations to come. . . We need INvestment in, not DIvestment from, 80% of our energy supply to rationalize our energy systems and to allow people and the planet to flourish.

I strongly support increasing adaptation efforts, which have already been successful in drastically reducing the death rate and GDP-adjusted financial damage from natural disasters during the past 100 years (OurWorldInData, Pielke 2022, Economist).

Experimental Proof Nil Warming from GHGs

Thomas Allmendinger is a Swiss physicist educated at the ETH Zurich whose practical experience is in the fields of radiology and elementary particle physics. His complete biography is here.

His independent research and experimental analyses of greenhouse gas (GHG) theory over the last decade led to several published studies, including the latest summation, The Real Origin of Climate Change and the Feasibilities of Its Mitigation (2023), in the journal Atmospheric and Climate Sciences. The paper is a thorough and detailed discussion, of which I provide here a synopsis of his methods, findings and conclusions. Excerpts are in italics with my bolds and added images.

Abstract

The present treatise represents a synopsis of six important previous contributions of the author, concerning atmospheric physics and climate change. Since this issue is influenced by politics like no other, and since the greenhouse doctrine with CO2 as the culprit in climate change is predominant, the respective theory has to be outlined, revealing its flaws and inconsistencies.

But beyond that, the author’s own contributions are brought into focus and discussed in depth. The most eminent one concerns the discovery of the absorption of thermal radiation by gases, leading to warming-up, and implying a thermal radiation of gases which depends on their pressure. This delivers the final evidence that trace gases such as CO2 don’t have any influence on the behaviour of the atmosphere, and thus on climate.

But the most useful contribution concerns the method which enables determining the solar absorption coefficient βs of coloured opaque plates. It delivers the foundations for modifying materials with respect to their capability of climate mitigation. Thereby, the main influence is due to the colouring, in particular of roofs, which should be painted, preferably light-brown (not white, for aesthetic reasons).

It must be clear that such a drive for brightening up the world would be the only chance of mitigating the climate, whereas the greenhouse doctrine, related to CO2, has to be abandoned. However, a global climate model with forecasts cannot be aspired to, since the problem is too complex and since several climate zones exist.

Background

The alleged proof for the correctness of this theory was delivered 25 years later by an article in the Scientific American of the year 1982 [4]. Therein, the measurements of C.D. Keeling were reported which had been made at two remote locations, namely at the South Pole and in Hawaii, and according to which a continuous rise of the atmospheric CO2-concentration from 316 to 336 ppm had been detected between the years 1958 and 1978 (cf. Figure 1), suggesting coherence between the CO2 concentration and the average global temperature.

But apart from the fact that these CO2-concentrations are quite minor (400 ppm = 0.04%), and that a constant proportion between the atmospheric CO2-concentration and the average global temperature could not be asserted over a longer period, it should be borne in mind that this conclusion rested on analogy, not causality, since solely a temporal coincidence existed. Rather, other influences could have been at work which happened simultaneously, in particular the increasing urbanisation, influencing the structure and the coloration of large parts of the Earth’s surface.

However, this contingency was, and still is, categorically excluded. Only two possibilities are considered as explanations of climate change: either the anthropogenic influence due to CO2-production, or a natural one which cannot be influenced. A third influence, the one suggested here, namely that of colours, is a priori excluded, even though nobody denies the influence of colouring on the surface temperature of the Earth and the existence of urban heat islands, and although an increase of winds and storms cannot be explained by the greenhouse theory.

However, already well in advance, institutions were founded which aimed at mitigating climate change through political measures. Thereby, climate change was equated with industrial CO2 production, although physical evidence for such a relation was not given. It was just a matter of belief. In this regard, in 1992 the UNFCCC (United Nations Framework Convention on Climate Change) was founded, supported by the IPCC (Intergovernmental Panel on Climate Change). Thereafter, side by side with the UN, numerous so-called COPs (Conferences of the Parties) were held: the first one in 1995 in Berlin, the most popular one in 1997 in Kyoto, and the most important one in 2015 in Paris, leading to a climate convention which was signed by representatives of 195 nations. Thereby, numerous documents were compiled, altogether more than 40,000. But actually these documents didn’t fulfil the standards of scientific publications since they were not peer reviewed.

Subsequently, intensive research activities emerged, accompanied by a flood of publications, and culminating in several textbooks. Several climate models were presented with different scenarios and diverging long-term forecasts. Thereby, the fact was disregarded that indeed no global climate exists but solely a plurality of climates, or rather of micro-climates and at best of climate zones, and that the Latin word “clima” (as well as the English word “clime”) means “region”. Moreover, an average global temperature is not really defined, and thus not measurable, because the temperature differences are immense, for instance with regard to geographic latitude, altitude, the distinct conditions over sea and over land, and not least between the seasons and between day and night. Moreover, the term “climate” implicates rain and snow as well as winds and storms which, in the long term, are not foreseeable. In particular, it should be realized that atmospheric processes are energetically determined, to which temperature contributes only a part.

2. The Historical Inducement for the Greenhouse Theory and Its Flaws

The scientific literature about the greenhouse theory is so extensive that it is difficult to find a clearly outlined and consistent description. Nevertheless, the publications of James E. Hansen [5] and of V. Ramanathan et al. [6] may be considered authoritative. Moreover, the textbooks [7] [8] and [9] are worth mentioning. Therein it is assumed that the Earth’s surface, which is heated up by solar irradiation, emits thermal radiation into the atmosphere, warming it up due to heat absorption by “greenhouse gases” such as CO2 and CH4. Thereby, counter-radiation occurs which induces a so-called radiative transfer. This aspect gave rise to numerous theories (e.g. [10] [11] [12]). But the co-existence of theories is in contrast to the scientific principle that for each phenomenon solely one explanation or theory is admissible.

Already simple considerations may lead one to question this theory. For instance: supposing the present CO2-concentration of approx. 400 ppm (parts per million) = 0.04%, one should wonder how the temperature of the atmosphere can depend on such an extremely low gas amount, and why this component should be the predominant or even the sole cause of the atmospheric temperature. This would actually mean that the temperature would be situated near the absolute zero of −273˚C if the air contained no CO2 or other greenhouse gases.

Indeed, no special physical knowledge is needed in order to realize that this theory cannot be correct. However, the fact that it has settled in the public mind, becoming an important political issue, requires a more detailed investigation of the measuring methods and their results which delivered the foundations of this theory, and of why misinterpretations arose. To that end, the two following points have to be particularly considered: The first point concerns the photometrical measurements on gases in the electromagnetic range of thermal radiation which Tyndall initially carried out in the 1860s [13], and which were expanded to the IR-measurements evaluated by Plass some ninety years later [14]. The second point concerns the application of the Stefan/Boltzmann law to the Earth-atmosphere system, first made by Arrhenius in 1896 [2], and more or less adopted by modern atmospheric physics. Both approaches are deficient and would call the greenhouse theory into question without requiring the author’s own approaches.

2.1 The Photometric and IR-Measurement Methods for CO2

By varying the wavelength and measuring the respective absorption, the spectrum of a substance can be evaluated. This IR-spectroscopic method is widely used to characterize organic chemical substances and chemical bonds, usually in solution. But even there the method is not suited for quantitative measurements, i.e. the absorption of the IR-active substance is not proportional to its concentration as the Beer-Lambert law predicts. This is probably even less the case in the gaseous phase and, all the more, at the high pressures which were applied in order to imitate the large distances in the atmosphere, in the range of several (up to 10) kilometres. Thereby it is disregarded that the pressure of the atmosphere depends on the altitude above sea level, which prohibits the assumption of a linear progression.

Moreover, it is disregarded that in IR-spectrographs the effective radiation intensity is not known, and that the atmosphere is a gas mixture in which CO2 is present only to a small extent, whereas pure CO2 was used for the spectroscopic measurements. Nevertheless, in the textbooks for atmospheric physics the Beer-Lambert law is frequently mentioned, however without delivering concrete numerical results about the absorbed radiation.
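For reference, the Beer-Lambert law the text invokes, in its standard spectroscopy form (textbook physics, not the paper's own equation):

```latex
% Beer-Lambert law: transmitted intensity I after a path of length l
% through an absorber at concentration c, with molar absorption
% coefficient epsilon.
I = I_0 \cdot 10^{-\varepsilon c l}
% The author's point: for gases, the measured absorption does not follow
% this proportionality in c, especially at high pressures and long paths.
```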

In both cases solely the degree of absorption of the radiation was determined, i.e. the decrease of the radiation intensity as it passed through a gas, but never the gas’s heating-up, that is, its temperature increase. Instead, it was assumed that a gas is necessarily warmed up when it absorbs thermal radiation. According to this assumption, pure air, or rather a 4:1 mixture of nitrogen and oxygen, is expected not to be warmed up when it is thermally irradiated, since it is IR-spectroscopically inactive, in contrast to pure CO2.

However, no physical formula exists which would allow calculating such an effect, and no respective empirical evidence has been given so far. Rather, the measurements recently performed by the author delivered converse, surprising results.

2.2. The Impact of Solar Radiation onto the Earth Surface and Its Reflexion

Besides, a further error is implicated in the usual greenhouse theory. It results from the fact that the atmosphere is only partly warmed up by direct solar radiation. In addition, it is warmed up indirectly, namely via the Earth’s surface, which is warmed up by solar irradiation and which transmits the absorbed heat to the atmosphere either by thermal conduction or by thermal radiation. Moreover, air convection contributes a considerable part. This process is called Anthropogenic Heat Flux (AHF). It has recently been discussed by Lindgren [16]. However, herewith a more fundamental view is outlined.

The thermal radiation corresponds to the radiative emission of a so-called “black body”. Such a body is defined as a body which entirely absorbs electromagnetic radiation in the range from IR to UV light. Likewise, it emits electromagnetic radiation all the more as its temperature grows. Its radiative behaviour is formulated by the law of Stefan and Boltzmann. . . According to this law, the radiation wattage Φ of a black body is proportional to the fourth power of its absolute temperature. Usually, this wattage is related to the area, exhibiting the dimension W/m2.

This formula does not allow making a statement about the wavelength or the frequency of the emitted light. This is only possible by means of Max Planck’s formula, which was published in 1900. According to it, the frequencies of the emitted light become higher as the temperature rises. At low temperatures, only heat is emitted, i.e. IR-radiation. At higher temperatures the body begins to glow: first in red, and later in white, a mixture of different colours. Finally, UV-radiation emerges. The emission spectrum of the sun is in quite good accordance with Planck’s emission spectrum for approx. 6000 K.
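For reference, the Stefan/Boltzmann law just described, in its standard textbook form with a worked example (not the paper's own equation):

```latex
% Stefan-Boltzmann law: area-specific radiant power of a black body.
\Phi = \sigma T^4, \qquad \sigma = 5.67 \times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}
% Worked examples: T = 6000 K (solar photosphere) gives Phi ~ 7.3e7 W/m^2;
% T = 288 K (mean Earth surface) gives Phi ~ 390 W/m^2.
```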

Black CO2 absorption lines are not to scale.

This model can be applied to the Earth’s surface, considering it as a coloured opaque body: On one side, with respect to its thermal emission, it behaves like a black body fulfilling the Stefan/Boltzmann law. On the other side, it absorbs only a part βs of the incident solar light, converting it into heat, whereas the complementary part is reflected. However, the intensity of the incident solar light on the Earth’s surface, Φsurface, is not identical to its extra-terrestrial intensity beyond the atmosphere, but depends on the altitude above sea level, since the atmosphere absorbs a part of the sunlight. Remarkably, the atmosphere behaves like a black body, too, but solely with respect to emission: on one side, it radiates inwards to the Earth’s surface, and on the other side, it radiates outwards towards the rest of the atmosphere.

However, this method implies three considerable snags:
•  Firstly, TEarth means the constant limiting temperature of the Earth’s surface which would be attained if the sun shone constantly onto the same parcel and with the same intensity. But this is never the case, except for thin plates which are thermally insulated at the bottom and at the sides, since the position of the sun changes permanently.

•  Secondly, this formula does not allow making a statement about the rate of the warming-up process, which also depends on the heat capacity of the involved plate. This is solely possible using the author’s approach (see Chapter 3). Nevertheless, it is often attempted (e.g. in [35]), not least within radiative transfer approaches.

•  Thirdly, it is in principle impossible to determine the absolute values of the solar reflection coefficient αs with an albedometer or a similar apparatus, because the intensity of the incident solar light is independent of the distance to the surface, whereas the intensity of the reflected light depends on it. Thus, the values so obtained depend on the distance from the Earth’s surface at which the apparatus is positioned. So they are not unambiguous but only relative.

In the modern approach of Hansen et al. [5] the Earth is treated as a single coherent black body, disregarding its segmentation into a solid and a gaseous part, and thus disregarding the contact area between the Earth’s surface and the atmosphere where the reflexion of the sunlight takes place. As a consequence, in Equation (4b) the expression with Tair disappears, whereas a total Earth temperature appears which is neither definable nor determinable. This approach has been widely adopted in the textbooks, even though it is wrong (see also [15]).

Altogether, the fact was neglected that the proportionality of the radiation intensity to the fourth power of the absolute temperature is solely valid if a constant equilibrium is attained. In contrast, the subsequently described method enables the direct detection of the colour-dependent solar absorption coefficient βs = 1 − αs using well-defined plates. Furthermore, the time/temperature-courses are mathematically modelled up to the limiting temperatures. Finally, relative field measurements are possible based on these results.

3. The Measurement of Solar Absorption-Coefficients with Coloured Plates

In the lab-like method described here and published in [20], not the reflected but the absorbed solar radiation was determined, namely by measuring the temperature courses of coloured quadratic plates (10 × 10 × 2 cm) when sunlight of known intensity came vertically onto these plates. The temperatures of the plates were determined by mercury thermometers, while the intensity of the sunlight was measured by an electronic “Solarmeter” (KIMO SL 100). The plates were embedded in Styrofoam and covered with a thin transparent foil acting as an outer window in order to minimize erratic cooling by atmospheric turbulence (Figure 5). Their heat capacities were taken from literature values. The colours as well as the plate material were varied. Aluminium was used as a reference material, being favourable due to its high heat capacity, which entails a low heating rate and a homogeneous heat distribution. For comparison, additional measurements were made with wooden plates, bricks and natural stones. To enable a permanently optimal orientation towards the sun, six plate-modules were positioned on an adjustable panel (Figure 6).

The evaluation of the curves of Figure 7 yielded the colour specific solar absorption-coefficients βs rendered in Figure 9. They were independent of the plate material. Remarkably, the value for green was relatively high.

Figure 7. Warming-up of aluminium plates at 1040 W/m2 [20].

If the sunlight irradiation, and thus the warming-up process, were continued, constant limiting temperatures would finally be attained. However, when 20 mm thick aluminium plates are used, the time needed would be too long, exceeding the constantly available sunshine period during a day. Instead, separate cooling-down experiments were made, allowing a mathematical modelling of the whole process, including the determination of the limiting temperatures.
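The modelling described here is consistent with a standard lumped-capacitance energy balance. The following is my reconstruction under that assumption, not necessarily the paper's exact formulation:

```latex
% Lumped-capacitance sketch (my reconstruction): a plate with areal heat
% capacity C absorbs the fraction beta_s of the solar flux Phi_s and loses
% heat to the surroundings with transfer coefficient B.
C \,\frac{dT}{dt} = \beta_s \Phi_s - B \,(T - T_{amb})
% Setting dT/dt = 0 gives the limiting temperature:
T_{lim} = T_{amb} + \frac{\beta_s \Phi_s}{B}
% With the paper's B ~ 9 W/(m^2 K), Phi_s = 1040 W/m^2 and beta_s ~ 0.9,
% an insulated black plate would level off roughly 100 K above ambient.
```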

Figure 10. Cooling-down of different materials (in brackets: ambient temperature) [20]. al = aluminium 20 mm; st = stone 20.5 mm; br = brick 14.5 mm; wo = wood 17.5 mm.

These limiting temperature values are in good accordance with the empirical values reported in [24] and with the Stefan/Boltzmann values. As is obvious from the respective diagrams in Figure 11 and Figure 12, the limiting temperatures are independent of the plate materials, whereas the heating rates strongly depend on them. In principle, it is also possible to model combined heating-up and cooling-down processes [20]. However, this presumes constant environmental conditions which normally do not exist.

4. Thermal Gas Absorption Measurements

If the warming-up behaviour of gases is to be determined by temperature measurements, interference by the walls of the gas vessel must be considered, since they exhibit a significantly higher heat capacity than the gas does, which implies a slower warming-up rate. Since solid materials absorb thermal radiation more strongly than gases do, the risk exists that the walls of the vessel are directly warmed up by the radiation, and that they subsequently transfer the heat to the gas. And finally, even the thin glass walls of the thermometers may disturb the measurements by absorbing thermal radiation.

For these reasons, quadratic tubes with a relatively large profile (20 cm) were used, which consisted of 3 cm thick plates of Styrofoam and were covered at the ends by thin plastic foils. In order to measure the temperature course along the tube, mercury thermometers were mounted at three positions (beneath, in the middle, and atop), their tips covered with aluminium foils. The test gases were supplied from steel cylinders equipped with reducing valves. They were introduced through a connector over approx. one hour, because the tube was not gastight and not sturdy enough for evacuation. The filling process was monitored by means of a hygrometer, since the air which had to be replaced was slightly humid. Afterwards, the tube was optimized by attaching adhesive foils and thin aluminium foils (see Figure 13). The equipment and the results are reported in [21].

Figure 13. Solar-tube, adjustable to the sun [21].

The initial measurements were made outdoors with twin tubes in the presence of solar light. One tube was filled with air, and the other one with carbon dioxide. Thereby, the temperature increased within a few minutes by approx. ten degrees until constant limiting temperatures were attained, namely simultaneously at all positions. Surprisingly, this was the case in both tubes, thus also in the tube which was filled with ambient air. This result alone delivered proof that the greenhouse theory cannot be true. Moreover, it gave rise to investigating the phenomenon more thoroughly by means of artificial, better defined light.

Figure 14. Heat-radiation tube with IR-spot [21].

Accordingly, the subsequent experiments were made using IR-spots with wattages of 50 W, 100 W and 150 W, of the kind normally employed for terraria (Figure 14). Particularly the IR-spot with 150 W led to a considerably higher temperature increase of the enclosed gas than was the case when sunlight was applied, since its proportion of thermal radiation was higher. Thereby, variable impacts such as the nature of the gas could be evaluated.

Due to the results with IR-spots on different gases (air, carbon dioxide, and the noble gases argon, neon and helium), essential knowledge could be gained. In each case, the irradiated gas warmed up until a stable limiting temperature was attained. Analogously to the case of irradiated coloured solid plates, the temperature increased until the equilibrium state was attained, where the heat absorption rate was equal to the heat emission rate.

Figure 15. Time/temperature-curves for different gases [21] (150 W-spot, medium thermometer-position).

As is evident from the diagram in Figure 15, the initial observation made with sunlight was confirmed: pure carbon dioxide was warmed up almost to the same degree as air (whereby ambient air only scarcely differs from a 4:1 mixture of nitrogen and oxygen). Moreover, noble gases absorb thermal radiation, too. As subsequently outlined, a theoretical explanation could be found for this.

Interpretation of the Results

Comparison of the results obtained with the IR-spots, on the one hand, and those obtained with solar radiation, on the other hand, corroborated the conclusion that comparatively short-wave IR-radiation was involved (namely between 0.9 and 1.9 μm). However, subsequent measurements with a hotplate (<90˚C), placed at the bottom of the heat-radiation tube ([15], Figure 16), showed that long-wave thermal radiation (which is expected from bodies with lower temperatures, such as the Earth’s surface) also induces a temperature increase in air and in carbon dioxide, cf. Figure 17.
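Wien's displacement law (textbook physics, not from the paper) makes the short-wave/long-wave distinction here concrete:

```latex
% Wien's displacement law: wavelength of peak black-body emission.
\lambda_{max} = \frac{b}{T}, \qquad b \approx 2898\ \mathrm{\mu m\,K}
% An IR-spot filament at ~2000-3000 K peaks near 1.0-1.4 um, matching the
% 0.9-1.9 um range above; a ~288 K surface (or a <90 C hotplate at ~363 K)
% peaks near 8-10 um, i.e. long-wave thermal radiation.
```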

Thus, the herewith discovered absorption effect in gases proceeds over a relatively wide wavelength range, in contrast to the IR-spectroscopic measurements where only narrow absorption bands appear. This effect is not exceptional, i.e. it occurs in all gases, also in noble gases, and leads to a significant temperature increase, even though it is spectroscopically not detectable. This temperature increase overlays an eventual temperature increase due to the specific IR-absorption, since the intensity ratio of the latter is very small.

This may be explained as follows: In any case, an oscillation of particles, induced by thermal radiation, plays a part. But whereas in the case of the specific IR-absorption the nuclei inside the molecules oscillate along the chemical bond (which must be polar), in the relevant case here the electron shell inside the atoms, or rather the electron orbit, oscillates, implying oscillation energy. Obviously, this oscillation energy can be converted into kinetic translation energy of the entire atoms, which correlates with the gas temperature, and vice versa.

5. The Altitude-Paradox of the Atmospheric Temperature

The statement that it is colder in the mountains than in the lowlands is trivial. Not trivial is the attempt to explain this phenomenon, since the reason is not readily evident. The usual explanation is that rising air cools down as it expands due to the decreasing air pressure. However, this cannot be true in the case of plateaus, far away from hillsides which engender ascending air streams. It appears virtually paradoxical in view of the fact that the intensity of solar irradiation is much greater in the mountains than in the lowlands, in particular with respect to its UV-component. The intensity decrease is due to the scattering and the absorption of sunlight within the atmosphere, not only within the IR-range but also in the whole remaining spectral region. If such scattering, named Rayleigh scattering, didn’t occur, the sky would not be blue but black.

However, the direct absorption of sunlight is not the only factor which determines the temperature of the atmosphere. Its warming-up via the Earth’s surface, which is warmed up by absorbed solar irradiation, is even more important. Thereby, the heat transfer occurs partly by heat conduction and air convection, and partly by thermal radiation. But there is an additional factor which has to be regarded: namely the thermal radiation of the atmosphere itself. It runs on the one hand towards the Earth (as counter-radiation), and on the other hand towards space. Thus the situation becomes quite complicated, all the more so as a formal treatment based on the Stefan/Boltzmann relation would require limiting, equilibrated temperature conditions. In particular, that relation does not reveal any influence of the atmospheric pressure, which obviously plays a considerable part.

In order to study the dependency on the atmospheric pressure, it would be desirable to vary only the pressure while the other terms remain constant. In practice, however, the pressure is varied by changing the altitude of the measuring station above sea level, which implies a variation of the intensity of the sunlight and of the ambient atmospheric temperature, too. The measurements reported here were made at two locations in Switzerland, namely at Glattbrugg (close to Zürich), 430 m above sea level, and at the top of the Furka pass, 2430 m above sea level. Using the barometric height formula, the respective atmospheric pressures were approx. 0.948 and 0.748 bar. At each location, two measurements were made during the same period of time.
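The barometric height formula in its simplest isothermal form (standard meteorology; the paper may have used a refined variant) roughly reproduces the quoted pressures:

```latex
% Isothermal barometric height formula with scale height H = RT/(Mg).
p(h) = p_0 \exp\!\left(-\frac{h}{H}\right), \qquad H \approx 8\ \mathrm{km}
% With p_0 ~ 1 bar: p(430 m) ~ 0.95 bar and p(2430 m) ~ 0.74-0.75 bar,
% close to the 0.948 and 0.748 bar quoted above; the exact values depend
% on the assumed temperature profile.
```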

Figure 18. Comparison of the temperature courses during two measurements [24] (continuous lines: Glattbrugg; dotted lines: Furka).

Figure 18 renders the data of one measurement pair. Obviously, the limiting temperatures were not ideally attained within 90 minutes. Moreover, the evaluation of the data didn’t provide strictly invariant values for A. But this is reasonable in view of the fact that the sunlight intensity was not entirely constant during that period, and that its spectrum depends on the altitude above sea level. Nevertheless, for the atmospheric emission constant A an approximate value of 22 W·m−2·bar−1·K−0.5 could be found.
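The units of A imply an emission relation of the following form. This is my reading of the reported constant, consistent with the dependence on pressure and on the square root of temperature stated in the conclusions below:

```latex
% Atmospheric emission implied by the units of A (my inference, not an
% equation quoted from the paper):
\Phi_{atm} = A \, p \, \sqrt{T}, \qquad A \approx 22\ \mathrm{W\,m^{-2}\,bar^{-1}\,K^{-1/2}}
% Example: p = 0.95 bar and T = 288 K give Phi_atm ~ 355 W/m^2.
```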

These findings indeed confirm that a greenhouse-like effect occurs, since the atmosphere thermally radiates back to the Earth’s surface. But this radiation has nothing to do with trace gases such as CO2. It rather depends on the atmospheric pressure, which diminishes at higher altitudes.

If the oxygen content of the air were considerably reduced, a general reduction of the atmospheric pressure and, as a consequence, of the temperature would follow. This may be an explanation for the occurrence of glacial periods. However, other explanations are possible, in particular a temporary decrease of solar activity.

Overall it can be stated that climate change cannot be explained by ominous greenhouse gases such as CO2, but mainly by artificial alterations of the Earth’s surface, particularly in urban areas through darkening and through enlargement of the surface (so-called roughness). These urban alterations are due not least to the enormous global population growth, but also to the character of modern buildings, which tend to grow ever higher and to employ alternative materials such as concrete and glass. As a consequence, remedial measures should be focussed accordingly, first drawing on the previous work and then applying the method presented here.

7. Conclusions

The author’s work summarized here concerns atmospheric physics with respect to climate change, comprising three specific and interrelated points based on several previous publications: the first is a critical discussion and refutation of the customary greenhouse theory; the second outlines a method for measuring the thermal-radiative behaviour of gases; and the third describes a lab-scale method for characterizing the solar-reflective behaviour of solid opaque bodies, in particular for determining the colour-specific solar absorption coefficients.

As to the first point, three main flaws were revealed:

•  Firstly, the insufficiency of photometric methods for determining the heating-up of gases in the presence of thermal radiation;
•  Secondly, the lack of a causal relationship between the CO2 concentration in the atmosphere and the average global temperature: the simultaneous empirical increase of the two establishes only a correlation, not causation; and
•  Thirdly, the inadmissible application of the Stefan/Boltzmann law to the entire Earth (including the atmosphere) versus Space, instead of to the boundary between the Earth’s surface and the atmosphere.

As to the second point, account has to be taken of the discovery that every gas, even a noble gas, is warmed up when it is thermally irradiated, attaining a limiting temperature at which the absorption of radiation is in equilibrium with the emitted radiation. In particular, pure CO2 behaves similarly to pure air. Applying kinetic gas theory, a dependency of the emission intensity on the pressure, on the square root of the absolute temperature, and on the particle size could be found, and explained theoretically by oscillation of the electron shell.

As to the third point, not only was a lab-scale measuring method for the colour-dependent solar absorption coefficient βs developed, but also a mathematical model of the time/temperature course when coloured opaque plates are irradiated by sunlight. Thereby, the (colour-dependent) warming-up and the (colour-independent) cooling-down are detected separately. Likewise, a limiting temperature occurs at which the intensity of the absorbed solar light equals the intensity of the emitted thermal radiation. In the absence of wind convection, the so-called heat-transfer coefficient B is invariant. Its value was evaluated empirically, amounting to approx. 9 W·m−2·K−1.
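In the wind-free limit, that balance can be linearized with the quoted B to estimate a plate’s limiting temperature. A minimal sketch, where βs, the irradiance, and the ambient temperature are illustrative assumptions rather than values from the paper:

```python
def limiting_plate_temp_c(beta_s: float, irradiance_w_m2: float,
                          t_ambient_c: float, b: float = 9.0) -> float:
    """Limiting plate temperature from the linearized balance
    beta_s * I = B * (T_lim - T_ambient), with B in W/(m^2*K)."""
    return t_ambient_c + beta_s * irradiance_w_m2 / b

# Illustrative: a fairly dark plate under moderate sun at 20 degC ambient.
print(f"{limiting_plate_temp_c(0.7, 600, 20):.1f} degC")   # ~66.7 degC
```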

Finally, the theoretically suggested dependency of the atmospheric thermal-radiation intensity on the atmospheric pressure could be verified empirically by measurements at different altitudes, namely at Glattbrugg (430 m above sea level) and at the top of the Furka pass (2,430 m above sea level), both in Switzerland, delivering a so-called atmospheric emission constant A ≈ 22 W·m−2·bar−1·K−0.5. This explained the altitude-paradox of the atmospheric temperature and delivered the definitive evidence that the atmospheric behaviour, and thus the climate, does not depend on trace gases such as CO2. The atmosphere does indeed re-radiate thermally, producing something similar to a greenhouse effect, but that effect is due solely to the atmospheric pressure.

Therefore, and also considering the results of Seim and Olsen [23], the customary greenhouse doctrine casting CO2 as the culprit in climate change has to be abandoned and replaced by the concept recommended here of improving the albedo by brightening parts of the Earth’s surface, particularly in cities, lest fatal consequences be risked.

Figure 24. Up-winds induced by an urban heat island.

Polar Bears, Dead Coral and Other Climate Fictions

Bjorn Lomborg calls out climate alarmist nonsense in his WSJ article Polar Bears, Dead Coral and Other Climate Fictions.  Excerpts in italics with my bolds and added images.

Activists’ tales of doom never pan out,
but they leave us poorly informed and feed bad policy.

Whatever happened to polar bears? They used to be all climate campaigners could talk about, but now they’re essentially absent from headlines. Over the past 20 years, climate activists have elevated various stories of climate catastrophe, then quietly dropped them without apology when the opposing evidence became overwhelming. The only constant is the scare tactics.

Protesters used to dress up as polar bears. Al Gore’s 2006 film, “An Inconvenient Truth,” depicted a sad cartoon polar bear floating away to its death. The Washington Post warned in 2004 that the species could face extinction, and the World Wildlife Fund’s chief scientist claimed some polar bear populations would be unable to reproduce by 2012.

Then in the 2010s, campaigners stopped talking about them.

After years of misrepresentation, it finally became impossible to ignore the mountain of evidence showing that the global polar-bear population has increased substantially. Whatever negative effect climate change had was swamped by the reduction in hunting of polar bears. The population has risen from around 12,000 in the 1960s to about 26,000.

The same thing has happened with activists’ outcry about Australia’s Great Barrier Reef. For years, they shouted that the reef was being killed off by rising sea temperatures. After a hurricane extensively damaged the reef in 2009, official Australian estimates of the percent of reef covered in coral reached a record low in 2012. The media overflowed with stories about the great reef catastrophe, and scientists predicted the coral cover would be reduced by another half by 2022. The Guardian even published an obituary in 2014.

The percentage of coral cover in the northern and central Great Barrier Reef has increased. (Supplied: Australian Institute of Marine Science)

The latest official statistics show a completely different picture. For the past three years the Great Barrier Reef has had more coral cover than at any point since records began in 1986, with 2024 setting a new record. This good news gets a fraction of the coverage that the panicked predictions did.

More recently, green campaigners were warning that small Pacific islands would drown as sea levels rose. In 2019 United Nations Secretary-General António Guterres flew all the way to Tuvalu, in the South Pacific, for a Time magazine cover shot. Wearing a suit, he stood up to his thighs in the water behind the headline “Our Sinking Planet.” The accompanying article warned the island—and others like it—would be struck “off the map entirely” by rising sea levels.

Hundreds of Pacific Islands are growing, not shrinking. No habitable island got smaller.

About a month ago, the New York Times finally shared what it called “surprising” climate news: Almost all atoll islands are stable or increasing in size. In fact, scientific literature has documented this for more than a decade. While rising sea levels do erode land, additional sand from old coral is washed up on low-lying shores. Extensive studies have long shown this accretion is stronger than climate-caused erosion, meaning the land area of Tuvalu and many other small islands is increasing.

Today, killer heat waves are the new climate horror story. In July President Biden claimed “extreme heat is the No. 1 weather-related killer in the United States.”

He is wrong by a factor of 25. While extreme heat kills nearly 6,000 Americans each year, cold kills 152,000, of which 12,000 die from extreme cold. Even including deaths from moderate heat, the toll comes to less than 10,000. Despite rising temperatures, age-standardized extreme-heat deaths have actually declined in the U.S. by almost 10% a decade and globally by even more, largely because the world is growing more prosperous. That allows more people to afford air-conditioners and other technology that protects them from the heat.
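A quick check of the “factor of 25” from the figures just quoted:

```python
heat_deaths = 6_000     # "nearly 6,000" extreme-heat deaths per year in the US
cold_deaths = 152_000   # cold deaths per year in the US

print(f"Cold kills ~{cold_deaths / heat_deaths:.0f}x more than extreme heat")  # ~25x
```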

My Mind is Made Up, Don’t Confuse Me with the Facts. H/T Bjorn Lomborg, WUWT

The petrified tone of heat-wave coverage twists policy illogically. Whether from heat or cold, the most sensible way to save people from temperature-related deaths would be to ensure access to cheap, reliable electricity. That way, it wouldn’t be only the rich who could afford to keep safe from blistering or frigid weather. Unfortunately …

Activists do the world a massive disservice by refusing to acknowledge
facts that challenge their intensely doom-ridden worldview.

There is ample evidence that man-made emissions cause changes in climate, and climate economics generally finds that the costs of these effects outweigh the benefits. But the net result is nowhere near catastrophic. The costs of all the extreme policies campaigners push for are much worse. All told, politicians across the world are now spending more than $2 trillion annually—far more than the estimated cost from climate change that these policies prevent each year.

Yes, those are Trillions of US$ they are projecting to spend.

Scare tactics leave everyone—especially young people—distressed and despondent. Fear leads to poor policy choices that further frustrate the public. And the ever-changing narrative of disasters erodes public trust.

Telling half-truths while piously pretending to “follow the science” benefits activists with their fundraising, generates clicks for media outlets, and helps climate-concerned politicians rally their bases. But it leaves all of us poorly informed and worse off.

Mr. Lomborg is president of the Copenhagen Consensus, a visiting fellow at Stanford University’s Hoover Institution and author of “Best Things First: The 12 Most Efficient Solutions for the World’s Poorest and our Global SDG Promises.”

See Also:

You Won’t Survive “Sustainability” Agenda 2024

Power Density Physics Trump Energy Politics

A plethora of insane energy policy proposals are touted by clueless politicians, including the apparent Democrat candidate for US President.  So all talking heads need reminding of some basics of immutable energy physics.  This post is in service of restoring understanding of fundamentals that cannot be waved away.

The Key to Energy IQ

This brief video provides a key concept in order to think rationally about calls to change society’s energy platform.  Below is a transcript from the closed captions along with some of the video images and others added.

We know what the future of American energy will look like. Solar panels, drawing limitless energy from the sun. Wind turbines harnessing the bounty of nature to power our homes and businesses.  A nation effortlessly meeting all of its energy needs with minimal impact on the environment. We have the motivation, we have the technology. There’s only one problem: the physics.

The history of America is, in many ways, the history of energy. The steam power that revolutionized travel and the shipping of goods. The coal that fueled the railroads and the industrial revolution. The petroleum that helped birth the age of the automobile. And now, if we only have the will, a new era of renewable energy.

Except … it’s a little more complicated than that. It’s not really a matter of will, at least not primarily. There are powerful scientific and economic constraints on where we get our power from. An energy source has to be reliable; you have to know that the lights will go on when you flip the switch. An energy source needs to be affordable–because when energy is expensive…everything else gets more expensive too. And, if you want something to be society’s dominant energy source, it needs to be scalable, able to provide enough power for a whole nation.

Those are all incredibly important considerations, which is one of the reasons it’s so weird that one of the most important concepts we have for judging them … is a thing that most people have never heard of. Ladies and gentlemen, welcome to the exciting world of…power density.

Look, no one said scientists were gonna be great at branding. Put simply, power density is just how much stuff it takes to get your energy; how much land or other physical resources. And we measure it by how many watts you can get per square meter, or liter, or kilogram – which, if you’re like us…probably means nothing to you.

So let’s put this in tangible terms. Just about the worst energy source America has by the standards of power density is biofuels, things like corn-based ethanol. Biofuels provide less than 3% of America’s energy needs–and yet, because of the amount of corn that has to be grown to produce them, they require more land than every other energy source in the country combined. Lots of resources going in, not much energy coming out–which means they’re never going to be a serious fuel source.
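That land-use claim is easy to make concrete. A sketch of the arithmetic in Python, using rough illustrative figures for corn ethanol (an assumed ~3,800 L per hectare-year at ~21 MJ/L, not numbers from the transcript):

```python
SECONDS_PER_YEAR = 3.156e7

def power_density_w_m2(annual_energy_j: float, area_m2: float) -> float:
    """Average power delivered per unit land area, in W/m^2."""
    return annual_energy_j / (area_m2 * SECONDS_PER_YEAR)

annual_energy_j = 3_800 * 21e6   # joules per hectare-year (assumed yield)
print(f"{power_density_w_m2(annual_energy_j, 10_000):.2f} W/m^2")  # ~0.25 W/m^2
```

A fraction of a watt per square metre is why biofuel acreage dwarfs every other source’s footprint.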

Now, that’s an extreme example, but once you start to see the world in these terms, you start to realize why our choice of energy sources isn’t arbitrary. Coal, for example, is still America’s second largest source of electricity, despite the fact that it’s the dirtiest and most carbon-intensive way to produce it. Why do we still use so much of it? Well, because it’s significantly more affordable…in part because it’s way less resource-intensive.

An energy source like offshore wind, for example, is so dependent on materials like copper and zinc that it would require six times as many mineral resources to produce the same amount of power as coal. And by the way, getting all those minerals out of the ground…itself requires lots and lots of energy.

Now, the good news is that America has actually been cutting way down on its use of coal in recent years, thanks largely to technological breakthroughs that brought us cheap natural gas as a replacement. And because natural gas emits way less carbon than coal, that reduced our carbon emissions from electricity generation by more than 30%.

In fact, the government reports that switching over to natural gas did more than twice as much to cut carbon emissions as renewables did in recent years. Why did natural gas progress so much faster than renewables? It wasn’t an accident.

Energy is a little like money: You’ve gotta spend it to make it. To get usable natural gas, for example, you’ve first gotta drill a well, process and transport the gas, build a power plant, and generate the electricity. But the question is how much energy are you getting back for your investment? With natural gas, you get about 30 times as much power out of the system as you put into creating it.  By contrast, with something like solar power, you only get about 3 1/2 times as much power back.

Replacing the now-closed Indian Point nuclear power plant would require covering all of Albany County, NY with windmills.

Hard to fuel an entire country that way. And everywhere you look, you see similarly eye-popping numbers. To replace the energy produced by just one oil well in the Permian Basin of Texas–and there are thousands of those–you’d need to build 10 windmills, each about 330 feet high. To meet just 10% of the country’s electricity needs, you’d have to build a wind farm the size of the state of New Hampshire. To get the same amount of power produced by one typical nuclear reactor, you’d need over three million solar panels. None of which means, by the way, that we shouldn’t be using renewables as a part of our energy future.

But it does mean that the dream of using only renewables is going to remain a dream,
at least given the constraints of current technology. We simply don’t know how
to do it while still providing the amount of energy that everyday life requires.

No energy source is ever going to painlessly solve all our problems. It’s always a compromise – which is why it’s so important for us to focus on the best outcomes that are achievable, because otherwise, New Hampshire’s gonna look like this.

Addendum from Michael J. Kelly

Energy return on investment (EROI)

The debate over decarbonization has focussed on technical feasibility and economics. There is one emerging measure that comes back to the engineering and the thermodynamics of energy production. The energy return on (energy) investment is a measure of the useful energy produced by a power plant divided by the energy needed to build, operate, maintain, and decommission it. This is a concept that owes its origin to animal ecology: a cheetah must get more energy from consuming its prey than it expends on catching it, otherwise it will die. If the animal is to breed and nurture the next generation, the ratio of energy obtained to energy expended has to be higher still, depending on the details of the energy spent on these other activities. Weißbach et al. have analysed the EROI for a number of forms of energy production, and their principal conclusion is that nuclear, hydro-, gas- and coal-fired power stations have an EROI much greater than wind, solar photovoltaic (PV), concentrated solar power in a desert or cultivated biomass: see Fig. 2.

In human terms, with an EROI of 1, we can mine fuel and look at it—we have no energy left over. To get a society that can feed itself and provide a basic educational system, we need the EROI of our base-load fuel to be in excess of 5; for a society with international travel and high culture, we need an EROI greater than 10. The new renewable energies do not reach this last level once the extra energy costs of overcoming intermittency are added in. In energy terms, the current generation of renewable energy technologies alone will not enable a civilized modern society to continue!
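Kelly’s thresholds translate directly into a toy classifier. A sketch, where the sample EROI values are assumptions in the spirit of the Weißbach et al. comparison, not quoted results:

```python
def classify_eroi(eroi: float) -> str:
    """Apply the societal EROI thresholds quoted above: >5 for a basic
    society, >10 for one with international travel and high culture."""
    if eroi > 10:
        return "supports a high-culture society"
    if eroi > 5:
        return "supports a basic society"
    return "below the societal threshold"

# Illustrative values only:
for source, value in [("gas-fired, unbuffered", 28.0),
                      ("solar PV with storage", 4.0)]:
    print(f"{source}: EROI {value} -> {classify_eroi(value)}")
```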

On Energy Transitions

Postscript

Fantasies of Clever Climate Policies

Chris Kenny writes at The Australian, Facts at a premium in blustery climate debate. Excerpts in italics from text provided by John Ray at his blog, Greenie Watch. My bolds and added images.

Collective Idiocy From Intellectual Vanity

We think we are so clever. The conceit of contemporary humankind is often unbearable.  Yet this modern self-regard has generated a collective idiocy, an inane confusion between feelings and facts, and an inability to distinguish between noble aims and hard reality.

This preference for virtue signalling over practical action can be explained only by intellectual vanity, a smugness that over-estimates humankind’s ability to shape the world it inhabits.

As a result we have a tendency to believe we are masters of the universe, that we can control the climate and regulate natural disasters. Too lazy or spoiled to weigh facts and think things through, we are more susceptible than ever to mass delusion.

We have seen this tendency play out in deeply worrying ways, such as the irrational belief in the communal benefits of Covid vaccination despite the distinct lack of scientific evidence. Too many people just wanted to believe the vaccine had this thing beaten.

Still, there is no area of public debate where rational thought is more readily cast aside than in the climate and energy debate. This is where alarmists demand that people “follow the science” while they deploy rhetoric, scare campaigns and policies that turn reality and science on their heads.

This nonsense is so widespread and amplified by so many authoritative figures that we have become inured to it. Teachers and children break from school to draw attention to what the UN calls a “climate emergency” as the world lives through its most populous and prosperous period in history, when people are shielded from the ill-effects of weather events better than they ever have been previously.

Politicians tell us in the same breath that producing clean energy is the most urgent and important task for the planet and reject nuclear energy, the only reliable form of emissions-free energy. The activists argue that reducing emissions is so imperative it is worth lowering living standards, alienating farmland, scarring forests and destroying industries, but it is not worth the challenge of boiling water to create energy-generating steam by using the tried and tested technology of nuclear fission.

Our acceptance of idiocy, unchecked and unchallenged, struck me in one interview this week given by teal MP Zali Steggall. In many ways it was an unexceptional interview; there are politicians and activists saying this sort of thing every day somewhere, usually unchallenged.

Steggall was preoccupied with Australia’s emissions reduction targets. “If we are going to be aligned to a science-based target and keep temperatures as close to 1.5 degrees as we can, we must have a minimum reduction of 75 per cent by 2035 as an interim target,” she said.

Steggall then patronised her audience by comparing meeting emissions targets to paying down a mortgage. The claim about controlling global temperatures is hard to take seriously, but to be fair it is merely aping the lines of the UN, which argues the increase in global average temperatures can be held to 1.5 degrees with emissions reductions of that size – globally.

We could talk all day about the imprecise nature of these calculations, the contested scientific debate about the role of other natural variabilities in climate, and the presumption that humankind, through policy imposed by a supranational authority, can control global climate as if with a thermostat. The simplistic relaying of this agenda as central to Australian policy decisions was not the worst aspect of Steggall’s presentation.

“The Coalition has no policy, so let’s be really clear, they are taking Australia out of the Paris Agreement if they fail to nominate an improvement with a 2035 target,” Steggall lectured, disingenuously.

This was Steggall promulgating the central lie of the national climate debate: that Australia’s emissions reduction policies can alter the climate. It is a fallacy embraced and advocated by Labor, the Greens and the teals, and one the Coalition is loath to challenge for fear of being tagged into a “climate denialism” argument.

It is arrant nonsense to suggest our policies can have any discernible effect on the climate or “climate risk”. Any politician suggesting so, directly or by implication, is part of a contemporary, fake-news-driven dumbing down of the public square, and injecting an urgency into our policy considerations that is hurting citizens already with high electricity prices, diminished reliability and a damaged economy.

Steggall went on to claim we were feeling the consequences of global warming already. “And for people wondering ‘How does that affect me?’, just look at your insurance premiums, our insurance premiums around Australia are going through the roof,” she extrapolated, claiming insurance costs were keeping people out of home ownership. “This is not a problem for the future,” Steggall stressed, “it is a problem for now.”

It is a problem all right – it is unmitigated garbage masquerading as a policy debate. Taking it to its logical conclusion, Steggall claims if Australia reduced its emissions further we would lower the risk of natural disasters, leading to lower insurance premiums and improved housing affordability – it is surprising that world peace did not get a mention.

Mind you, these activists do like to talk about global warming as a security issue. They will say anything that heightens fears, escalates the problem and supports their push for more radical deindustrialisation.

Our national contribution to global emissions
is now just over 1 per cent and shrinking.

Australia’s annual emissions total less than 400 megatonnes while China’s are rising by more than that total each year and are now at 10,700Mt or about 30 times Australia’s. While our emissions reduce, global emissions are increasing. We could shut down our country, eliminating our emissions completely, and China’s increase would replace ours in less than a year.
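The paragraph’s arithmetic checks out from its own figures:

```python
australia_mt = 400       # Australia's annual emissions, megatonnes (upper bound)
china_mt = 10_700        # China's annual emissions, megatonnes
china_growth_mt = 400    # China's annual increase, "more than [our] total"

# With Australia's true total below 400 Mt, the ratio rises toward ~30.
print(f"China emits ~{china_mt / australia_mt:.0f}x Australia's total")   # ~27x
print(f"Years of Chinese growth to replace Australia entirely: "
      f"{australia_mt / china_growth_mt:.1f}")                            # ~1.0
```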

So, whatever we are doing, it is not changing and cannot change the global climate. Our national chief scientist, Alan Finkel, clearly admitted this point in 2018, even though he was embarrassed by its implications in the political debate. Yet the pretence continues.

And before critics suggest I am arguing for inaction, I am not. But clearly, the logical and sensible baseline for our policy consideration should be a recognition that our national actions cannot change the weather. Therefore we should carefully consider adaptation to measured and verified climate change, while we involve ourselves as a responsible nation in global negotiations and action.

Obviously, we should not be leading that action but acting cautiously to protect our own interests and prosperity.

It is madness for us to undermine our cheap energy advantage to embark on a renewables-plus-storage experiment that no other country has dared to even try, when we know it cannot shift the global climate one iota. It is all pain for no gain.

Yet that is what this nation has done. So my question today is what has happened to our media, academia, political class and wider population so that it allows this debate and policy response to occur in a manner that is so divorced from reality?

Are we so complacent and overindulged that we accept post-rational debate to address our post-material concerns? Even when it is delivering material hardship to so many Australians and jeopardising our long-term economic security?

Should public debate accept absurd baseline propositions such as the idea that our energy transition sacrifice will improve the weather and reduce natural disasters, simply because they are being argued by major political groupings or the UN? Or should we not try to impose a dose of reality and stick to the facts?

This feebleness of our public debate has telling ramifications – there is no way this country could have embarked on the risky, expensive and doomed renewables-plus-storage experiment if policies and prognostications had been subject to proper scrutiny and debate.

Our media is now so polarised that the climate activists of Labor, the Greens and the teals are able to ensure their nonsensical advocacy is never challenged, and the green-left media, led by the publicly funded ABC, leads the charge in spreading misinformation.

Clearly, we are not as clever as we think. Our children need us to wise up.

Nine July Days Break Wind Power Bubble

Parker Gallant reports at his blog  Nine July Days Clearly Demonstrate Industrial Wind Turbines Intermittent Uselessness.  Excerpts in italics with my bolds and added image. H/T John Ray

The chart below uses IESO data for nine (9) July days and clearly demonstrates the vagaries of those IWT (Industrial Wind Turbines) which on their highest generation day operated at 39.7% of their capacity and on their lowest at 2.3%!  As the chart also notes, our natural gas plants were available to ramp up or down to ensure we had a stable supply of energy but rest assured IESO would have been busy either selling or buying power from our neighbours to ensure the system didn’t crash. [Independent Electricity System Operator for Ontario, Canada]

The only good news coming out of the review was that IESO did not curtail any wind generation, as demand was atypically high for Ontario’s summer days, running well above typical winter levels.

Days Gone By:         

Shortly after the McGuinty-led Ontario Liberal Party had directed IESO to contract IWT as a generation source, their Annual Planning Outlook would suggest those IWT would generate an average of 15% of their capacity during the warmer months (summer) and 45% of their capacity during the colder months (winter). For the full year they projected average generation of 30% of capacity, presumably based on average annual Ontario winds!

The contracts for those IWT offered the owners $135/MWh, so over the nine days covered in the chart those 125,275 MWh generated revenue for the owners of $16,912,125, even though they operated at an average of only 11.8% of their capacity. They are paid despite missing the suggested target IESO used because they rank ahead of most of Ontario’s other generation capacity, with the exception of nuclear power, due to the “first-to-the-grid” rights contained in their contracts, at the expense of us ratepayers/taxpayers!
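Both quoted figures reproduce from the post’s own numbers; the 4,900 MW of installed IWT capacity is taken from a later paragraph:

```python
capacity_mw = 4_900          # installed IWT capacity (cited further below)
hours = 9 * 24               # the nine July days
generated_mwh = 125_275
price_per_mwh = 135          # contract price, $/MWh

revenue = generated_mwh * price_per_mwh
capacity_factor = generated_mwh / (capacity_mw * hours)

print(f"Revenue: ${revenue:,}")                    # $16,912,125
print(f"Capacity factor: {capacity_factor:.1%}")   # 11.8%
```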

Should one bother to do the math on the annual costs based on the 15% summer and 45% winter figures IESO previously used, annual generation from those IWT would be about 3.9 TWh in the summer and 11.7 TWh in the winter, with an annual cost of just over $2.1 billion for serving up frequently unneeded generation which is either sold off at a loss or curtailed!

Replacing Natural Gas Plants with BESS:

Anyone who has followed the perceived solution of ridding the electricity grid of fossil fuels such as natural gas will recognize that ENGO [Environmental Non-Governmental Organizations] have convinced politicians that battery energy storage systems are the solution! Well, are they, and how much would Ontario have needed over those nine charted July days? One good example is July 9th and 10th, and combining the energy generated by natural gas from the chart over those two days is the place to start. To replace that generation of 221,989 MWh with BESS units the math is simple, as those BESS units are reputed to store four (4) times their rated capacity. Dividing the MWh generated by Ontario’s natural gas generators over those two days by four means we would need approximately 55,500 MW of BESS to replace what those natural gas plants generated. That 55,500 MW of BESS storage is over 27 times what IESO have already contracted for and would add huge costs to electricity generation in the province, driving up costs for all ratepaying classes. The 2,034 MW of BESS IESO have already contracted are estimated to cost ratepayers $341 million annually, meaning 55,500 MW of BESS on the grid would add over $9 billion annually to our costs, to hopefully avoid blackouts!
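The arithmetic, reproduced; the four-hour discharge assumption follows the post’s “four times rated capacity” description:

```python
gas_mwh = 221_989                 # natural gas output, July 9-10 combined
bess_mw_needed = gas_mwh / 4      # 4 MWh deliverable per MW of BESS

contracted_mw = 2_034
contracted_annual_cost = 341e6    # dollars per year

scale = bess_mw_needed / contracted_mw
annual_cost = scale * contracted_annual_cost

print(f"BESS needed: {bess_mw_needed:,.0f} MW ({scale:.0f}x contracted)")  # ~55,500 MW, ~27x
print(f"Implied annual cost: ${annual_cost / 1e9:.1f} billion")            # ~$9.3 billion
```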

The other interesting question is how those 55,500 MW would recharge to be ready for future high-demand days, perhaps driven by EV recharging or heating and cooling pumps in operation. The wind would have to be blowing strong and the sun would need to be shining but, as we know, both are frequently missing, so blackouts seem to be what those ENGO and our out-of-touch politicians and bureaucrats are really proposing!

Just one simple example as to where we seem to be headed
based on the insane push to reach that “net-zero” emissions target!

IESO Ontario Electrical Energy Output by Source in 2023

Extreme Examples of Missing IWT generation:

What the chart doesn’t contain or highlight is how those 4,900 MW of IWT capacity are undoubtedly consuming more power than they generate on many occasions, and the IESO data for those nine days contained some clear examples, fewer than a dozen of which are highlighted here!

To wit:

  • July 5th at Hour 11 they managed to deliver only 47 MWh!
  • July 7th at Hours 8, 9, and 10 they respectively generated 17 MWh, 3 MWh and 18 MWh! 
  • July 9th at Hour 9 they delivered 52 MWh!
  • July 12th at Hours 8, 9, 10 and 11 they respectively generated 33 MWh, 13 MWh, 13 MWh and 35 MWh. 
  • July 13th at Hours 9 and 10 they managed to generate 19 MWh and 39 MWh respectively! 

Conclusion:

Why politicians and bureaucrats around the world have been gobsmacked by those peddling the reputed concept of IWT generating cheap, reliable electricity is mind-blowing, as the chart, coupled with the facts, clearly shows for just nine days and for Ontario alone!

Much like the first electric car, invented in 1839 by a Scottish inventor named Robert Davidson, the first electricity generated by a wind turbine came from another Scottish inventor, Sir James Blyth, who in 1887 did exactly that. Neither of those old “inventions” garnered much global acceptance until ENGO and the likes of Michael Mann and Greta arrived on the scene pontificating about “global warming” being caused by mankind’s use of fossil fuels!

As recent events have demonstrated, neither EV nor IWT are the panacea to save the world from either “global warming” or “climate change”, even though both have “risen from the dead” due to the “net-zero” push by ENGO.

The time has come for our politicians to wake up and recognize they are supporting more than century-old technology focused on trying to rid the world of CO2 emissions. They fail to see that without CO2 mankind would be set back to a time when we had trouble surviving!

Stop the push and stop using ratepayer and taxpayer dollars for the fiction created by those pushing the “net-zero” initiative. That initiative is actually generating more CO2, such as the 250 tons of concrete used for just one 2 MW IWT installation! Reality Bites!

Wind Energy Risky Business

The short video above summarizes the multiple engineering challenges involved in relying on wind and/or solar power.  Real Engineering produced The Problem with Wind Energy with excellent graphics.  For those who prefer reading, I made a transcript from the closed captions along with some key exhibits.

The Problem with Wind Energy

This is a map of the world’s wind resources. With it we can see why the middle plains of America have by far the highest concentration of wind turbines in the country. More wind means more power.

However, one small island off the mainland of Europe maxes out the average wind speed chart. Ireland is a wind energy paradise. During one powerful storm, wind energy powered the entire country for 3 hours, and it is not uncommon for wind to provide the majority of the country’s power on any single day. This natural resource has the potential to transform Ireland’s future.

But increasing wind energy on an energy grid comes with a lot of logistical problems, which are all the more difficult for a small isolated island power grid. Mismanaged wind turbines can easily destabilize a power grid. From power storage to grid frequency stabilization, wind energy is a difficult resource to build a stable grid upon.

To understand why, we need to take these engineering
marvels apart and see how they work.

Hidden within the turbine nacelle is a wonder of engineering. We cannot generate useful electricity with the low-speed, high-torque rotation of these massive turbine rotors. They rotate about 10 to 20 times a minute. The generator needs a shaft spinning around 1,800 times per minute to work effectively. So a gearbox is needed between the rotor shaft and the generator shaft.

The gearboxes are designed in stages. Planetary gears are directly attached to the blades to convert the extremely high torque into faster rotations. This stage increases rotational speed by four times. Planetary gears are used for high torque conversion because they have more contact points allowing the load to be shared between more gear teeth.

Moving deeper into the gearbox, a second-stage set of helical gears multiplies the rotational speed by six. And the third stage multiplies it again by four to achieve the 1,500 to 1,800 revolutions per minute needed for the generator.
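The three stages multiply out to a 96:1 ratio, which is what lifts the rotor’s 10-20 rpm toward generator speed. A sketch:

```python
stage_ratios = [4, 6, 4]   # planetary stage, then two helical stages, as described

overall = 1
for r in stage_ratios:
    overall *= r           # 4 * 6 * 4 = 96:1

for rotor_rpm in (10, 15, 20):
    print(f"{rotor_rpm:2d} rpm rotor -> {rotor_rpm * overall:,} rpm at the generator")
# 10 -> 960, 15 -> 1,440, 20 -> 1,920: bracketing the 1,500-1,800 rpm target
```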

These heavy 15-tonne gearboxes have been a major source of frustration for power companies. Although they’ve been designed to have a 20-year lifespan, most don’t last more than 7 years without extensive maintenance. This is not a problem exclusive to gearboxes in wind turbines, but changing a gearbox in your car is different from having a team climb up over 50 meters to replace a multi-million-dollar gearbox. Extreme gusts of wind, salty conditions and difficult-to-access offshore turbines increase maintenance costs even more. The maintenance cost of wind turbines can reach almost 20% of the levelized cost of energy.

In the grand scheme of things wind is still incredibly cheap. However, we don’t know the precise mechanisms causing these gearbox failures. We do know that the wear shows up as small cracks that form on the bearings, which are called white etching cracks from the pale material that surrounds the damaged areas. This problem only gets worse as turbines get bigger and more powerful, requiring even more gear stages to convert the incredibly high torque developed by the large-diameter rotors.

One way of avoiding all of these maintenance costs is to skip the gearbox and connect the blades directly to the generator. But a different kind of generator is needed. The output frequency of the generator needs to match the grid frequency. Slower revolutions in the generator must be compensated for with a very large-diameter generator that has many more magnetic poles, so that a single revolution passes through more alternating magnetic fields, which increases the output frequency.

The largest wind turbine ever made, the Haliade-X, uses a direct-drive system. You can see the large-diameter generator positioned directly behind the blades. This rotor disc is 10 m wide with 200 poles and weighs 250 tons. But this comes with its own set of issues. Permanent magnets require neodymium and dysprosium, and China controls 90% of the supply of these rare earth metals. Unfortunately, trade negotiations and embargoes lead to fluctuating material costs that add extra risk and complexity to direct-drive wind turbines. Ireland is testing these new wind turbines in the Galway Wind Park. The blades were so large that the road passing underneath the Lough Atalia rail bridge, which I used to walk home from school every day, had to be lowered to facilitate the transport of the blades from the nearby docks. It takes years to assess the benefit of new energy technologies like this, but as wind turbines get bigger and more expensive, direct-drive systems become more attractive.

The next challenge is getting the electricity created inside these generators to match the grid frequency. When the speed of the wind constantly changes, the frequency of the current created by permanent-magnet generators follows the speed of the shaft. If we wanted the generator to output the US standard 60 Hz, we could design a rotor to rotate 1,800 times per minute with four poles, two north and two south. This results in 60 cycles per second. This has to be exact; mismatched frequencies will lead to chaos on the grid, bringing the whole system down.
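The pole-count relation in the paragraph above is simply f = (poles/2)·(rpm/60). A sketch; the 200-pole, 30 rpm pairing in the last line is an illustrative direct-drive example, not a quoted spec:

```python
def grid_frequency_hz(rpm: float, poles: int) -> float:
    """Synchronous generator output frequency: f = (poles/2) * (rpm/60)."""
    return (poles / 2) * (rpm / 60)

print(grid_frequency_hz(1800, 4))   # 60.0 Hz: the US standard, as described
print(grid_frequency_hz(3000, 2))   # 50.0 Hz: cf. the Irish flywheel below
print(grid_frequency_hz(30, 200))   # 50.0 Hz from a slow many-pole direct drive
```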

Managing grid frequency is a 24/7 job. In the UK, grid operators had to watch a popular TV show themselves so they could bring pumped hydro stations online, because a huge portion of the population would turn on kettles to make tea during the ad breaks. This increased the load on the grid, and without a matching increase in supply, the frequency would have dropped. The grid is very sensitive to these shifts; even a 1 Hz change can bring a lot of destruction.

During the 2021 freeze in Texas, the grid fell incredibly close to 59 Hz. It was teetering on the edge of a full-scale blackout that could have lasted for months. Many people solely blamed idle wind turbines for the crisis, but they were only partly to blame, as the natural gas stations also failed. Meanwhile, the Texas grid refuses to connect to the wider North American grid in order to avoid federal regulation; rather oddly, Texas is thus also an isolated power grid with a large percentage of wind energy.

The problem with wind energy is that it is incapable of raising the grid frequency if it drops. Wind turbines are nonsynchronous, and increasing the percentage of wind energy on the grid requires additional infrastructure to maintain stability. To understand what nonsynchronous means, we need to dive into the engineering of wind turbines once again. The first electric wind turbines connected to the grid were designed to spin the generator shaft at exactly 1,800 RPM. The prevailing winds dictated the size and shape of the blades. The aim was to have the tips of the blades move at around seven times the speed of the prevailing wind. The tips of the blades were designed to stall if the wind speed picked up. This gave them passive control and kept the blades rotating at a constant speed.
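That tip-speed target pins down the rotor geometry. A sketch, where the wind speed and rotor rpm are illustrative assumptions rather than figures from the video:

```python
import math

TIP_SPEED_RATIO = 7.0   # blade tips at ~7x the prevailing wind, as stated above

def blade_radius_m(wind_m_s: float, rotor_rpm: float) -> float:
    """Rotor radius that hits the target tip speed at a given rotor rpm."""
    tip_speed = TIP_SPEED_RATIO * wind_m_s     # m/s
    omega = rotor_rpm * 2 * math.pi / 60       # rad/s
    return tip_speed / omega

# Assumed: a 10 m/s site and a 15 rpm rotor.
print(f"~{blade_radius_m(10, 15):.0f} m blades")   # ~45 m
```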

While this allowed the wind turbines to be connected straight to the grid, the constant rotational speed did induce large forces onto the blades. Gusts of wind would increase torque rapidly which was a recipe for fatigue failure in the drivetrain. So to extract more power, variable speed wind turbines were introduced. Instead of fixed blades that depended on a stall mechanism for control, the blades were attached to the hub with massive bearings that would allow the blades to change their angle of attack. This provided an active method of speed control, but now another problem emerged.

The rotor operated at different speeds and the frequency coming from the generator was variable. A wind turbine like this cannot be connected directly to the grid. Connecting a varying-frequency generator to the grid means the power has to be passed through a two-stage converter. The first stage converts the varying AC to DC using a rectifier; the second stage takes the DC current and converts it back to AC at the correct frequency. This is done with electronic switches that rapidly turn on and off to create the oscillating wave.

We lose some power in this process, but the larger issue for the grid as a whole is that this removes the benefit of the wind turbine’s inertia. Slowing something heavy like a train is difficult because it has a lot of inertia. Power grids have inertia too. Huge rotating steam turbines connected directly to the grid are like these trains; they can’t be slowed down easily. So a grid with lots of large turbines, like nuclear and coal power turbines, can handle a large load suddenly appearing and won’t experience a sudden drop in grid frequency. This helps smooth out sudden increases in demand on the grid and gives grid operators more time to bring on new power sources.

Wind turbines of course have inertia, they are large rotating masses. But those inverters mean their masses aren’t connected directly to the grid, and so their inertia can’t help stabilize the grid. Solar panels suffer from the same problem, but they couldn’t add inertia anyway as they don’t move.

This is an issue for Renewables that can become a critical vulnerability when politicians push to increase the percentage of Renewables onto a grid without considering the impacts it can have on grid stability. Additional infrastructure is needed to manage this problem, especially as older energy sources, like coal power plants, that do provide inertia begin to shut down.

Ireland had a creative solution to this problem. In 2023 the world’s largest flywheel, a 120-ton steel shaft that rotates 3,000 times per minute, was installed at the site of a former coal power plant that already had all the infrastructure needed to connect to the grid. This flywheel takes about 20 minutes to get up to speed using grid power, but it is kept rotating constantly inside a vacuum to minimize power lost to friction. When needed, it can instantly provide power at the exact 50 Hz required by the grid. This flywheel provides the inertia needed to keep the grid stable, but it’s estimated that Ireland will need five more of these flywheels to reach its climate goals with increasing amounts of wind energy.

But they aren’t designed for long-term energy storage, they are purely designed for grid frequency regulation. Ireland’s next problem is more difficult to overcome. It’s an isolated island with few interconnections to other energy grids. Trading energy is one of the best ways to stabilize a grid. Larger grids are just inherently more stable. Ideally Ireland could sell wind energy to France when winds are high and buy nuclear energy when they are low. Instead right now Ireland needs to have redundancy in its grid with enough natural gas power available to ramp up when wind energy is forecasted to drop.

Currently Ireland has two interconnections with Great Britain but none to mainland Europe. That is hopefully about to change with a 700-megawatt interconnection currently planned with France. With Ireland’s average demand at 4,000 megawatts, this interconnection could provide 17.5% of the country’s power needs when wind is low, or sell that wind to France when it is high. This would allow Ireland to remove some of that redundancy from its grid, while making it worthwhile to invest in more wind power as the excess then has somewhere to go.

The final piece of the puzzle is to develop long-term energy storage infrastructure. Ireland now has 1 gigawatt-hour of energy storage, but this isn’t anywhere close to the amount needed. Ireland’s government has plans to develop a hydrogen fuel economy for longer-term storage and energy export. In the national hydrogen plan it sets out a pathway to become Europe’s main producer of green hydrogen, both for home use and for export. With Ireland’s abundance of fresh water, thanks to our absolutely miserable weather, its prime location along world shipping routes, and its role as a hub for the third-largest airline in the world, Ireland is well positioned to develop a hydrogen economy.

These transport methods aren’t easily decarbonized and will need some form of renewably sourced synthetic fuel for which hydrogen will be needed, whether that’s hydrogen itself, ammonia or synthetic hydrocarbons. Synthetic hydrocarbons can be created using hydrogen and carbon dioxide captured from the air. Ireland’s winning combination of cheap renewable energy, abundant fresh water and a strategically advantageous location positions it well for this future renewable energy economy. Ireland plans to begin by generating hydrogen via electrolysis using wind energy that has been shut off due to oversupply, which is basically free energy.

As the market matures, phase two of the plan is to finally begin tapping into Ireland’s vast offshore wind potential exclusively for hydrogen production, with the lofty goal of 39 terawatt-hours of production by 2050 for use in energy storage, fuel for transportation and industrial heating. Ireland is legally bound by EU law to achieve net-zero emissions by 2050, but even without these lofty expectations it’s in Ireland’s best interest to develop these technologies. Ireland has some of the most expensive electricity prices in Europe due to its reliance on fossil fuel imports, which increased in price drastically due to the war in Ukraine. Making this transition won’t be easy and there are many challenges to overcome, but Ireland has the potential not only to become more energy secure but also to develop its economy massively. Wind is a valuable resource by itself, but in combination with its abundance of fresh water it could make Ireland one of the most energy-rich countries in the world.

Comment

That’s a surprisingly upbeat finish boosting Irish prospects to be an energy powerhouse, considering all of the technical, logistical and economic issues highlighted along the way.  Engineers know more than anyone how complexity often results in fragility and unreliability in practice. Me thinks they are going to use up every last bit of Irish luck to pull this off. Of course the saddest part is that the whole transition is unnecessary, since more CO2 and warmth has been a boon for the planet and humankind.

See Also:

Replace Carbon Fuels with Hydrogen? Absurd, Exorbitant and Pointless