Fishy Activists Destroying Hydro Dams

AP Photo/Nicholas K. Geranios

John Stossel brings us up to date on the fishy case for removing hydroelectric dams on the Snake River in Washington state.  His Townhall article is A Dam Good Argument.  Excerpts in italics with my bolds and added images.

Instead of using fossil fuels, we’re told to use “clean” energy: wind, solar or hydropower.  Hydro is the most reliable. Unlike wind and sunlight, it flows steadily.

But now, environmental groups want to destroy dams that create hydro power.

The Klamath River flows by the remaining pieces of the Copco 2 Dam after deconstruction in June 2023. Located on the Oregon/California border. Juliet Grable / JPR

“Breach those dams,” an activist shouts in my new video. “Now is the time, our fish are on the line!”

The activists have targeted four dams on the Snake River in Washington State. They claim the dams are driving salmon to extinction.

Walla Walla District Dams on the Snake & Columbia Rivers

It’s true that dams once killed lots of salmon. Pregnant fish need to swim upriver to have babies, and their babies swim downriver to the ocean.  Suddenly, dams were in the way. Salmon populations dropped sharply.

But that was in the 1970s. Today, most salmon
make it past the dams without trouble.

How?  Fish-protecting innovations like fish ladders and spillways guide most of the salmon away from the turbines that generate electricity.

Lower Granite fish count station & ladder (left, bottom right); Lower Monumental fish ladder (top right)  Source: Fish Passage Thru the Lower Snake & Columbia Rivers

“Between 96% and 98% of the salmon successfully pass each dam,” says Todd Myers, Environmental Director at the Washington Policy Center.  Even federal scientific agencies now say we can leave dams alone and fish will be fine.

But environmental groups don’t raise money by acknowledging good news. “Snake River Salmon Are in Crisis,” reads a headline from Earthjustice.  Gullible media fall for it. The Snake River is the “most endangered in the country!” claimed the evening news anchor.

“That’s simply not true,” Myers explains. “All you have to do is look at the actual population numbers to know that that’s absurd.”  Utterly absurd. In recent years, salmon populations have been higher than they were in the 1980s and ’90s.

The fish passage report for 2023 (here) has many results like this for various species. Conversion refers to completing the Snake River run from Ice Harbor through Lower Granite.

“They make these claims,” Myers says, “because they know people will believe them … they don’t want to believe that their favorite environmental group is dishonest.”

But many are. In 1999, environmental groups bought an ad in the New York Times saying “salmon … will be extinct by 2017.” “Did the environmentalists apologize?” I ask Myers. “No,” he says. “They repeat almost the exact same arguments today, they just changed the dates.”

I invited 10 activist groups that want to destroy dams to come to my studio and defend their claims about salmon extinction. Not one agreed. I understand why. They’ve already convinced the public and gullible politicians.  Idaho’s Republican Congressman Mike Simpson says, “There is no viable path that can allow us to keep the dams in place.”

“We keep doing dumb things,” says Myers. “We put money into places where it doesn’t have an environmental impact, and then we wonder 10, 20, 30 years (later) why we haven’t made any environmental progress.”

Politicians and activists want to tear down Snake River dams
even though they generate tons of electricity.

“Almost the same amount as all of the wind and solar turbines in Washington state,” says Myers. “Imagine if I told the environmental community we need to tear down every wind turbine and every solar panel. They would lose their minds. But that’s essentially what they’re advocating by tearing down Snake River dams.”

I push back: “They say, ‘Just build more wind turbines.’”  “The problem is, several times a year, there’s no wind,” he replies. “You could build 10 times as many wind turbines, but if there’s no wind, there’s no electricity.”

Hydro, on the other hand, “can turn on and off whenever it’s needed. Destroying hydro and replacing it with wind makes absolutely no sense. It will do serious damage to our electrical grid.”

“It’s not their money,” I point out. “Exactly,” he says. “If you want to spend $35 billion on salmon, there’s lots of things we can do that would have a real impact.”  Like what?

“Reduce the population of seals and sea lions,” he says. “The Washington Academy of Sciences says that unless we reduce the populations, we will not recover salmon.” “People used to hunt sea lions,” I note. “Yeah, that’s why the populations are higher today.”

But environmentalists don’t want people to hunt sea lions or seals. Instead, they push for destruction of dams. “Because it’s sexy and dramatic, it sells,” says Myers. “It’s more about feeling good than environmental results.”

PostScript

Of course there is a political dimension to this movement.  Left coast woke progressives are targeting Lower Snake River dams located in Eastern Washington state.  Folks there and in Eastern Oregon would rather be governed by common sense leaders like those in Idaho.

The case against the dams is actually about climatism.  The fish are not at risk, as shown by many scientific reports. But climatists do not include hydro in their definition of “renewable.”  And they promote fear of methane, claiming dam reservoirs increase methane emissions.

So here’s the political solution.  Keep the dams open and the fish running to their spawning grounds.  And to appease climatists ban any transmission of electricity from those dams to Seattle and Western Washington state.  Deal?

Background Post

Left Coast Closes the Dam Lights

Signs of Sun Setting on Renewables

News comes from renewables-trailblazing Australia, including signs there and around the world that wind and solar power are losing their momentum. From the Australian, by way of John Ray, is the article The sun is setting on the renewables ‘superpower’ fantasy of the Australian Left. Excerpts in italics with my bolds and added images.

Renewable energy superpower status is supposedly in Australia’s grasp now the government has given Mike Cannon-Brookes the green light to export solar power to Singapore.

Sky News Business Editor Ross Greenwood says Australia’s largest solar farm to date has been given the “green light” by the Environment Minister Tanya Plibersek.  Plibersek announced environmental approval for the tech billionaire’s eccentric proposal last week, taking a swipe at Peter Dutton’s “expensive nuclear fantasy that may never happen”.

By contrast, the Environment Minister would have us believe Cannon-Brookes’s plan to siliconise the NT Outback is a done deal. All that’s left to do is:

♦  raise $35bn in capital,
♦  install 120 square kilometres of solar panels,
♦  build a modest 788km transmission line to Darwin, and
♦  lay a 4200km high-voltage cable on the seabed, and we’re good to go.

The Sun Cable AAPowerLink project feels like it was stolen from a Heath Robinson cartoon: a convoluted, unnecessarily elaborate, and impractical contraption designed to accomplish a mundane task. It may mark the beginning of the end of the renewable romance, the point at which the transition to wind, solar and hydropower collapses under the weight of its own absurdity.

There is increasing evidence the US has reached the point of peak renewables, as the pool of private investors shrinks and winning community approval becomes harder. Research by the Lawrence Berkeley National Laboratory showed roughly one-third of utility-scale wind and solar applications submitted over the past five years were cancelled, while about half of wind and solar projects experienced significant delays.

The US Department of Energy says the national electricity network needs to grow by 57 per cent by 2035, the equivalent of approximately 21,000 km a year. Last year’s total was around 200 km, down from just over 1,000 in 2022.

Meanwhile, the challenges of grid synchronisation and storage remain unresolved, and the technical problems for offshore wind turbines, in particular, are mounting. Last week, turbine manufacturer GE Vernova announced an investigation into a blade failure in the 3.6GW Dogger Bank project in the North Sea off the coast of the UK. It is the third blade failure this year.

In July, a newly installed blade crumbled at the Vineyard Wind offshore plant, creating debris that washed up on Nantucket Island, Massachusetts. At 107 metres long and weighing 55 tonnes, these are the largest blades deployed commercially. The failure of three in quick succession suggests the quest to increase output by installing ever-larger blades has reached its natural limits.

Yet the imperative of expanding generating capacity is hardening.

The principal driving force is not electric vehicles but the rapid growth of artificial intelligence. AI requires at least 10 times the power of conventional computing programs.

In the US, data centres account for about 2.5 per cent of power and demand could rise to 7.5 per cent by 2030, according to Boston Consulting Group. In Ireland, data processing and storage use 12 per cent of electricity produced, forcing the authorities to limit the number of connections to the grid.

Silicon Valley has long abandoned the notion it can be powered by silicon photovoltaic panels while burying stray emissions in the Amazon forests.

In April, the tech giant Amazon paid the best part of $1bn ($US650m) for a sizeable block of land next to Pennsylvania’s Susquehanna nuclear power station. It will be the site for a data centre powered by up to 480MW of carbon-free electricity delivered reliably around the clock.

Shares in US nuclear power companies such as Constellation Energy, Talen Energy and Vistra have soared by between 80 per cent and 180 per cent in the past year. So-called green energy stocks, on the other hand, are static or falling, while coal is making an unexpected comeback.

In May, the Financial Times reported that the retirement dates for coal-fired power stations are being pushed back as operators become concerned about grid security. Alliant Energy has delayed the conversion of its Wisconsin plant from coal to gas by three years, to 2028. Ohio-based FirstEnergy announced in February that it was scrapping its 2030 target to phase out coal, citing “resource adequacy concerns”.

The effect of AI on electricity demand was largely unanticipated at the beginning of the decade. AI chips will undoubtedly become more efficient, but there is no telling how much further the demand for AI will grow since the technology is in its infancy. Nor can we begin to guess what other power-hungry forms of technology might be developed by 2050.

What we do know, however, is that if Australia’s demand for electricity exceeds 313 TWh a year in 2050, we’re in trouble. That’s the target the Australian Energy Market Operator has set in its updated blueprint for the great electricity transition.

As Chris Bowen points out, that’s going to take a lot of solar panels and wind turbines. The Energy Minister says we need 22,000 new solar panels a day and a new 7MW wind turbine every 18 hours just to meet our 2030 target of a mere 202 TWh. For the record, the speed of the rollout in the first two years of the Labor government is less than a tenth of that.

One of the hallmarks of the anointed is an unwavering conviction
in the integrity of their analysis and
the effectiveness of their proposed solutions.

They feel no need to hedge their bets by factoring in contingency arrangements should their predictions turn out to be wrong. Nothing in AEMO’s Integrated System Plan indicates its experts have given any thought to scaling up electricity production in line with actual demand, which may well be considerably higher than they’ve anticipated.

If they had, they would have to acknowledge that there are limits to the renewable energy frontier determined by energy density, the demand for land and the requirement for firming. The silicification of northern Australia cannot continue forever, nor can we expect to rely on China for most of the hardware and pretend there are no geopolitical consequences.

As for our nuclear-phobic Prime Minister’s dream of turning Australia into the Saudi Arabia of green hydrogen, while simultaneously sitting at the cutting edge of quantum computing, forget it. In 2006, as the shadow minister for the environment, Anthony Albanese gave a speech at the Swansea RSL on avoiding dangerous climate change.

“Why on Earth would we want to take the big health and economic risk of nuclear energy when we have a ready-made power source hovering peacefully in the sky every day?” he asked.

If Albanese doesn’t know the answer to that question 18 years later, he probably never will.

Climate Policies Fail in Fact and in Theory

A recent international analysis of 1,500 climate policies around the world concluded that only 63, about 4%, of them were successful in reducing emissions.  The paper is Climate policies that achieved major emission reductions: Global evidence from two decades, published at Science.org.  Excerpts in italics with my bolds.

Abstract

Meeting the Paris Agreement’s climate targets necessitates better knowledge about which climate policies work in reducing emissions at the necessary scale. We provide a global, systematic ex post evaluation to identify policy combinations that have led to large emission reductions out of 1500 climate policies implemented between 1998 and 2022 across 41 countries from six continents. Our approach integrates a comprehensive climate policy database with a machine learning–based extension of the common difference-in-differences approach. We identified 63 successful policy interventions with total emission reductions between 0.6 billion and 1.8 billion metric tonnes CO2. Our insights on effective but rarely studied policy combinations highlight the important role of price-based instruments in well-designed policy mixes and the policy efforts necessary for closing the emissions gap.
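For readers unfamiliar with the method the abstract mentions, the plain difference-in-differences idea underlying the paper's machine-learning extension can be sketched in a few lines. The numbers below are illustrative, not from the study:

```python
# Difference-in-differences (DiD) sketch: compare the emissions change in a
# country that adopted a policy against the change in a comparable country
# that did not. The control country's change proxies for what would have
# happened anyway; the remainder is attributed to the policy.
treated_before, treated_after = 100.0, 88.0   # Mt CO2, policy country (assumed)
control_before, control_after = 100.0, 96.0   # Mt CO2, no-policy country (assumed)

did_effect = (treated_after - treated_before) - (control_after - control_before)
print(f"Estimated policy effect: {did_effect:+.1f} Mt CO2")  # -8.0 Mt
```

The paper's actual approach detects such breaks automatically across many countries and policy combinations, but the attribution logic is the same.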

Context

(1). Although the [Paris] agreement seeks to limit global average temperature increase to “well below 2°C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5°C,” its success critically hinges on the implementation of effective climate policies at the national level.  However, scenarios from global integrated assessment models suggest that the aggregated mitigation efforts communicated through nationally determined contributions (NDCs) fall short of the required emission reductions.

(2). The United Nations (UN) estimates quantify a median emission gap of 23 billion metric tonnes (Gt) carbon dioxide equivalent (CO2-eq) by 2030.

(3). The persistence of this emissions gap is not only caused by an ambition gap but also a gap in the outcomes that adopted policies achieve in terms of emission reductions.

(4). This raises the fundamental question as to which types of policy measures are successfully causing meaningful emission reductions. Despite more than two decades of experience with thousands of diverse climate policy measures gained around the world, there is consensus in neither science nor policy on this question.

The exhibit above shows the scope and complexity of the analysis.  But the bottom line is that 96% of the effort, and trillions of dollars, were spent to no avail. It is estimated that on the order of 1.2 billion tonnes of CO2 were prevented over the last 20 years, while an additional 23 billion tonnes remain to be erased by 2030.
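The headline arithmetic here can be checked directly from the paper's own numbers:

```python
# 1,500 policies evaluated, 63 judged successful (figures from the paper).
total_policies = 1500
successful = 63

success_rate = successful / total_policies      # 0.042, i.e. ~4%
failure_rate = 1 - success_rate                 # 0.958, i.e. ~96%

# Midpoint of the paper's estimated reductions (0.6-1.8 Gt CO2),
# set against the UN's stated 23 Gt emissions gap for 2030.
reductions_gt = (0.6 + 1.8) / 2                 # 1.2 Gt CO2
gap_gt = 23

print(f"Success rate: {success_rate:.1%}")      # Success rate: 4.2%
print(f"Failure rate: {failure_rate:.1%}")      # Failure rate: 95.8%
print(f"Estimated reductions: {reductions_gt} Gt vs a {gap_gt} Gt gap")
```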

Any enterprise with that performance would be liquidated. 
That is an epic failure in fact. 

And recommending mixing of policies including subsidies and regulations along with pricing goes against economic theory and fails in practice. Ross McKitrick explains the dangers of making climate policies willy-nilly in his Financial Post article Economists’ letter misses the point about the carbon tax revolt.  Excerpts in italics with my bolds and added images.

Yes, the carbon tax works great in a ‘first-best’ world where it’s the
only carbon policy. In the real world, carbon policies are piled high.

An open letter is circulating online among my economist colleagues aiming to promote sound thinking on carbon taxes. It makes some valid points and will probably get waved around in the House of Commons before long. But it’s conspicuously selective in its focus, to the point of ignoring the main problems with Canadian climate policy as a whole.

Electric-vehicle mandates and subsidies are among the mountain of climate policies that have been piled on top of Canada’s carbon tax. PHOTO BY JOSHUA A. BICKEL/THE ASSOCIATED PRESS

There’s a massive pile of boulders blocking the road to efficient policy, including:

    • clean fuel regulations,
    • the oil-and-gas-sector emissions cap,
    • the electricity sector coal phase-out,
    • strict energy efficiency rules for new and existing buildings,
    • new performance mandates for natural gas-fired generation plants,
    • the regulatory blockade against liquefied natural gas export facilities,
    • new motor vehicle fuel economy standards,
    • caps on fertilizer use on farms,
    • provincial ethanol production subsidies,
    • electric vehicle mandates and subsidies,
    • provincial renewable electricity mandates,
    • grid-scale battery storage experiments,
    • the Green Infrastructure Fund,
    • carbon capture and underground storage mandates, 
    • subsidies for electric buses and emergency vehicles in Canadian cities,
    • new aviation and rail sector emission limits,
      and many more.

Not one of these occasioned a letter of protest from Canadian economists.

Beside that mountain of boulders there’s a twig labelled “overstated objections to carbon pricing.” At the sight of it, hundreds of economists have rushed forward to sweep it off the road. What a help!

To my well-meaning colleagues I say: the pile of regulatory boulders
long ago made the economic case for carbon pricing irrelevant.

Layering a carbon tax on top of current and planned command-and-control regulations does not yield an efficient outcome, it just raises the overall cost to consumers. Which is why I can’t get excited about and certainly won’t sign the carbon-pricing letter. That’s not where the heavy lifting is needed.

My colleagues object to exaggerated claims about the cost of carbon taxes. Fair enough. But far worse are exaggerated claims about both the benefits of reducing carbon dioxide emissions and the economic opportunities associated with the so-called “energy transition.” Exaggeration about the benefits of emission reduction is traceable to poor-quality academic research, such as continued use of climate models known to have large, persistent warming biases and of the RCP8.5 emissions scenario, long since shown in the academic literature to be grossly exaggerated.

But a lot of it is simply groundless rhetoric. Climate activists, politicians and journalists have spent years blaming Canadians’ fossil fuel use for every bad weather event that comes along and shutting down rational debate with polemical cudgels such as “climate emergency” declarations. Again, none of this occasioned a cautionary letter from economists.

There’s another big issue on which the letter was silent. Suppose we did clear all the regulatory boulders along with the carbon-pricing-costs-too-much twig. How high should the carbon tax be? A few of the letter’s signatories are former students of mine so I expect they remember the formula for an optimal emissions tax in the presence of an existing tax system. If not, they can take their copy of Economic Analysis of Environmental Policy by Prof. McKitrick off the shelf, blow off the thick layer of dust and look it up. Or they can consult any of the half-dozen or so journal articles published since the 1970s that derive it. But I suspect most of the other signatories have never seen the formula and don’t even know it exists.

To be technical for a moment, the optimal carbon tax rate varies inversely with the marginal cost of the overall tax system. The higher the tax burden — and with our heavy reliance on income taxes our burden is high — the costlier it is at the margin to provide any public good, including emissions reductions. Economists call this a “second-best problem”: inefficiencies in one place, like the tax system, cause inefficiencies in other policy areas, yielding in this case a higher optimal level of emissions and a lower optimal carbon tax rate.
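A common textbook simplification of that result can be written down directly. This is my sketch of the inverse relationship McKitrick describes, not his exact formula, which includes further interaction terms:

```latex
% Second-best optimal emissions tax (simplified sketch):
%   t*  = optimal carbon tax
%   SCC = social cost of carbon (the first-best Pigouvian rate)
%   MCF = marginal cost of public funds (> 1 in a distortionary tax system)
t^{*} \;\approx\; \frac{SCC}{MCF}, \qquad MCF > 1 \;\Longrightarrow\; t^{*} < SCC
```

In words: the more distortionary the existing tax system (the higher the MCF), the lower the optimal carbon tax falls below the social cost of carbon, which is exactly the inverse relationship described above.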

Based on reasonable estimates of the social cost of carbon and the marginal costs of our tax system, our carbon price is already high enough. In fact, it may well be too high. I say this as one of the only Canadian economists who has published on all aspects of the question. Believing in mainstream climate science and economics, as I do, does not oblige you to dismiss public complaints that the carbon tax is too costly.

Which raises my final point: the age of mass academic letter-writing has long since passed. Academia has become too politically one-sided. Universities don’t get to spend years filling their ranks with staff drawn from one side of the political spectrum and then expect to be viewed as neutral arbiters of public policy issues. The more signatories there are on a letter like this, the less impact it will have. People nowadays will make up their own minds, thank you very much, and a well-argued essay by an individual willing to stand alone may even carry more weight.

Online conversations today are about rising living costs, stagnant real wages and deindustrialization. Even if carbon pricing isn’t the main cause of all this, climate policy is playing a growing role and people can be excused for lumping it all together. The public would welcome insight from economists about how to deal with these challenges. A mass letter enthusing about carbon taxes doesn’t provide it.

Postscript:  All the Pain for No Gain is Unnecessary

 


The Short Lives of Wind Turbines

In a recent short video (below) Paul Burgess summarizes why wind farms become unviable long before promoters promised. He explains that after about 15 years wind farms are uneconomic to keep going. Also, the far more reliable older, smaller, under-2 MW turbines have a longer life. All is based on the work of one professor, Gordon Hughes, who did some brilliant work on wind farm costs some three years ago. For those preferring to read, I provide a transcript lightly edited from closed captions, in italics with my bolds, key exhibits and added images.

Paul Burgess Basics 2 The Lifespan of Wind Turbines

This video is on the lifespan of wind turbines. In this video, and quite a few others actually, I’m going to be relying on the work of Professor Gordon Hughes, and a document you should all read is his study, linked in the title in red below. That video was produced three years ago but had very few views, less than a thousand. My job is to bring these stories to the public, and his work is extremely valuable, so this video is based on that.

Wind Power Economics – Rhetoric and Reality

Here we go: the lifespan of turbines. Shock, horror: wind turbines gradually wear out, and they do it faster than you think. As I have explained, the load factor for a wind farm is the percentage of the actual electricity you get out of it in the real world compared to a purely theoretical maximum, the maximum being every second of the year the wind blows perfectly and you get 100%. What percentage of that do we actually get? That’s the load factor.

Typically for onshore wind farms in the UK, Ireland, etc., it’s 26 to 30%, in that sort of range. The bigger ones, the higher ones, may get into the low 30s. So that’s the load factor, but it doesn’t stay the same. It actually deteriorates. These things wear out as they go, and they deteriorate at quite a rate, around about 3% per year. And with load factors there are no excuses: if a turbine has to be stopped for maintenance, that reduces the load factor, because it’s a real-world measurement of what you produce.
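The arithmetic can be sketched in a few lines. The numbers are illustrative only: the 28% starting load factor, the 10 MW farm size, and the assumption that the ~3% decline is relative per year are mine, not Hughes':

```python
# Load factor = actual output / theoretical maximum output.
# Here we assume a hypothetical 10 MW onshore farm starting at a 28% load
# factor and degrading at ~3% (relative) per year, per the transcript.
rated_mw = 10
hours_per_year = 8760
initial_load_factor = 0.28
annual_degradation = 0.03

theoretical_mwh = rated_mw * hours_per_year        # 87,600 MWh per year
for year in (0, 5, 10, 15):
    lf = initial_load_factor * (1 - annual_degradation) ** year
    print(f"Year {year:2d}: load factor {lf:.1%}, "
          f"output {lf * theoretical_mwh:,.0f} MWh")
```

Even at this modest rate, output after 15 years is well below what the farm delivered when new, which feeds directly into the economics discussed below.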

Now Denmark kept really good records of their turbines, and here is a diagram that explains a few things about them. The results are quite remarkable. This graph looks complicated, but it basically shows the failure rate over time for wind turbines, constructed from a large number of wind turbines in Denmark. The vertical axis shows how much of the energy is lost, which affects the load factors. We start off at almost zero, so nothing is lost; we’re getting the expected performance, and that seems to be the case for almost two years. But as you go up that axis, at the very top there’s nothing left at all; there’s no energy output.

Now there are four colors of curves. The higher two are for offshore, showing old-generation and new-generation offshore; the lower two are for onshore, again old generation and new generation. The new generation comprises larger turbines, up to 8 megawatts. They’re much worse than the older generation; they deteriorate much faster, and you can see that from the curves. Reading a curve is quite amazing. Let’s look at the point at which you’ve lost 60% of the energy coming out of the wind farm. For offshore new generation the answer is just 60 months, or 5 years.

So 60% of the expected energy from those modern offshore turbines has been lost within 5 years. Obviously they have to repair them all the time, and therefore there’s a big rising cost to all this. But look quickly at what we get from the modern onshore ones, which is the orange curve here. Let’s check when 20% of the energy is lost, that’s one unit out of five, and that is at about 68 months, or about 5 to 6 years.

You can expect failures, so these things have to be repaired, which puts the costs up. So what are the running costs of these turbines? On this graph the vertical axis shows how many thousands of pounds per megawatt of installed capacity you actually pay out per year, and the bottom scale is the length of time, showing how much those costs rise over the years. As you can see, the lower line is the older generation and the top line the newer generation, such as they are putting into the Isle of Man.

So let’s take the Isle of Man as an example. They’re going to install 20 megawatts’ worth, so let’s look at the running costs; these are in 2018 prices, so the costs have risen since then. Taking the Isle of Man’s modern turbines, we start off at £74,000 a year per megawatt of installed capacity and end at about £100,000 a year per megawatt after 12 years. So we start at £74,000 times 20 megawatts, which is £1.48 million a year, and we end up at a neat £2 million a year in running costs. And that keeps rising.
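As a quick check of that arithmetic, using the figures quoted in the transcript (2018 prices):

```python
# Isle of Man example: 20 MW installed, with per-MW annual running costs
# rising from £74,000 (year 1) to about £100,000 (year 12).
capacity_mw = 20
cost_start_per_mw = 74_000     # £/MW/year at installation
cost_end_per_mw = 100_000      # £/MW/year after ~12 years

start_total = capacity_mw * cost_start_per_mw   # £1,480,000 a year
end_total = capacity_mw * cost_end_per_mw       # £2,000,000 a year

print(f"Year 1 running cost:  £{start_total:,} (~£{start_total/1e6:.2f}m)")
print(f"Year 12 running cost: £{end_total:,} (£{end_total/1e6:.1f}m)")
```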

This basically shows that after about 15 years
it’s no longer worth maintaining the wind Farm.

Offshore wind of course is much more expensive starting off at around about £200,000 a year and ending up at £400,000 a year per megawatt, three to four times the price of onshore.

I am aware that this raises lots of questions, and they will be answered in following videos. Why is it, if it’s about 15 years, that some farms carry on beyond that? And so on. The whole thing seems to me to be a Ponzi scheme, it really does. And that will be explained in following videos.

See Also

Wind Energy Risky Business

Put Climate Insanity Behind Us

Conrad Black writes at National Post Time for the climate insanity to stop.  Excerpts in italics with my bolds and added images.

We have been racing to destroy our standard of living
to avert a crisis that never materialized

We must by now be getting reasonably close to the point where there is a consensus for re-examining the issue of climate change and related subjects. For decades, those of us who had our doubts were effectively shut down by the endless deafening repetition, as if from the massed choir of an operatic catechism school, of the alleged truism: “98 per cent of scientists agree …” (that the world is coming to an end in a few years if we don’t abolish the combustion engine). Decades have gone by in which the polar bears were supposed to become extinct because of the vanishing polar ice cap, the glaciers were supposed to have melted in the rising heat and the impact of melting ice would raise ocean levels to the point that Pacific islands, such as former U.S. vice-president Al Gore’s oratorical dreamworld, the Pacific island state of Tuvalu, would only be accessible to snorkelers. There has been no progress toward any of this. Ocean levels have not risen appreciably, nothing has been submerged and the polar bear population has risen substantially.

A large part of the problem has been the fanaticism of the alarmist forces. This has not been one of those issues where people may equably disagree. There was a spontaneous campaign to denigrate those of us who were opposed to taking drastic and extremely expensive economic steps to reduce carbon emissions on the basis of existing evidence: we could not be tolerated as potentially sensible doubters; we were labelled “deniers,” a reference to Holocaust-deniers who would sweep evidence of horrible atrocities under the rug. For our own corrupt or perverse motives, we were promoting the destruction of the world and unimaginable human misery. There has been climate hysteria like other panics in history, such as those recounted in Charles Mackay’s “Extraordinary Popular Delusions and the Madness of Crowds,” particularly the 1630s tulip mania, in which a single tulip bulb briefly sold for the current equivalent of $25,000.

In western Europe, and particularly in the United States, where the full panic of climate change prevailed, the agrarian and working echelons of society have rebelled against the onerous financial penalties of the war on carbon emissions. There have been movements in some countries to suppress the population of cows because of the impact of their flatulence on the composition of the atmosphere. This has created an alliance of convenience between the environmental extremists and the dietary authoritarians as they take dead aim at the joint targets of carbon emissions and obesity. Germany, which should be the most powerful and exemplary of Europe’s nations, has blundered headlong into the climate crisis by conceding political power to militant Greens. It has shut down its advanced and completely safe nuclear power program, the ultimate efficient fuel, and has flirted with abolishing leisure automobile drives on the weekends.

Claims that tropical storms have become more frequent are rebutted by meticulously recorded statistics. Claims that forest fires are more frequent and extensive have also been shown not to be true. My own analysis, which is based on observations and makes no pretense to scientific research, as I have had occasion to express here before, is that the honourable, if often tiresome, conservation movement, the zealots of Greenpeace and the Sierra Club, were suddenly displaced as organizers and leaders of the environmental movement by the international left, which was routed in the Cold War. Their western sympathizers demonstrated a genius for improvisation that none of us who knew them in the Cold War would have imagined that they possessed, and they took over the environmental bandwagon and converted it into a battering ram against capitalism in the name of saving the planet.

Everyone dislikes pollution and wants the cleanest air and water possible. All conscientious people want the cleanest environment that’s economically feasible. We should also aspire to the highest attainable level of accurate information before we embark on, or go any further with, drastic and hideously expensive methods of replacing fossil fuels. Large-scale disruptions to our ways of life at immense cost to consumers and taxpayers, mainly borne by those who can least easily afford it, are a mistake. We can all excuse zeal in a sincerely embraced cause, but it is time to de-escalate this discussion from its long intemperate nature of hurling thunderbolts back and forth, and instead focus on serious research that will furnish a genuine consensus. I think this was essentially what former prime minister Stephen Harper and former environment minister John Baird were advocating in what they called a ”Canadian solution” to the climate question. Since then, our policy has been fabricated by fanatics, including the prime minister, who do not wish to be confused by the facts. The inconvenient truth is now the truth that inconveniences them.

Western Europe has effectively abandoned its net-zero carbon emission goals; the world is not deteriorating remotely as quickly as Al Gore, King Charles, Tony Blair and the Liberal Party of Canada predicted. Some of the largest polluters — China, India and Russia — do not seem to care about any of this. Canada should lead the world toward a rational consensus, with intensified research aimed at finding an appropriate response to the challenge. What we have had instead is faddishness and public frenzy. Historians will wonder why the West made war on its own standard of living in pursuit of a wild fantasy with no immediate chance of accomplishing anything useful. We have been cheered on by the under-developed world because it seeks reparations from the advanced countries, although some of those countries are among the worst climate offenders. It is insane. Canada should help lead the patient back to sanity.

Postscript:

So to be more constructive, let’s consider what should be proposed by political leaders regarding climate, energy and the environment.  IMO these should be the pillars:

♦  Climate change is real, but not an emergency.

♦  We must use our time to adapt to future climate extremes.

♦  We must transition to a diversified energy platform.

♦  We must safeguard our air and water from industrial pollutants.

A Rational Climate Policy

This is your brain on climate alarm. Just say NO!

Scottish Wind Power from Diesel Generators

John Gideon Hartnett writes at Spectator Australia Another climate myth busted, explaining how the Scottish public was scammed about their virtuous green wind power by the public authority.  Excerpts in italics with my bolds and added images from a post by John Ray at his blog.

What I like to call ‘climate cult’ wind farms expose the myth that wind can replace hydrocarbon fuels for power generation. The following story is typical of the problems associated with using wind turbines to generate electricity in a cold environment.

Apparently, diesel-fuelled generators are being used to power some wind turbines as a way of de-icing them in cold weather, that is, to keep them rotating. Also, it appears that the wind turbines have been drawing electric power directly from the grid instead of supplying it to the grid.

Scotland’s wind turbines have been secretly using fossil fuels.

The revelation is now fueling environmental, health and safety concerns, especially since the diesel generators were running for up to six hours a day.

Scottish Power said the company was forced to hook up 71 windmills to the fossil fuel supply after a fault on its grid. The move was an attempt to keep the turbines warm and working during the cold month of December.

South Scotland Labour MSP Colin Smyth said that regardless of the reasons, using diesel to de-ice faulty turbines is “environmental madness”.  Source: Straight Arrow News

Charging system for Teslas at Davos WEF meeting.

Nevertheless, those pushing these technologies are so blind to the physical realities of the world that they are prepared to ignore failures while pretending to efficiently generate electricity. I say ‘failures’ because wind energy was put out of service the day the Industrial Revolution was fired up (pun intended) with carbon-based fuels from petroleum and coal. And in the case of modern wind turbines, they do not always generate electricity; they sometimes consume it from the grid.

Green energy needs the hydrocarbon-based fuel it claims to replace.

Hydrocarbon-based fuels were provided providentially by the Creator of this planet for our use. That includes coal, which has been demonised in the Western press as some sort of evil. But those who run that line must have forgotten to tell China, because they build two coal-fired power stations every other week. No other source of non-nuclear power is as reliable for baseload generation.

How will wind turbines work in a globally cooling climate as Earth heads into a grand solar minimum and temperatures plummet? This case from Scotland may give us a hint. As cloud cover increases with cooler weather, and more precipitation occurs, how will solar perform? It won’t.

The two worst choices for electricity generation in cold, wet, and stormy environments are solar and wind. Solar is obvious. No sun means no power generation. But you might think wind is a much better choice under those conditions.

However, wind turbine rotors have to be shut down if the wind becomes too strong and/or rapidly changes in strength. They are shut down when too much ice forms or when there is insufficient wind. And now we have learned in Scotland they just turn on the diesel generators when that happens or they draw power directly from the grid.
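The operating envelope described above, no output below a minimum wind speed, a cap at rated power, and a safety shutdown in storms, is conventionally summarized as a power curve. Here is a minimal idealized sketch; the cut-in, rated, and cut-out speeds and the 2 MW rating are illustrative assumptions, not the specification of any real turbine:

```python
def turbine_power_kw(wind_speed_ms, cut_in=3.5, rated_speed=13.0,
                     cut_out=25.0, rated_kw=2000.0):
    """Idealized wind turbine power curve (illustrative values only).

    Below cut-in or above cut-out the rotor produces nothing; between
    cut-in and rated speed, output grows roughly with the cube of wind
    speed; between rated speed and cut-out it is capped at rated power.
    """
    if wind_speed_ms < cut_in or wind_speed_ms > cut_out:
        return 0.0  # too little wind, or shut down for safety
    if wind_speed_ms >= rated_speed:
        return rated_kw
    # cubic ramp between cut-in and rated speed
    frac = (wind_speed_ms**3 - cut_in**3) / (rated_speed**3 - cut_in**3)
    return rated_kw * frac

# A calm day, a decent day, and a storm:
for v in (2.0, 10.0, 30.0):
    print(v, turbine_power_kw(v))
```

Note that at both ends of the curve the turbine delivers exactly zero, which is the point the article is making about calm and stormy weather alike.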

Where are all the real engineers? Were they fired?

As for wind turbines going forward: once their presence in the market has destroyed all the coal and natural gas electricity generators, how are they going to keep the rotors turning and the lights on?

These devices are based on a rotating shaft with a massive bearing that suffers large frictional forces. Only a high-quality, heavy-duty oil can lubricate such a system, and I am sure it needs to be replaced regularly.

Wind turbine gearbox steps up the blade rotor speed (left) to an output of 1,800 rpm driving the electrical generator (right)

Massive amounts of carbon-based oil are needed for the lubrication of all gears and bearings in a wind turbine system, which is mechanical in its nature. In 2019, wind turbine applications were estimated to consume around 80 per cent of the total supply of synthetic lubricants. Synthetic lubricants are manufactured using chemically modified petroleum components rather than whole crude oil. These are used in the wind turbine gearboxes, generator bearings, and open gear systems such as pitch and yaw gears.

Now we also know that icing causes the rotors to stop turning so diesel power has to be used to keep the bearings warm during cold weather. The diesel generator is needed to get the blades turning on start-up to overcome the limiting friction of the bearing or when the speed of the rotor drops too low.

In this case in Scotland, 71 windmills on the farm were supplied with diesel power. Each windmill has its own diesel generator. Just think of that.

What about the manufacturing of these windmills?

The blades are made from tons of fibreglass. Manufacturing fibreglass requires the mining of silica sand, limestone, kaolin clay, dolomite, and other minerals, which requires diesel-driven machines. These minerals are melted in a furnace at high temperatures (around 1,400°C) to produce the glass. Where does that heat come from? Not solar or wind power, that is for sure. The resin in the fibreglass comes from alcohol or petroleum-based manufacturing processes.

The metal structure is made from steel, which requires tons of coking coal (carbon) essential to make pig iron, which in turn is made from iron ore in a blast furnace at temperatures up to 2,200°C. The coal and iron ore are mined from the ground with giant diesel-powered machines and trucks. The steel is made from pig iron and added carbon in another furnace powered by massive electric currents. Carbon is a critical element in steelmaking, as it reacts with iron to form the desired steel alloy. None of this comes from wind and solar power.

Wind turbine power generation is inherently intermittent and unreliable.
It can hardly be called green, as the turbines require enormous
amounts of hydrocarbons in their manufacture and continued operation.

 

Biden Climate Policies the Greatest Financial Risk

Will Hild writes at Real Clear Policy The Biden Administration Proves Itself Wrong on ‘Climate Risk’.  The report shows how the feds’ own numbers prove their net zero policies pose a far greater financial risk than the climate itself. Excerpts in italics with my bolds and added images.

Environmental activists and left-leaning political bodies have long argued that climate risk is a form of financial risk, constantly pressuring blue states and the Biden Administration to make the issue a central part of their agendas. Recently, a group of those states has started suing oil and gas companies directly in their state courts. California, for example, claims that oil companies have colluded for decades to keep clean energy unavailable. Such lawsuits are ludicrous, and my organization, Consumers’ Research, filed an amicus brief at the Supreme Court supporting an effort to stop this litigation abuse that drives up consumer costs.

These harmful suits are driven in part by the idea that climate risks are inherently financial risks. However, evidence from the Biden Administration’s own study on climate risk shows that these risks are grossly exaggerated and immaterial. In an attempt to justify its radical and onerous climate policies, the Administration inadvertently exposed the fraud behind the environmental movement.

Soon after taking office, President Biden issued an executive order asking executive agencies to assess “the climate-related financial risk, including both physical and transition risks, to … the stability of the U.S. financial system.” Agency officials quickly responded to the order. Treasury Secretary Yellen announced that “climate change is an emerging and increasing threat to U.S. financial stability.” The FDIC declared that climate risk endangered the banking system. The SEC issued controversial climate risk disclosure rules, which impose massive regulatory burdens on Americans.

The problem with these new rules is that the Administration lacked
sufficient evidence to show that “climate-related financial risk” existed.

The SEC and Secretary Yellen relied on a Biden Administration report issued in 2021 by the Financial Stability Oversight Council. However, the report itself admitted that there were “gaps” in the evidence needed to support its speculative assertion that climate change would “likely” present shocks to the financial system. 

John H. Cochrane, a respected Stanford professor, slammed the Biden Administration’s “climate-related financial risk” assertions and highlighted that the Administration has not shown any serious threat to the financial system. “Financial regulators may only act if they think financial stability is at risk,” but “there is absolutely nothing in even the most extreme scientific speculations” to support the type of risk that would allow financial regulators to intervene. [See my synopsis Financial Systems Have Little Risk from Climate]

In response to criticism, the Biden Administration came up with an idea to manufacture its own evidence: if the Federal Reserve created scenarios in which banks must simulate extreme “physical” and “transition” climate risks, these custom-designed scenarios could show a large impact on the financial system, just like federal “stress tests” for banks.

To ensure that the “stresses” were sufficiently severe,
the Biden Administration manipulated the scenarios
to ensure as much stress as possible. 

For example, for “physical risk,” major banks had to simulate the effect of a storm-of-two-centuries-sized hurricane smashing into the heavily populated Northeast United States with no insurance coverage available to pay for the damage.  For “transition risk,” the government demanded a simulation in which “stringent climate policies are introduced immediately,” without any chance for banks to prepare for such policies, along with rapidly rising carbon prices. 

Despite these attempts to make the climate risk as extreme as possible, the tests utterly failed to demonstrate any significant effect.  The Administration’s study demonstrated that even under some of the most extreme climate scenarios imaginable, the probability of default on loans only increased by half a percentage point or less.   In contrast, federal bank stress tests involving true financial stresses, such as a severe recession, have resulted in probabilities of default jumping by 20 to 40 times that amount or more, leading to hundreds of billions in losses. 

The climate analyses also revealed the expected costs
of the Biden Administration’s quixotic net zero quest. 

The Biden Administration employed scenarios from the Network of Central Banks and Supervisors for Greening the Financial System (NGFS). Those climate scenarios envision the cost of carbon emissions steadily rising and reaching over $400 per ton by 2050.  Given that the average American emits about 16 tons of carbon a year, the Biden Administration’s hand-picked climate scenarios would cost the average American around $125k between now and 2050 in government mandated carbon fees.
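The back-of-the-envelope arithmetic behind that estimate can be sketched with an assumed carbon-price path. The starting price and the linear ramp below are my illustrative assumptions (the NGFS scenarios use more complex trajectories), so the total is indicative only:

```python
def cumulative_carbon_cost(tons_per_year=16.0, start_price=100.0,
                           end_price=400.0, years=26):
    """Sum annual per-person carbon fees as the CO2 price climbs
    linearly toward $400/ton by 2050 (illustrative assumptions)."""
    total = 0.0
    for i in range(years):
        # linear interpolation from start_price to end_price
        price = start_price + (end_price - start_price) * i / (years - 1)
        total += tons_per_year * price
    return total

print(cumulative_carbon_cost())  # a six-figure per-person total
```

With these assumed inputs the total comes to roughly $100,000, the same order of magnitude as the article’s ~$125k figure; a steeper or earlier price ramp pushes the number higher.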

Thus, the Biden Administration’s own bank stress test proved that climate risk is not a material financial risk, and that the biggest financial risk at issue is that the Administration’s net-zero policies would result in massive financial losses for everyday Americans. The Biden Administration should stop using lies to support its burdensome policies, and blue states should drop their punitive lawsuits against oil and gas companies. Otherwise, the result of both efforts will be to inflict high costs on everyday Americans without any benefit.

Ruinous Folly of CO2 Pricing

Dr. Lars Schernikau is an energy economist explaining why CO2 pricing (also falsely called “carbon pricing”) is a terrible idea fit only for discarding.  His blog article is The Dilemma of Pricing CO2.  Excerpts below in italics with my bolds and added images.

1. Understanding CO₂ pricing
2. Economic and environmental impact
3. Global economic impact
4. Alternative solutions
5. Conclusion
6. Additional comments and notes
7. References

Introduction

As an energy economist I am confronted daily with questions about the “energy transition” away from conventional fuels. As we know, the discussion about the “energy transition” stems from concerns about climatic changes.

The source of climatic changes is a widely discussed issue, with numerous policies and measures proposed to mitigate their impact. One such measure is the current and future pricing of carbon dioxide (CO₂) emissions. The logic is that if human CO₂ emissions are reduced, future global temperatures will be measurably lower, extreme weather events will be reduced, and sea levels will rise less or stop rising altogether.

Although intended to reduce greenhouse gases, this approach has sparked considerable debate. In this blog post I discuss the controversial topic of CO₂ pricing, examining its economic and environmental ramifications.

However, this article is not about the causes of climatic changes, nor is it about the negative or positive effects of a warming planet and higher atmospheric CO₂ concentrations. It is also not about the scientifically undisputed fact that we don’t know how much warming CO₂ causes (a list of recent academic research on CO₂’s climate sensitivity can be found at the end of this blog).

Nor do I unpack the undisputed and IPCC-confirmed fact that each additional ton of CO₂ in the atmosphere has less warming effect than the previous ton, since the climate sensitivity of CO₂ is a logarithmic function, irrespective of our not knowing what that sensitivity is. I also don’t discuss the NASA-satellite-confirmed greening of the world over the past decades, partially driven by higher atmospheric CO₂ concentrations (see sources including Chen et al. 2024).
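The logarithmic relationship referred to above is conventionally summarized by the Myhre et al. (1998) approximation for CO₂ radiative forcing, ΔF ≈ 5.35·ln(C/C₀) W/m². The short sketch below uses that standard formula purely to illustrate diminishing increments; its use here is my illustration, not part of the article:

```python
import math

def co2_forcing_wm2(c_ppm, c0_ppm=280.0):
    """Myhre et al. (1998) approximation: radiative forcing grows with
    the logarithm of CO2 concentration relative to a baseline."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Each extra 10 ppm adds less forcing than the previous 10 ppm:
step_at_300 = co2_forcing_wm2(310.0) - co2_forcing_wm2(300.0)
step_at_400 = co2_forcing_wm2(410.0) - co2_forcing_wm2(400.0)
print(step_at_300, step_at_400)  # the second increment is smaller
```

The same 10 ppm increment thus produces less additional forcing at 400 ppm than at 300 ppm, which is what the logarithmic shape means in practice.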

Instead, this blog post is about the environmental and economic “sense”, or lack thereof, of pricing CO₂ emissions as currently practiced in most OECD countries and increasingly seen in developing nations. It is about the “none-sense” of measuring practically all human activity with a “CO₂ footprint”, often mistakenly called “carbon footprint”, and having nearly every organization set claims for current or future “Net-Zero” (Figure 1).

1. Understanding CO₂ Pricing

CO₂ pricing aims to internalize the external costs of CO₂ emissions, thereby encouraging businesses and individuals to reduce their “carbon footprint”. The concept is straightforward: by assigning a cost to CO₂ emissions, it becomes financially advantageous to emit less CO₂.
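The internalization mechanism just described can be shown with a toy generation-cost comparison. Every number below (fuel costs, emission factors, prices) is a made-up illustration, not market data:

```python
def effective_cost_per_mwh(fuel_cost, tons_co2_per_mwh, co2_price):
    """Total generation cost once a CO2 price internalizes the
    emissions externality (all inputs illustrative)."""
    return fuel_cost + tons_co2_per_mwh * co2_price

# Made-up numbers: "coal" is cheaper on fuel alone, but a rising
# CO2 price flips the comparison against the higher emitter.
for price in (0, 50, 100):
    coal = effective_cost_per_mwh(30.0, 1.0, price)   # ~1 t CO2/MWh assumed
    gas = effective_cost_per_mwh(45.0, 0.4, price)    # ~0.4 t CO2/MWh assumed
    print(price, coal, gas)
```

At a zero carbon price the high-emitting option is cheapest; above some price the ranking reverses, which is exactly the incentive the policy intends, and, as the post goes on to argue, exactly where the distortions enter when only one externality is priced.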

However, this simplistic view overlooks
significant complexities and unintended consequences.

Our entire existence is based on drawing from nature (“renewable” or not), so the “Net-Zero” discussion ignores a fundamental requirement for our survival. I agree that it should be our aim to reduce the environmental footprint as much as possible but only if our lives, health, and wealth don’t deteriorate as a result.

Now, I am sure, some readers and many “activists” may disagree, which I respect but, at a global level, find unrealistic. However, I would assume that most agree that no one’s life ought to be harmed or shortened for the sake of reducing environmental impact. Otherwise, there is little room for a conversation.

BloombergNEF’s “New Energy Outlook” from May 2024 should possibly be called the “CO₂ Outlook”, as there is little to be found in it about energy and its economics; it is rather all about CO₂ emissions and so-called “Net-Zero” (Figure 1), in line with the media, government, and educational focus primarily on carbon dioxide emissions.

2. Economic and Environmental Impacts

One of the primary criticisms of CO₂ pricing is that it addresses only one environmental externality while ignoring others. This narrow focus can lead to economic distortions, as it fails to account for the full spectrum of environmental and social impacts. For instance, while CO₂ pricing might reduce emissions, it can also drive up energy costs, disproportionately affecting lower-income populations and hindering economic development in less developed countries.

It is by now undisputed amongst energy economists that large-scale “Net-Zero” deployment of intermittent and unpredictable wind and solar power generation increases the total or “full” cost of electricity, primarily because of their low energy density, intermittency, inherent net-energy and raw-material inefficiency, mounting integration costs for power grids, and the need for drastic overbuilding of both the generation system and the backup/storage system.

CO₂ pricing can also result in environmental trade-offs. For example, the shift towards “renewable” energy sources like wind and solar, incentivized by CO₂ pricing, has its own set of environmental impacts, including land use, resource extraction, energy footprint, and energy storage challenges.

When BloombergNEF (Figure 1) displays
how clean power and electrification
will directly reduce CO₂ emissions to zero,
it is clearly mistaken.

My native country Germany provides a notable example of the complexities involved in transitioning to “renewable” energy. The country has invested heavily in wind and solar power, leading to the highest electricity costs among larger nations. Germany’s installed wind and solar capacity is now twice the total peak power demand. This variable “renewable” wind and solar power capacity now produces about a third of the country’s electricity and contributes about 6% to Germany’s primary energy supply (Figure 2).

Sources: Schernikau based on Fraunhofer, Agora, AG Energiebilanzen. See also www.unpopular-truth.com/graphs.

3. Global Economic Implications

Higher energy costs, obviously and undisputedly, hurt less affluent people and stifle the development of poorer nations (Figure 3). Thus, a move to more expensive wind and solar energy has “human externalities”. The less fortunate will be “starved of” energy they cannot afford, leading to a literal reduction in life expectancy.

Source: Eschenbach 2017; Figure 38 in Book “The Unpopular Truth… about Electricity and the Future of Energy”

CO₂ pricing typically focuses only on emissions during operation,
neglecting significant environmental and economic costs
incurred during other stages or by the entire system.

For instance, the production of solar panels involves substantial energy and raw material inputs. Today there is not one single solar panel that is produced without coal. Similarly, the manufacturing and transportation processes of wind turbines and electric vehicles are energy-intensive and environmentally impactful. These stages are rarely accounted for in CO₂ pricing schemes, leading to a distorted view of their true environmental footprint. Also not accounted for are:

a) the required overbuild,
b) short and long duration energy storage,
c) backup facilities, or
d) larger network integration and transmission infrastructure.

Source: Schernikau, adapted from Figure 39 in Book “The Unpopular Truth… about Electricity and the Future of Energy“

Figure 4 illustrates how virtually all CO₂ pricing or taxation happens only at the stage of “operation” or combustion. How else could a “Net-Zero” label be assigned to a solar panel produced from coal and minerals extracted in Africa with diesel-run equipment, transported to China on a vessel powered by fuel-oil, and processed with heat and electricity from coal- or gas-fired power, partially using forced labour? All this energy-intensive activity, and not a single kilogram of CO₂ is taxed (see my recent article on this subject here). The same applies to wind turbines, hydro power, biofuel, or electric vehicles.

It turns out that a CO₂ tax is basically just a means to redistribute wealth, with the collecting agency (government) deciding where the funds go. Yes, a CO₂ tax does incentivize industry to reduce CO₂ emissions at their taxed operations, but only there, and this comes at a cost to economies, the environment, and often people. Any economist will confirm that pricing one externality but not others leads to economic distortions and, many would say, worse environmental impacts.

4. Alternative Approaches

Distortion, in this case, is just another word for unintended consequences for the environment, our economies, and people. Pricing CO₂ only during combustion, while failing to price methane, raw materials and recycling, inefficiency, embodied energy, energy shortages, land requirements, or the greening effect of CO₂, will cause undesirable outcomes. The world will be worse off economically and environmentally.

Protest if you must, but let me offer a simple example. The leaders of the Western world seem to have united around abandoning coal immediately, because it is the highest CO₂ emitter during combustion (UN 2019). Instead, demanding reliable and affordable energy, Bangladesh, Pakistan, Germany, and so many more nations have embraced liquefied natural gas (LNG) as a “bridge” fuel to replace coal. This “switch” is taking place despite questions about LNG’s impact on the environment, including the “climate”. This policy, supported by almost all large consultancies, indirectly caused blackouts affecting over 150 million people in Bangladesh in October 2022 (Reuters and Bloomberg).

So, the world is embarking on an expensive venture
to replace as much coal as possible with
more expensive liquefied natural gas (LNG).

On top of that, wind and solar are given preference. For example, the IEA recently confirmed that 2024 marks the first year in which investments in solar outstrip the combined investments in all other power generation technologies. As a result, energy costs go up, dependencies increase, lights go off, and, as per the UN’s IPCC, the “climate gets worse.”

Now imagine what would happen if we truly took into account all environmental and human impacts, both negative and positive, along the entire value chain of energy production, transportation, processing, generation, consumption, and disposal… we would all be surprised! You would look at fossil fuels, and certainly nuclear, through different eyes. Instead, we should simply incentivize resource and energy efficiency, which would truly make a positive difference!

From Schernikau et al. 2022.

5. Conclusion

No matter what your view on climate change is, pricing CO₂ is harmful… why?
Answer: … because pricing one externality but not others leads to economic and environmental distortions… causing human suffering.

That is why, even considering the entire value chain, I do not support any CO₂ pricing. That is why I fight for environmental and economic justice so we can, by avoiding energy starvation and resulting poverty, make a truly positive difference not only for ourselves but also for generations to come. We need INvestment in, not DIvestment from, 80% of our energy supply to rationalize our energy systems and to allow people and the planet to flourish.

I strongly support increasing adaptation efforts, which have already been successful in drastically reducing the death rate and GDP-adjusted financial damage from natural disasters over the past 100 years (OurWorldInData, Pielke 2022, Economist).

Experimental Proof Nil Warming from GHGs

Thomas Allmendinger is a Swiss physicist educated at the ETH Zurich whose practical experience is in the fields of radiology and elementary particle physics.  His complete biography is here.

His independent research and experimental analyses of greenhouse gas (GHG) theory over the last decade led to several published studies, including the latest summation, The Real Origin of Climate Change and the Feasibilities of Its Mitigation (2023), in the journal Atmospheric and Climate Sciences. The paper is a thorough and detailed discussion; here I provide a synopsis of its methods, findings and conclusions. Excerpts are in italics with my bolds and added images.

Abstract

The present treatise represents a synopsis of six important previous contributions of the author concerning atmospheric physics and climate change. Since this issue is influenced by politics like no other, and since the greenhouse doctrine, with CO2 as the culprit in climate change, is predominant, that theory has to be outlined, revealing its flaws and inconsistencies.

But beyond that, the author’s own contributions are the focus and are deeply discussed. The most eminent one concerns the discovery of the absorption of thermal radiation by gases, leading to warming-up, and implying a thermal radiation of gases which depends on their pressure. This delivers the final evidence that trace gases such as CO2 don’t have any influence on the behaviour of the atmosphere, and thus on the climate.

But the most useful contribution concerns the method which enables one to determine the solar absorption coefficient βs of coloured opaque plates. It delivers the foundations for modifying materials with respect to their capability for climate mitigation. Thereby, the main influence is due to the colouring, in particular of roofs, which should be painted, preferably light-brown (not white, for aesthetic reasons).

It must be clear that such a drive for brightening up the world would be the only chance of mitigating the climate, whereas the greenhouse doctrine, related to CO2, has to be abandoned. However, a global climate model with forecasts cannot be aspired to, since the problem is too complex and since several climate zones exist.

Background

The alleged proof of the correctness of this theory was delivered 25 years later by an article in the Scientific American of 1982 [4]. Therein, the measurements of C.D. Keeling were reported, which had been made at two remote locations, namely at the South Pole and in Hawaii, and according to which a continuous rise of the atmospheric CO2 concentration from 316 to 336 ppm had been detected between 1958 and 1978 (cf. Figure 1), suggesting a correlation between the CO2 concentration and the average global temperature.

But apart from the fact that these CO2 concentrations are quite minor (400 ppm = 0.04%), and that a constant proportion between the atmospheric CO2 concentration and the average global temperature could not be asserted over a longer period, it should be borne in mind that this conclusion rested on analogy, not causation, since only a temporal coincidence existed. Other influences that occurred simultaneously could have been at work, in particular increasing urbanisation, which altered the structure and coloration of large parts of the Earth’s surface.

However, this contingency was, and still is, categorically excluded. Only two possibilities are considered as explanations of climate change: either the anthropogenic influence due to CO2 production, or a natural one which cannot be influenced. A third influence, the one suggested here, namely that of colours, is a priori excluded, even though nobody denies the influence of colouring on the surface temperature of the Earth and the existence of urban heat islands, and although an increase in winds and storms cannot be explained by the greenhouse theory.

However, institutions aimed at mitigating climate change through political measures had already been founded well in advance. Thereby, climate change was equated with industrial CO2 production, although physical evidence for such a relation was not given; it was just a matter of belief. In this regard, in 1992 the UNFCCC (United Nations Framework Convention on Climate Change) was founded, supported by the IPCC (Intergovernmental Panel on Climate Change). Subsequently, under the auspices of the UN, numerous so-called COPs (Conferences of the Parties) were held: the first in 1995 in Berlin, the most popular in 1997 in Kyoto, and the most important in 2015 in Paris, leading to a climate convention signed by representatives of 195 nations. Thereby, numerous documents were compiled, altogether more than 40,000. But these documents didn’t fulfil the standards of scientific publications, since they were not peer reviewed.

Subsequently, intensive research activities emerged, accompanied by a flood of publications and culminating in several textbooks. Several climate models were presented, with different scenarios and diverging long-term forecasts. Thereby, the fact was disregarded that no global climate exists, but solely a plurality of climates, or rather of micro-climates and at best of climate zones, and that the Latin word “clima” (as well as the English word “clime”) means “region”. Moreover, an average global temperature is not really defined, and thus not measurable, because the temperature differences are immense, for instance with regard to geographic latitude, altitude, the distinct conditions over sea and over land, and not least between the seasons and between day and night. Moreover, the term “climate” encompasses rain and snow as well as winds and storms, which in the long term are not foreseeable. In particular, it should be realized that atmospheric processes are energetically determined, to which temperature contributes only a part.

2. The Historical Inducement for the Greenhouse Theory and Its Flaws

The scientific literature about the greenhouse theory is so extensive that it is difficult to find a clearly outlined and consistent description. Nevertheless, the publications of James E. Hansen [5] and of V. Ramanathan et al. [6] may be considered authoritative. Moreover, the textbooks [7] [8] and [9] are worth mentioning. Therein it is assumed that the Earth’s surface, which is heated up by solar irradiation, emits thermal radiation into the atmosphere, warming it up due to heat absorption by “greenhouse gases” such as CO2 and CH4. Thereby, counter-radiation occurs, which induces a so-called radiative transfer. This aspect gave rise to numerous theories (e.g. [10] [11] [12]). But the co-existence of theories is in contrast to the scientific principle that for each phenomenon solely one explanation or theory is admissible.

Even simple considerations may lead one to question this theory. For instance: given the present CO2 concentration of approx. 400 ppm (parts per million) = 0.04%, one should wonder how the temperature of the atmosphere can depend on such an extremely low gas amount, and why this component can be the predominant or even the sole cause of the atmospheric temperature. This would actually mean that the temperature would be near the absolute zero of −273°C if the air contained no CO2 or other greenhouse gases.

Indeed, no special physical knowledge is needed to realize that this theory cannot be correct. However, the fact that it has settled in the public mind and become an important political issue requires a more detailed examination of the measuring methods and results that delivered the foundations of this theory, and of how misinterpretations arose. Two points have to be considered in particular: The first concerns the photometric measurements on gases in the electromagnetic range of thermal radiation, initially carried out by Tyndall in the 1860s [13] and expanded to IR-measurements evaluated by Plass some ninety years later [14]. The second concerns the application of the Stefan/Boltzmann law to the Earth-atmosphere system, first made by Arrhenius in 1896 [2] and more or less adopted by modern atmospheric physics. Both approaches are deficient and would call the greenhouse theory into question even without the author's own approaches.

2.1. The Photometric and IR-Measurement Methods for CO2

By varying the wavelength and measuring the respective absorption, the spectrum of a substance can be evaluated. This IR-spectroscopic method is widely used to characterize organic chemical substances and chemical bonds, usually in solution. But even there the method is not suited for quantitative measurements, i.e. the absorption of the IR-active substance is not proportional to its concentration as the Beer-Lambert law predicts. This is probably even less the case in the gaseous phase and, all the more, at the high pressures that were applied in order to imitate the large atmospheric path lengths of several (up to 10) kilometres. In doing so, it is disregarded that the pressure of the atmosphere depends on the altitude above sea level, which prohibits the assumption of a linear progression.
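The Beer-Lambert law invoked in this passage can be sketched as follows. This is a minimal illustration of the law itself, not of the author's measurements, and all parameter values are purely hypothetical:

```python
import math

def transmitted_intensity(i0, epsilon, conc, path):
    """Beer-Lambert law: I = I0 * exp(-epsilon * conc * path).

    i0      -- incident intensity
    epsilon -- absorption coefficient (per unit concentration and length)
    conc    -- concentration of the absorbing species
    path    -- optical path length
    """
    return i0 * math.exp(-epsilon * conc * path)

# Doubling the concentration does NOT halve the transmitted intensity;
# the decay is exponential, which is the nonlinearity the text alludes to.
print(transmitted_intensity(1.0, 1.0, 1.0, 1.0))  # about 0.368 (= e^-1)
print(transmitted_intensity(1.0, 1.0, 2.0, 1.0))  # about 0.135 (= e^-2)
```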

Moreover, it is disregarded that with IR spectrographs the effective radiation intensity is not known, and that the atmosphere is a gas mixture in which CO2 is present only to a small extent, whereas pure CO2 was used for the spectroscopic measurements. Nevertheless, textbooks of atmospheric physics frequently cite the Beer-Lambert law, yet without delivering concrete numerical results for the absorbed radiation.

In both cases solely the degree of absorption of the radiation was determined, i.e. the decrease of the radiation intensity on its passage through a gas, but never the gas's heating-up, that is, its temperature increase. Instead, it was assumed that a gas is necessarily warmed when it absorbs thermal radiation. On this assumption, pure air, or rather a 4:1 mixture of nitrogen and oxygen, would not be expected to warm when thermally irradiated, since it is IR-spectroscopically inactive, in contrast to pure CO2.

However, no physical formula exists which would allow such an effect to be calculated, and no empirical evidence for it has been given so far. Rather, the measurements recently performed by the author delivered converse, surprising results.

2.2. The Impact of Solar Radiation onto the Earth Surface and Its Reflection

Besides, a further error is implicated in the usual greenhouse theory. It arises from the fact that the atmosphere is only partly warmed by direct solar radiation. In addition, it is warmed indirectly, namely via the Earth's surface, which is heated by solar irradiation and transmits the absorbed heat to the atmosphere by thermal conduction and by thermal radiation; air convection also contributes a considerable part. This process, called Anthropogenic Heat Flux (AHF), has recently been discussed by Lindgren [16]. Here, however, a more fundamental view is outlined.

The thermal radiation corresponds to the radiative emission of a so-called “black body”. Such a body is defined as one which entirely absorbs electromagnetic radiation in the range from IR to UV light. Likewise, it emits electromagnetic radiation the more strongly, the higher its temperature. Its radiative behaviour is formulated by the law of Stefan and Boltzmann, Φ = σ·T⁴ (σ = 5.67 × 10⁻⁸ W·m⁻²·K⁻⁴). According to this law, the radiated power Φ of a black body is proportional to the fourth power of its absolute temperature. Usually, this power is related to area, exhibiting the dimension W/m2.
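The law stated in this paragraph can be evaluated numerically; a minimal sketch using the standard value of the Stefan-Boltzmann constant (the example temperature is illustrative, not from the text):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power(t_kelvin):
    """Stefan/Boltzmann law: area-specific radiated power of a black body."""
    return SIGMA * t_kelvin ** 4

# A black body at 288 K (roughly a mild surface temperature)
# radiates about 390 W/m^2.
print(round(radiated_power(288.0), 1))
```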

This formula does not allow any statement about the wavelength or frequency of the emitted light. That is only possible by means of Max Planck's formula, published in 1900. According to it, the higher the temperature, the higher the frequencies of the emitted light tend to be. At low temperatures, only heat is emitted, i.e. IR-radiation. At higher temperatures the body begins to glow: first red, later white, a mixture of different colours. Finally, UV-radiation emerges. The emission spectrum of the sun is in quite good accordance with Planck's emission spectrum for approx. 6000 K.

Black CO2 absorption lines are not to scale.
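Planck's result mentioned above is often summarized by Wien's displacement law, which locates the wavelength of maximum emission. A small illustrative sketch (the second input temperature is an example, not a value from the text):

```python
WIEN_B = 2.898e-3  # Wien displacement constant, m*K

def peak_wavelength_nm(t_kelvin):
    """Wavelength of maximum spectral emission (Wien's displacement law),
    converted from metres to nanometres."""
    return WIEN_B / t_kelvin * 1e9

print(peak_wavelength_nm(6000.0))  # about 483 nm: visible light, as for the sun
print(peak_wavelength_nm(288.0))   # about 10,000 nm: far infrared
```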

This model can be applied to the Earth's surface, considering it as a coloured opaque body: On the one hand, with respect to its thermal emission, it behaves like a black body fulfilling the Stefan/Boltzmann law. On the other hand, it absorbs only a part βs of the incident solar light, converting it into heat, while the complementary part is reflected. However, the intensity of the incident solar light at the Earth's surface, Φsurface, is not identical with its extra-terrestrial intensity beyond the atmosphere, but depends on the altitude above sea level, since the atmosphere absorbs part of the sunlight. Remarkably, the atmosphere behaves like a black body too, but solely with respect to emission: on one side it radiates inwards to the Earth's surface, and on the other side it radiates outwards in the direction of the rest of the atmosphere.

However, this method implies three considerable snags:
•  Firstly, TEarth means the constant limiting temperature of the Earth's surface which would be attained if the sun shone constantly onto the same parcel with the same intensity. But, except for thin plates thermally insulated at the bottom and at the sides, this is never the case, since the position of the sun changes permanently.

•  Secondly, this formula does not allow any statement about the rate of the warming-up process, which also depends on the heat capacity of the involved plate. That is only possible using the author's approach (see Chapter 3). Nevertheless, it is often attempted (e.g. in [35]), not least within radiative-transfer approaches.

•  Thirdly, it is in principle impossible to determine absolute values of the solar reflection coefficient αs with an albedometer or a similar apparatus, because the intensity of the incident solar light is independent of the distance to the surface, whereas the intensity of the reflected light depends on it. Thus the values so obtained depend on the distance from the Earth's surface at which the apparatus is positioned; they are not unambiguous but only relative.
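For orientation, combining the absorption coefficient βs with the Stefan/Boltzmann law gives the idealized limiting temperature the first bullet refers to. This sketch assumes a pure radiative balance with no conduction or convection, which is a simplification and not the author's full model:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def limiting_temperature(beta_s, flux):
    """Idealized limiting temperature of an irradiated plate.

    Pure radiative balance: beta_s * flux = SIGMA * T^4,
    where beta_s = 1 - alpha_s is the solar absorption coefficient.
    """
    return (beta_s * flux / SIGMA) ** 0.25

# Perfect absorber (beta_s = 1) under the 1040 W/m^2 used in the
# plate experiments: about 368 K, i.e. roughly 95 degrees C.
print(round(limiting_temperature(1.0, 1040.0), 1))
```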

In the modern approach of Hansen et al. [5] the Earth is treated as a single coherent black body, disregarding its division into a solid and a gaseous part, and thus disregarding the contact area between the Earth's surface and the atmosphere where the reflection of the sunlight takes place. As a consequence, in Equation (4b) the expression with Tair disappears, while a total Earth temperature appears which is neither definable nor determinable. This approach has been widely adopted in the textbooks, even though it is wrong (see also [15]).

Altogether, the fact was neglected that the proportionality of the radiation intensity to the fourth power of the absolute temperature is valid only when a constant equilibrium is attained. In contrast, the method described below enables the direct detection of the colour-dependent solar absorption coefficient βs = 1 − αs using well-defined plates. Furthermore, the time/temperature courses are mathematically modelled up to the limiting temperatures. Finally, relative field measurements are possible based on these results.

3. The Measurement of Solar Absorption-Coefficients with Coloured Plates

In the lab-like method described here and published in [20], not the reflected but the absorbed solar radiation was determined, by measuring the temperature courses of coloured square plates (10 × 10 × 2 cm) when sunlight of known intensity fell vertically onto them. The temperatures of the plates were determined with mercury thermometers, while the intensity of the sunlight was measured with an electronic “Solarmeter” (KIMO SL 100). The plates were embedded in Styrofoam and covered with a thin transparent foil acting as an outer window in order to minimize erratic cooling by atmospheric turbulence (Figure 5). Their heat capacities were taken from literature values. The colours as well as the plate material were varied. Aluminium was used as a reference material, being favourable due to its high heat capacity, which entails a low heating rate and a homogeneous heat distribution. For comparison, additional measurements were made with wooden plates, bricks and natural stones. To enable a permanently optimal orientation towards the sun, six plate modules were positioned on an adjustable panel (Figure 6).

The evaluation of the curves in Figure 7 yielded the colour-specific solar absorption coefficients βs rendered in Figure 9. They were independent of the plate material. Remarkably, the value for green was relatively high.

Figure 7. Warming-up of aluminium plates at 1040 W/m2 [20].

If the sunlight irradiation and thus the warming-up process were continued, constant limiting temperatures would finally be attained. With 20 mm thick aluminium plates, however, the time needed would be too long, exceeding the constantly available sunshine period of a day. Instead, separate cooling-down experiments were made, allowing a mathematical modelling of the whole process including the determination of the limiting temperatures.

Figure 10. Cooling-down of different materials (in brackets: ambient temperature) [20]. al = aluminium 20 mm; st = stone 20.5 mm; br = brick 14.5 mm; wo = wood 17.5 mm.
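The cooling-down experiments described above lend themselves to a Newton-type exponential model. The sketch below uses purely hypothetical parameter values (starting temperature, ambient temperature, time constant); it is not taken from [20]:

```python
import math

def cooling_curve(t_start, t_ambient, tau_s, t_seconds):
    """Newton-type cooling: T(t) = T_amb + (T_0 - T_amb) * exp(-t / tau).

    tau_s is the time constant C / (B * A): heat capacity over the product
    of heat-transfer coefficient and surface area. Larger heat capacity
    (e.g. thick aluminium) means a larger tau and slower cooling.
    """
    return t_ambient + (t_start - t_ambient) * math.exp(-t_seconds / tau_s)

# Hypothetical plate: starts at 60 C in 20 C surroundings, tau = 15 min.
for minutes in (0, 15, 30, 60):
    print(minutes, round(cooling_curve(60.0, 20.0, 15 * 60, minutes * 60), 1))
```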

These limiting temperature values are in good accordance with the empirical values reported in [24] and with the Stefan/Boltzmann values. As is obvious from the respective diagrams in Figure 11 and Figure 12, the limiting temperatures are independent of the plate materials, whereas the heating rates strongly depend on them. In principle, it is also possible to model combined heating-up and cooling-down processes [20]. However, this presumes constant environmental conditions which normally do not exist.

4. Thermal Gas Absorption Measurements

If the warming-up behaviour of gases is to be determined by temperature measurements, interference by the walls of the gas vessel has to be taken into account, since they exhibit a significantly higher heat capacity than the gas, which implies a slower warming-up rate. Since solid materials absorb thermal radiation more strongly than gases do, the risk exists that the walls of the vessel are directly warmed by the radiation and subsequently transfer the heat to the gas. Finally, even the thin glass walls of the thermometers may disturb the measurements by absorbing thermal radiation.

For these reasons, square tubes with a relatively large profile (20 cm) were used, consisting of 3 cm thick Styrofoam plates and covered at the ends with thin plastic foils. In order to measure the temperature course along the tube, mercury thermometers were mounted at three positions (bottom, middle and top), their tips covered with aluminium foil. The test gases were supplied from steel cylinders equipped with reducing valves. They were introduced through a connector over approx. one hour, because the tube was neither gastight nor sturdy enough for evacuation. The filling process was monitored by means of a hygrometer, since the air to be replaced was slightly humid. Afterwards, the tube was optimized by attaching adhesive foils and thin aluminium foils (see Figure 13). The equipment and the results are reported in [21].

Figure 13. Solar-tube, adjustable to the sun [21].

The initial measurements were made outdoors with twin tubes in the presence of solar light, one tube filled with air and the other with carbon dioxide. The temperature increased within a few minutes by approx. ten degrees until constant limiting temperatures were attained, simultaneously at all positions. Surprisingly, this was the case in both tubes, thus also in the tube filled with ambient air. This result alone already delivered the proof that the greenhouse theory cannot be true. Moreover, it prompted a more thorough investigation of the phenomenon by means of artificial, better-defined light.

Figure 14. Heat-radiation tube with IR-spot [21].

Accordingly, the subsequent experiments used IR-spots with wattages of 50 W, 100 W and 150 W, as normally employed for terraria (Figure 14). The 150 W IR-spot in particular led to a considerably higher temperature increase of the enclosed gas than sunlight did, since its ratio of thermal radiation was higher. Thereby, variable factors such as the nature of the gas could be evaluated.

The results with IR-spots on different gases (air, carbon dioxide, and the noble gases argon, neon and helium) yielded essential insights. In each case, the irradiated gas warmed until a stable limiting temperature was attained. Analogously to the irradiated coloured solid plates, the temperature increased until the equilibrium state was reached at which the heat absorption rate equalled the heat emission rate.

Figure 15. Time/temperature-curves for different gases [21] (150 W-spot, medium thermometer-position).

As evident from the diagram in Figure 15, the initial observation made with sunlight was confirmed: pure carbon dioxide warmed up to almost the same degree as air did (ambient air differing only scarcely from a 4:1 mixture of nitrogen and oxygen). Moreover, noble gases absorb thermal radiation, too. As outlined subsequently, a theoretical explanation for this could be found.

Interpretation of the Results

Comparison of the results obtained with the IR-spots, on the one hand, and with solar radiation, on the other, corroborated the conclusion that comparatively short-wave IR-radiation was involved (namely between 0.9 and 1.9 μm). However, subsequent measurements with a hotplate (<90˚C), placed at the bottom of the heat-radiation tube ([15], Figure 16), showed that long-wave thermal radiation (as expected from bodies with lower temperatures such as the Earth's surface) also induces a temperature increase in air and in carbon dioxide, cf. Figure 17.

Thus, the absorption effect discovered here for gases proceeds over a relatively wide wavelength range, in contrast to the IR-spectroscopic measurements where only narrow absorption bands appear. This effect is not exceptional: it occurs in all gases, including noble gases, and leads to a significant temperature increase, even though it is spectroscopically not detectable. This temperature increase overlays any temperature increase due to the specific IR-absorption, since the intensity ratio of the latter is very small.

This may be explained as follows: In any case, an oscillation of particles induced by thermal radiation plays a part. But whereas in the case of specific IR-absorption the nuclei within the molecules oscillate along the chemical bond (which must be polar), in the case relevant here the electron shell within the atoms, or rather the electron orbit, oscillates, carrying oscillation energy. Obviously, this oscillation energy can be converted into kinetic translational energy of the entire atoms, which correlates with the gas temperature, and vice versa.
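The link drawn here between oscillation energy and kinetic translational energy rests on kinetic gas theory, in which the temperature fixes the mean kinetic energy of the particles. A small illustrative computation of the resulting root-mean-square speed (argon at 300 K chosen as an example; the values are not from the text):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro constant, 1/mol

def v_rms(molar_mass_kg, t_kelvin):
    """Root-mean-square speed from (1/2) m <v^2> = (3/2) k_B T."""
    m = molar_mass_kg / N_A  # mass of one particle, kg
    return math.sqrt(3 * K_B * t_kelvin / m)

# Argon (0.040 kg/mol) at 300 K: about 430 m/s.
print(round(v_rms(0.040, 300.0)))
```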

5. The Altitude-Paradox of the Atmospheric Temperature

The statement that it is colder in the mountains than in the lowlands is trivial. Not trivial is the attempt to explain this phenomenon, since the reason is not readily evident. The usual explanation is that rising air cools as it expands due to the decreasing air pressure. However, this cannot hold for plateaus far away from hillsides which engender ascending air streams. It appears virtually paradoxical in view of the fact that the intensity of solar irradiation is much greater in the mountains than in the lowlands, in particular with respect to its UV-component. The intensity decrease towards the lowlands is due to the scattering and absorption of sunlight within the atmosphere, not only in the IR-range but also across the whole remaining spectral region. If such scattering, known as Rayleigh scattering, did not occur, the sky would not be blue but black.

However, the direct absorption of sunlight is not the only factor determining the temperature of the atmosphere. Its warming via the Earth's surface, which is heated by absorbed solar irradiation, is even more important. The heat transfer occurs partly by heat conduction and air convection, and partly by thermal radiation. But an additional factor has to be regarded: the thermal radiation of the atmosphere itself. It runs on the one hand towards the Earth (as counter-radiation), and on the other hand towards Space. Thus the situation becomes quite complicated, all the more so as a formal treatment based on the Stefan/Boltzmann relation would require limiting, equilibrated temperature conditions. In particular, that relation does not reveal any influence of the atmospheric pressure, which obviously plays a considerable part.

In order to study the dependency on the atmospheric pressure, it would be desirable to vary only the pressure while the other terms remain constant. In practice, however, the pressure can only be varied by changing the altitude of the measuring station above sea level, which also changes the intensity of the sunlight and the ambient atmospheric temperature. The measurements reported here were made at two locations in Switzerland: at Glattbrugg (close to Zürich), 430 m above sea level, and at the top of the Furka pass, 2430 m above sea level. Using the barometric height formula, the respective atmospheric pressures were approx. 0.948 and 0.748 bar. At each location, two measurements were made within the same span of time.
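The barometric height formula mentioned here can be sketched as follows. With standard-atmosphere assumptions (sea-level pressure 1.013 bar, scale height of roughly 8.4 km), it yields values in the same range as the reported 0.948 and 0.748 bar, which presumably reflect the actual conditions during the measurements:

```python
import math

P0 = 1.01325           # standard sea-level pressure, bar
SCALE_HEIGHT = 8400.0  # isothermal scale height R*T/(M*g), metres (assumed)

def pressure_bar(altitude_m):
    """Isothermal barometric height formula: p = p0 * exp(-h / H)."""
    return P0 * math.exp(-altitude_m / SCALE_HEIGHT)

print(round(pressure_bar(430.0), 3))   # Glattbrugg, about 0.96 bar
print(round(pressure_bar(2430.0), 3))  # Furka pass, about 0.76 bar
```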

Figure 18. Comparison of the temperature courses during two measurements [24] (continuous lines: Glattbrugg; dotted lines: Furka).

Figure 18 renders the data of one measurement pair. Obviously, the limiting temperatures were not ideally attained within 90 minutes. Moreover, the evaluation of the data did not provide strictly invariant values for A. But this is reasonable in view of the fact that the sunlight intensity was not entirely constant during that period, and that its spectrum depends on the altitude above sea level. Nevertheless, an approximate value of 22 W·m−2·bar−1·K−0.5 could be found for the atmospheric emission constant A.
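The units of A (W·m−2·bar−1·K−0.5) imply an emission intensity of the form Φ = A·p·√T. The sketch below simply evaluates that relation as read off the units; the exact functional form in [24] may differ, and the site temperatures used here are assumptions, not values from the text:

```python
import math

A_EMISSION = 22.0  # reported atmospheric emission constant, W m^-2 bar^-1 K^-0.5

def atmospheric_emission(pressure_bar, t_kelvin):
    """Emission intensity implied by the units of A: Phi = A * p * sqrt(T)."""
    return A_EMISSION * pressure_bar * math.sqrt(t_kelvin)

# Illustrative conditions at the two sites (temperatures assumed):
print(round(atmospheric_emission(0.948, 293.0)))  # Glattbrugg, about 357 W/m^2
print(round(atmospheric_emission(0.748, 283.0)))  # Furka, about 277 W/m^2
```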

These findings indeed confirm that a greenhouse effect of a kind occurs, since the atmosphere thermally radiates back to the Earth's surface. But this radiation has nothing to do with trace gases such as CO2. It rather depends on the atmospheric pressure, which diminishes at higher altitudes.

If the oxygen content of the air were considerably reduced, a general reduction of the atmospheric pressure and, as a consequence, of the temperature would follow. This may be an explanation for the occurrence of glacial periods. However, other explanations are possible, in particular a temporary decrease of solar activity.

Overall it can be stated that climate change cannot be explained by ominous greenhouse gases such as CO2, but mainly by artificial alterations of the Earth's surface, particularly in urban areas, through darkening and through enlargement of the surface (so-called roughness). These urban alterations are due not least to the enormous global population growth, but also to the character of modern buildings, which tend to grow ever higher and to employ alternative materials such as concrete and glass. As a consequence, remedial measures should focus on these factors, first drawing on the previous work and then applying the method presented here.

7. Conclusions

The author's work summarized here concerns atmospheric physics with respect to climate change, comprising three specific and interrelated points based on several previous publications: the first is a critical discussion and refutation of the customary greenhouse theory; the second outlines the method for measuring the thermal-radiative behaviour of gases; and the third describes a lab-like method for characterizing the solar-reflective behaviour of solid opaque bodies, in particular for determining the colour-specific solar absorption coefficients.

As to the first point, three main flaws were revealed:

•  Firstly, the insufficiency of photometric methods for determining the heating-up of gases in the presence of thermal radiation;
•  Secondly, the lack of a causal relationship between the CO2-concentration in the atmosphere and the average global temperature: the argument that the simultaneous empirical increase of the CO2-concentration and of the global temperature proves a causal relationship establishes, at best, a parallel one; and
•  Thirdly, the inadmissible application of the Stefan/Boltzmann law to the entire Earth (including the atmosphere) versus Space, instead of to the boundary between the Earth's surface and the atmosphere.

As to the second point, the discovery has to be taken into account that every gas is warmed when thermally irradiated, even noble gases, attaining a limiting temperature at which the absorption of radiation is in equilibrium with the emitted radiation. In particular, pure CO2 behaves similarly to pure air. Applying kinetic gas theory, a dependency of the emission intensity on the pressure, on the square root of the absolute temperature, and on the particle size could be found, and theoretically explained by the oscillation of the electron shell.

As to the third point, not only was a lab-like measuring method for the colour-dependent solar absorption coefficient βs developed, but also a mathematical modelling of the time/temperature course when coloured opaque plates are irradiated by sunlight. The (colour-dependent) warming-up and the (colour-independent) cooling-down are detected separately. Likewise, a limiting temperature occurs at which the intensity of the absorbed solar light is identical with the intensity of the emitted thermal radiation. In the absence of wind convection, the so-called heat transfer coefficient B is invariant; its value was empirically evaluated, amounting to approx. 9 W·m−2·K−1.

Finally, the theoretically suggested dependency of the atmospheric thermal radiation intensity on the atmospheric pressure could be empirically verified by measurements at different altitudes, namely at Glattbrugg (430 m above sea level) and on the top of the Furka pass (2430 m above sea level), both in Switzerland, delivering a so-called atmospheric emission constant A ≈ 22 W·m−2·bar−1·K−0.5. This explained the altitude-paradox of the atmospheric temperature and delivered the definitive evidence that the atmospheric behaviour, and thus the climate, does not depend on trace gases such as CO2. The atmosphere does indeed reradiate thermally, producing something similar to a greenhouse effect, but this effect is due solely to the atmospheric pressure.

Therefore, and also considering the results of Seim and Olsen [23], the customary greenhouse doctrine assuming CO2 as the culprit in climate change has to be abandoned and replaced by the concept recommended here of improving the albedo by brightening parts of the Earth's surface, particularly in cities; otherwise fatal consequences will be risked.

Figure 24. Up-winds induced by an urban heat island.

Polar Bears, Dead Coral and Other Climate Fictions

Bjorn Lomborg calls out climate alarmist nonsense in his WSJ article Polar Bears, Dead Coral and Other Climate Fictions.  Excerpts in italics with my bolds and added images.

Activists’ tales of doom never pan out,
but they leave us poorly informed and feed bad policy.

Whatever happened to polar bears? They used to be all climate campaigners could talk about, but now they’re essentially absent from headlines. Over the past 20 years, climate activists have elevated various stories of climate catastrophe, then quietly dropped them without apology when the opposing evidence becomes overwhelming. The only constant is the scare tactics.

Protesters used to dress up as polar bears. Al Gore’s 2006 film, “An Inconvenient Truth,” depicted a sad cartoon polar bear floating away to its death. The Washington Post warned in 2004 that the species could face extinction, and the World Wildlife Fund’s chief scientist claimed some polar bear populations would be unable to reproduce by 2012.

Then in the 2010s, campaigners stopped talking about them.

After years of misrepresentation, it finally became impossible to ignore the mountain of evidence showing that the global polar-bear population has increased substantially. Whatever negative effect climate change had was swamped by the reduction in hunting of polar bears. The population has risen from around 12,000 in the 1960s to about 26,000.

The same thing has happened with activists’ outcry about Australia’s Great Barrier Reef. For years, they shouted that the reef was being killed off by rising sea temperatures. After a hurricane extensively damaged the reef in 2009, official Australian estimates of the percent of reef covered in coral reached a record low in 2012. The media overflowed with stories about the great reef catastrophe, and scientists predicted the coral cover would be reduced by another half by 2022. The Guardian even published an obituary in 2014.

The percentage of coral cover in the northern and central Great Barrier Reef has increased.(Supplied: Australian Institute of Marine Science)

The latest official statistics show a completely different picture. For the past three years the Great Barrier Reef has had more coral cover than at any point since records began in 1986, with 2024 setting a new record. This good news gets a fraction of the coverage that the panicked predictions did.

More recently, green campaigners were warning that small Pacific islands would drown as sea levels rose. In 2019 United Nations Secretary-General António Guterres flew all the way to Tuvalu, in the South Pacific, for a Time magazine cover shot. Wearing a suit, he stood up to his thighs in the water behind the headline “Our Sinking Planet.” The accompanying article warned the island—and others like it—would be struck “off the map entirely” by rising sea levels.

Hundreds of Pacific Islands are growing, not shrinking. No habitable island got smaller.

About a month ago, the New York Times finally shared what it called “surprising” climate news: Almost all atoll islands are stable or increasing in size. In fact, scientific literature has documented this for more than a decade. While rising sea levels do erode land, additional sand from old coral is washed up on low-lying shores. Extensive studies have long shown this accretion is stronger than climate-caused erosion, meaning the land area of Tuvalu and many other small islands is increasing.

Today, killer heat waves are the new climate horror story. In July President Biden claimed “extreme heat is the No. 1 weather-related killer in the United States.”

He is wrong by a factor of 25. While extreme heat kills nearly 6,000 Americans each year, cold kills 152,000, of which 12,000 die from extreme cold. Even including deaths from moderate heat, the toll comes to less than 10,000. Despite rising temperatures, age-standardized extreme-heat deaths have actually declined in the U.S. by almost 10% a decade and globally by even more, largely because the world is growing more prosperous. That allows more people to afford air-conditioners and other technology that protects them from the heat.

My Mind is Made Up, Don’t Confuse Me with the Facts. H/T Bjorn Lomborg, WUWT

The petrified tone of heat-wave coverage twists policy illogically. Whether from heat or cold, the most sensible way to save people from temperature-related deaths would be to ensure access to cheap, reliable electricity. That way, it wouldn’t be only the rich who could afford to keep safe from blistering or frigid weather. Unfortunately, 

Activists do the world a massive disservice by refusing to acknowledge
facts that challenge their intensely doom-ridden worldview.

There is ample evidence that man-made emissions cause changes in climate, and climate economics generally finds that the costs of these effects outweigh the benefits. But the net result is nowhere near catastrophic. The costs of all the extreme policies campaigners push for are much worse. All told, politicians across the world are now spending more than $2 trillion annually—far more than the estimated cost from climate change that these policies prevent each year.

Yes, those are trillions of US dollars they are projecting to spend.

Scare tactics leave everyone—especially young people—distressed and despondent. Fear leads to poor policy choices that further frustrate the public. And the ever-changing narrative of disasters erodes public trust.

Telling half-truths while piously pretending to “follow the science” benefits activists with their fundraising, generates clicks for media outlets, and helps climate-concerned politicians rally their bases. But it leaves all of us poorly informed and worse off.

Mr. Lomborg is president of the Copenhagen Consensus, a visiting fellow at Stanford University’s Hoover Institution and author of “Best Things First: The 12 Most Efficient Solutions for the World’s Poorest and our Global SDG Promises.”

See Also:

You Won’t Survive “Sustainability” Agenda 2024