Climate-Obsessed Pols Blew Canada’s Opportunity

Jamie Sarkonak summarizes the bogus start to Canada's federal election in his National Post article Liberals pledge to make Canada a superpower after years of preventing it.  Excerpts in italics with my bolds and added images.

A tattered Canadian flag is shown on top of a building in downtown Calgary on Friday, Jan. 17, 2025 where the U.S. Consulate is located. Photo by Jim Wells/Postmedia

 

Sunday’s edition of the Financial Times included the oft-made observation that Canada is brimming with potential, and the oft-made conclusion that this country would be much better off if it simply developed its God-given gifts.

The article, Unlocking Canada’s Superpower Potential by Tej Parikh, made the bullish case for this country’s future prospects: Canada is geographically huge and loaded with natural resources — on paper, at least, it has the makings of an actual global superpower.

“‘Canada absolutely has potential to be a global superpower,’ but the nation has lacked the visionary leadership and policy framework to capitalise on its advantages.”

It was, with gentle vagueness, a condemnation of the federal Liberal government and what is now being called Canada’s “lost decade”: a period of 10 years in which the current government ratcheted up onerous environmental and Indigenous-consultation requirements and, where ministerial approvals are concerned, delayed decisions, all geared at keeping undeveloped parts of Canada in their natural state.

Terms like “circular economy” and “just transition” are the Liberal synonyms for this no-growth agenda, which has delivered us a fraction of a percentage point of GDP-per-capita growth from 2014 to the end of 2024, a period in which peer countries have managed double-digit growth.

For anyone who missed out on all the bad governance robbing Canadians of superpower prosperity, this brief video exposes the crimes against the citizenry.  For those who prefer reading, I provide below a transcript from the closed captions.

Transcript

This is Alberta, the fourth-largest province and home to about 4.6 million people. It ranks third in GDP, just behind Quebec, and first in GDP per capita, primarily off the back of oil and gas extraction. While the discovery of oil in the first half of the 20th century brought Canada riches, for reasons ranging from the political to the economic it never reached its full potential as an energy superpower, and Canadians as a whole lose out. We’ll be diving into how its energy policies have evolved and the path it is on, whether for natural gas, nuclear, hydrogen and more.

Canada has the third-largest proven oil reserves and, by most estimates, is in the top 20 in terms of natural gas reserves. It is a top-10 producer of oil and gas, meaning it is engaged in the extracting, processing and supplying of these resources for domestic production.

Natural Gas

For natural gas exports it is in the top six, all of which goes to the US via pipelines. Exporting across water requires investment to build liquefied natural gas, or LNG, facilities to cool the gas into a liquid state in a process called liquefaction. In 2024 the first export terminal, called LNG Canada, will finally be completed in Kitimat, B.C., with gas coming through the Coastal GasLink pipeline, set to complete after five years of construction and a price tag that jumped from $6.6 billion to $14.5 billion.

But don’t expect other facilities to be constructed anytime soon. On February 9th, 2022, two weeks before the Russian invasion of Ukraine, the federal and Quebec governments rejected approval of an LNG plant in Saguenay that would have allowed for the export of western natural gas to European markets.

They cited increased greenhouse gas emissions and lack of social responsibility.

While most of the natural gas is located in northern Alberta and B.C. in the Montney formation, there is also gas in the Atlantic provinces. However, New Brunswick, Newfoundland and Labrador, and Nova Scotia have all banned the fracking process used for shale gas development over safety fears, thereby losing out on tens of billions in economic potential. Ironically, the same provinces import a lot of natural gas extracted in the US through fracking. Quebec also has natural gas resources, but in April 2022 it banned all oil and gas extraction in the province.

This means not only are pipelines from western Canada rejected from going through Quebec, but natural gas extraction and export facilities in these provinces have been rejected as well. Demand not met by Canada will be filled by other countries that might not share the same values nor care about the environment, with the jobs and millions in royalties and taxes going elsewhere. Since 2011, of the 18 proposed LNG export projects, including five on the East Coast, only the Kitimat project has proceeded, with the others cancelled, blocked or abandoned.

Meanwhile the US in the same time frame has built seven LNG facilities, has five more under construction and has approved 15 more, enabling it to go from a net importer to a top-three exporter in the world. Australia has 10 LNG facilities, the majority built in the 2010s, helping to satisfy energy demand from Asian countries and to help them move away from coal. Qatar too has benefited greatly from extracting its resources as European countries look for alternatives to Russian gas.

These three countries have all signed decades-long deals to supply natural gas. Yet when Japan, South Korea, and Germany showed interest in Canadian LNG, the Prime Minister said, “There has never been a strong business case.” While critics point out that natural gas is a fossil fuel contributing to greenhouse gas emissions, it emits 40% less than coal and 30% less than oil.

Nuclear Energy

We can’t talk about energy policy without mentioning nuclear, because it does not emit greenhouse gases while being a reliable source of energy, not dependent on the wind blowing or the sun shining. Currently nuclear supplies 58% of Ontario’s electricity needs and 15% nationwide; all but one of Canada’s 19 nuclear reactors are located in Ontario, the exception being in New Brunswick. No new reactors have been completed since 1993. Meanwhile coal is still used to generate 6% of Canada’s electricity needs, despite the country having the third-largest uranium reserves, the fuel needed for reactors.

But on September 19th, 2023, Canada did reach a $3 billion deal to finance nuclear power . . . in Romania. In fairness, this deal does support the export of made-in-Canada CANDU-style reactors, an industry in which Canada has historically been a leader. Any discussion should include nuclear, as one of the trends in the industry is small modular reactors, or SMRs, which should be easier to manufacture and transport, enabling their use in remote regions.

Hydrogen

Another trend the federal government has prioritized in the 2023 budget relates to hydrogen. $16.4 billion has been allocated over five years for “clean” technologies and “clean” hydrogen tax credits, which are subsidies for the cost of setting up equipment to produce green hydrogen. When German Chancellor Olaf Scholz arrived in Canada in August 2022 asking for LNG, Canada instead offered green hydrogen, created by wind turbines generating electricity to perform electrolysis, splitting water to produce hydrogen. Green hydrogen is both inefficient and expensive to produce, meaning there is little business case for it without subsidies, since more than 99% of hydrogen is currently produced using fossil fuels. While green hydrogen will likely play a role in industrial processes, such as replacing coal used in steel production or creating ammonia for fertilizer production, its role in transportation is likely negligible. Furthermore, using hydroelectricity, nuclear or natural gas to create hydrogen plays into Canada’s strengths in a way that solar or wind does not, as we’ll see shortly.

Solar and Wind

A big part of Canada’s plan for net-zero emissions by 2050 involves solar and wind energy, yet one of the biggest beneficiaries of that shift would be China, given its dominance in the clean-energy space, whether solar panels, wind turbines or EVs. From mineral extraction to processing, refining and manufacturing, there is much demand for critical minerals like copper, cobalt, nickel, lithium and rare earth elements, as well as chromium, zinc and aluminum. China owns stakes in many mines around the world, including Canadian ones, extracting these minerals to control the supply chain. According to 2022 data from the International Energy Agency, its share of refining is 35% for nickel, 60% for lithium, 70% for cobalt and a whopping 90% for rare earths.

This dependence on one country means the power to squeeze supply or raise prices at any moment, which is a big reason why on August 16th, 2022, the Biden administration signed the ironically named Inflation Reduction Act, which provides $369 billion of funding for clean energy projects. The intention is not only to reshore to the US but also to nearshore or friendshore to allies like Canada, whether in the mining of critical minerals or in manufacturing.

Canada acted decisively a few months later in the same year to force three Chinese companies to sell their stakes in Canadian mining companies . . . Oh wait, just kidding.

In all seriousness, the country, and especially Quebec, can play a role in the supply chain so long as projects can be approved in a timely manner, which really is the underlying theme of this video. Having these minerals also incentivizes battery and auto manufacturing companies to invest in factories, helped massively by subsidies, of course. $13 billion over 10 years is what it took for Volkswagen to commit to a battery plant in southern Ontario. Likewise, $15 billion in subsidies was committed for a Stellantis-LG battery plant in Windsor, along with other projects like this. That’s a lot of money, with these two subsidy awards not expected to break even for 20 years, according to the Parliamentary Budget Officer. And that’s if legacy auto companies like Stellantis and Volkswagen are still relevant by that time.

Those are the kinds of energy policy decisions made in Canada in recent times, and why we haven’t leveraged our natural resources into superpower status.

Mark Carney’s Climate Obsession Worse than Trudeau’s

The future of Canada’s badly governed energy sector is further threatened by replacing Trudeau with Carney. Terry Newman explains in his National Post article Mark Carney’s climate obsessions will put Trudeau to shame.  Excerpts in italics with my bolds and added images.

Don’t trust his pledge to turn Canada into an energy superpower

For all of Carney’s supposed superior knowledge of the world and markets, the art of provincial negotiations and incentives for private investment in natural resources appears to have already escaped his grasp. There’s evidence to suggest this is because, at heart, Carney is likely to be a fully fledged ESG prime minister (ESG being short for the environmental, social, and governance principles being imposed on business). Unfortunately, everything Carney’s said and done up until this point suggests not only that he’d fail to unite Canadian provinces to create this energy super-economy, but that he’s not actually interested in doing so in the first place. The Liberal party may have a new face, but Carney’s insistence on keeping an emissions cap and industrial carbon tax in place — both products of Justin Trudeau’s Liberal government — doesn’t inspire much confidence in his energy superpower plan.

Since the Liberals came to power in 2015, they implemented the Impact Assessment Act, which slowed approvals, the federal industrial carbon pricing system (2018) and the oil and gas emissions cap (slated for 2026) — all with the goal of reducing greenhouse gas emissions from the oil and gas sector to net zero by 2050.

Since 2015, many projects have been stalled or cancelled, including the Northern Gateway Pipeline (cancelled by government in 2016, citing a federal ban on tanker traffic and Indigenous opposition); the Energy East Pipeline (cancelled by the company in 2017, citing regulatory hurdles and low oil prices); Pacific NorthWest LNG (cancelled in 2017 due to market conditions and regulatory delays); the Mackenzie Valley Pipeline (cancelled in 2017 due to low gas prices and regulatory uncertainty); Énergie Saguenay LNG (cancelled in 2021, rejected by the Quebec government over emissions concerns, not challenged by the federal government); Bay du Nord Offshore Oil (shelved in 2022, citing high costs and regulatory uncertainty); Teck Frontier Mine (cancelled in 2020, amid climate policy debates); and the Keystone XL Pipeline (cancelled in 2021, due to failure to secure a U.S. permit and Canadian regulatory costs).

The only thing that’s changed about the Liberal party is the addition of Carney, and his record suggests that he will be driven by climate policy, at least as much as the Liberals have been, and potentially much more so. He was, not so long ago, the United Nations’ special envoy on climate action and finance and he founded and co-chaired the Glasgow Financial Alliance for Net Zero (GFANZ), resigning on Jan. 15, the day before he threw his hat into the Liberal leadership race.

These roadblocks long predate Carney’s ascension, and he has yet to explain how the Liberal government suddenly has either the ability or desire to address them.

Where’s the evidence Carney will be less stringent on energy projects and, therefore, better for the Canadian economy than his predecessor? If anything, especially given his longstanding ESG obsessions, all evidence appears to point to the contrary — that Mark Carney could be even more dedicated to strangling Canada’s resource economy than Trudeau.

Can You Trust an AI/ML Model to Forecast?

The latest fashion in model building is adding AI/ML (Artificial Intelligence/Machine Learning) technology to numerical models for weather forecasting.  No doubt soon there will be climate models also claiming improved capability by doing this.  A meteorological example is called Aardvark Weather and a summary is provided at Tallbloke’s Talkshop Scientists say fully AI-driven weather prediction system delivers accurate forecasts faster with less computing power.

Like all inventions there are weaknesses along with the claimed benefits.  Here’s a short list of the things that can go wrong with these new gadgets. The concerns below are listed along with some others in a paper Understanding the Weaknesses of Machine Learning: Challenges and Limitations by Oyo Jude. Excerpts in italics with my bolds.

Introduction

Machine learning (ML) has become a cornerstone of modern technological advancements, driving innovations in areas such as healthcare, finance, and autonomous systems. Despite its transformative potential, ML is not without its flaws. Understanding these weaknesses is crucial for developing more robust and reliable systems. This article delves into the various challenges and limitations faced by ML technologies, providing insights into areas where improvements are needed.

Data Quality and Bias

Data Dependency

Machine learning models are highly dependent on the quality and quantity of data used for training. The performance of an ML model is only as good as the data it is trained on. Common issues related to data quality include:

Incomplete Data: Missing or incomplete data can lead to inaccurate models and predictions. Incomplete datasets may not represent the full spectrum of possible inputs, leading to biased or skewed outcomes.
Noisy Data: Noise in data refers to irrelevant or random information that can obscure the underlying patterns the model is supposed to learn. Noisy data can reduce the accuracy of ML models and complicate the learning process.
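To make the noise point concrete, here is a minimal pure-Python sketch (the synthetic dataset and threshold rule are invented for illustration, not taken from the article): even a model that captures the true pattern perfectly is capped by label noise, which can mislead evaluation and model selection.

```python
import random

random.seed(0)

def make_data(n, flip_fraction=0.0):
    """Synthetic binary data: the true label is 1 when x >= 0.5.
    flip_fraction of the labels are randomly flipped to simulate label noise."""
    data = []
    for _ in range(n):
        x = random.random()
        y = 1 if x >= 0.5 else 0
        if random.random() < flip_fraction:
            y = 1 - y          # corrupt this label
        data.append((x, y))
    return data

def accuracy(threshold, data):
    """Accuracy of the rule 'predict 1 when x >= threshold'."""
    return sum((x >= threshold) == (y == 1) for x, y in data) / len(data)

clean = make_data(2000)
noisy = make_data(2000, flip_fraction=0.3)

# The *ideal* rule scores perfectly against clean labels...
print(accuracy(0.5, clean))   # 1.0
# ...but only around 70% against labels with 30% noise: the noise caps
# measurable accuracy no matter how good the model actually is.
print(accuracy(0.5, noisy))
```

The point of the sketch is that no amount of model tuning can recover the accuracy lost to the flipped labels; cleaning the data is the only fix.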

Data Bias

Bias in data can significantly impact the fairness and accuracy of ML systems. Key forms of data bias include:

Selection Bias: Occurs when the data collected is not representative of the target population. For example, if a model is trained on data from a specific demographic group, it may not perform well for individuals outside that group.
Label Bias: Arises when the labels or categories used in supervised learning are subjective or inconsistent. Label bias can skew the model’s understanding and lead to erroneous predictions.

Model Interpretability and Transparency

Complexity of Models

Many advanced ML models, such as deep neural networks, are often described as “black boxes” due to their complexity. The lack of transparency in these models presents several challenges:

Understanding Model Decisions: It can be difficult to understand how a model arrived at a specific decision or prediction, making it challenging to diagnose errors or biases in the system.
Trust and Accountability: The inability to interpret model decisions can undermine trust in ML systems, particularly in high-stakes applications such as healthcare or criminal justice. Ensuring accountability and fairness becomes challenging when the decision-making process is opaque.
Explainability:  Efforts to improve model interpretability focus on developing techniques and tools to make complex models more understandable. Techniques such as feature importance analysis, surrogate models, and visualization tools aim to provide insights into model behavior and decisions. However, achieving a balance between model performance and interpretability remains an ongoing challenge.
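One model-agnostic technique mentioned above, feature importance analysis, can be sketched in a few lines of pure Python. The `black_box` function below is a hypothetical stand-in for an opaque model (not any real system from the article); permutation-style importance shuffles one input at a time and measures how much the model's outputs change, giving an importance estimate without opening the box.

```python
import random

random.seed(1)

def black_box(x1, x2):
    """Stand-in for an opaque model: its decision depends mostly on x1."""
    return 1 if 0.9 * x1 + 0.1 * x2 >= 0.5 else 0

# Probe inputs and the black box's answers on them.
X = [(random.random(), random.random()) for _ in range(3000)]
y = [black_box(a, b) for a, b in X]

def agreement(preds):
    return sum(p == t for p, t in zip(preds, y)) / len(y)

def permutation_importance(feature_index):
    """Shuffle one input column and measure how much the model's outputs
    change; a large drop in agreement means the feature matters."""
    shuffled = [x[feature_index] for x in X]
    random.shuffle(shuffled)
    preds = []
    for (a, b), s in zip(X, shuffled):
        preds.append(black_box(s, b) if feature_index == 0 else black_box(a, s))
    return 1.0 - agreement(preds)

imp_x1 = permutation_importance(0)
imp_x2 = permutation_importance(1)
print(imp_x1, imp_x2)   # x1's importance should be much larger than x2's
```

This does not explain *why* the model decides as it does, only *which* inputs it leans on, which is exactly the performance-versus-interpretability trade-off the passage describes.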

Generalization and Overfitting

Overfitting

Overfitting occurs when a model learns not only the underlying patterns in the training data but also the noise, resulting in poor performance on new, unseen data. This issue can be particularly problematic with complex models and limited data. Strategies to mitigate overfitting include:

Cross-Validation: Using techniques like k-fold cross-validation helps assess model performance on different subsets of the data, reducing the risk of overfitting.
Regularization: Regularization methods, such as L1 and L2 regularization, add penalties to the model’s complexity to prevent it from fitting noise in the training data.
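A rough pure-Python illustration of both ideas, using an invented noisy dataset rather than anything from the paper: a 1-nearest-neighbour model memorises the training labels, noise included, and looks perfect on the training set, while plain k-fold cross-validation reveals that a far more constrained rule generalises better.

```python
import random

random.seed(2)

NOISE = 0.3  # 30% of labels flipped

def make_data(n):
    data = []
    for _ in range(n):
        x = random.random()
        y = 1 if x >= 0.5 else 0
        if random.random() < NOISE:
            y = 1 - y
        data.append((x, y))
    return data

def knn_predict(train, x):
    """1-nearest-neighbour: memorises the training set, noise and all."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

def threshold_predict(train, x):
    """A heavily constrained model: the fixed rule x >= 0.5."""
    return 1 if x >= 0.5 else 0

def cv_accuracy(data, predict, k=5):
    """Plain k-fold cross-validation: hold out each fold in turn."""
    fold = len(data) // k
    correct = 0
    for i in range(k):
        test = data[i * fold:(i + 1) * fold]
        train = data[:i * fold] + data[(i + 1) * fold:]
        correct += sum(predict(train, x) == y for x, y in test)
    return correct / (fold * k)

data = make_data(1000)
train_acc_knn = sum(knn_predict(data, x) == y for x, y in data) / len(data)
print(train_acc_knn)                         # 1.0: it memorised the noise
print(cv_accuracy(data, knn_predict))        # much lower on held-out folds
print(cv_accuracy(data, threshold_predict))  # the simple rule does better
```

The gap between the 1-NN model's perfect training score and its cross-validated score is overfitting made visible, which is precisely why held-out evaluation is listed as a mitigation.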

Generalization

Generalization refers to a model’s ability to perform well on unseen data that was not part of the training set. Achieving good generalization is crucial for the practical application of ML models. Challenges related to generalization include:

Domain Shift: When the distribution of the data changes over time or across different domains, a model trained on one dataset may not generalize well to new data. Addressing domain shift requires continuous monitoring and updating of models.
Data Scarcity: In scenarios where limited data is available, models may struggle to generalize effectively. Techniques such as data augmentation and transfer learning can help address data scarcity issues.
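Domain shift can be sketched with a similar toy setup (the shifted cutoff is an invented example, not from the paper): a threshold fitted on one data distribution keeps working on fresh data from the same domain, but degrades once the input-label relationship drifts.

```python
import random

random.seed(3)

def make_data(n, cutoff):
    """The label is 1 when x >= cutoff; the cutoff defines the 'domain'."""
    return [(x, 1 if x >= cutoff else 0)
            for x in (random.random() for _ in range(n))]

def fit_threshold(train):
    """Grid-search the decision threshold on the training domain."""
    best_t, best_acc = 0.0, -1.0
    for i in range(101):
        t = i / 100
        acc = sum((x >= t) == (y == 1) for x, y in train) / len(train)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def accuracy(t, data):
    return sum((x >= t) == (y == 1) for x, y in data) / len(data)

t_hat = fit_threshold(make_data(1000, cutoff=0.5))          # train: domain A
acc_same = accuracy(t_hat, make_data(2000, cutoff=0.5))     # test: same domain
acc_shifted = accuracy(t_hat, make_data(2000, cutoff=0.65)) # test: shifted
print(acc_same, acc_shifted)  # near-perfect vs. noticeably degraded
```

The model has not changed at all; the world under it has, which is why the passage recommends continuous monitoring and retraining rather than a one-time fit.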

Comment:

Many similar issues have been raised against climate models, undermining claims that their outputs are valid projections of future climate states.  For example, the issue of detailed and reliable data persists.  It appears that even the AI/ML weather forecasting inventions depend on ERA5, which offers a record of only ~40 years for training purposes.  I’m suspending belief in these things for now: new, improved black boxes sound too much like the Sorcerer’s Apprentice.

Disney’s portrayal of the Sorcerer’s Apprentice in over his head.

Beware: Flawed Energy Assumptions Incite Delusional Scenarios

Mark P. Mills and Neil Atkinson blow the whistle on projections in the International Energy Agency’s (IEA) latest report, the World Energy Outlook.  Below are the announcement of the report findings, key exhibits and the executive summary, excerpts in italics with my bolds and added images. Link to full study at the end.

Overview

Industry players consider the International Energy Agency’s signature annual report, the World Energy Outlook, to contain highly credible analyses. However, a new critique from National Center for Energy Analytics experts finds the IEA’s latest scenarios on future oil demand to be problematic and potentially dangerously wrong.

“When it comes to policy or investment planning, there is a distinction with a critical difference when it comes to what constitutes a ‘forecast’ (what is likely to happen) versus a ‘scenario’ (a possibility based on assumptions). The challenge is not in determining whether the scenarios are completely factual per se, but instead whether they are factually complete,” wrote the authors in their report.

The most widely reported WEO scenario is that the world will see peak oil demand by the early 2030s. NCEA co-authors Mark P. Mills and Neil Atkinson argue that this conclusion is, prima facie, based not on all “high cases” but on unrealistic possibilities; at a minimum, the IEA should include business-as-usual (BAU) scenarios.

Mills and Atkinson pinpoint 23 flawed assumptions used in the WEO scenarios to predict future oil demand, including:

  • IEA assumes: Corporate transition policies are real and durable. NCEA counterclaim: Myriad corporations, having earlier proclaimed fealty to “energy-transition” goals, are either failing to meet such pledges or overtly rescinding them.

  • IEA assumes: Transition financing will continue to expand. NCEA counterclaim: Alternative energy projects have become more expensive and difficult to finance, and wealthy nations are increasingly reluctant to gift huge amounts of money to the faster-growing but poorer nations, many of which have governance issues.

  • IEA assumes: China’s actions will follow its pledges. NCEA counterclaim: The scale of China’s role in present and future energy and oil markets requires scenarios that model what China is doing—and will likely do—rather than what China claims or promises.

  • IEA assumes: The oil growth in emerging markets will be low. NCEA counterclaim: The fact of low demand in some poorer regions—e.g., Africa uses roughly one-tenth the per-capita level in OECD countries—points to the potential for very high, not low, growth in those markets.

  • IEA assumes: Governments will stay the course on EV mandates. NCEA counterclaim: Recent trends in many countries and U.S. states show policymakers weakening or reducing mandates and subsidies.

Flawed Assumptions Lead to Flawed Conclusions

Listed below is a summary of the flaws in 23 (but far from all) of the assumptions used in the WEO scenarios that are relevant to guessing future oil demand. Meaningful scenarios for planning for future uncertainties should include a range of realistic inputs, not just those that are aspirational.

Assumptions about baseline factors that affect oil forecasts

  1. Assumption: STEPS is a useful baseline.
    Flaw: The baseline scenario, rather than “business as usual,” assumes a future based on countries’ Stated Policies Scenario (STEPS), which not one country is implementing in full.
  2. Corporate transition policies are real and durable.
    Flaw:  Myriad corporations, having earlier proclaimed fealty to “energy-transition” goals, are either failing to meet such pledges or overtly rescinding them.
  3. Higher economic growth is unlikely.
    Flaw: Ignoring the possibility of higher economic growth, based on historical trends and the goals of all nations, leads to scenarios that underestimate future oil demand.
  4. Transition financing will continue to expand.
    Flaw: Alternative energy projects have become more expensive and difficult to finance, and wealthy nations are increasingly reluctant to gift huge amounts of money to the faster-growing but poorer nations, many of which have governance issues.
  5. Efficiency gains and structural changes will lower global demand for energy.
    Flaw: Long-run trends show that energy-efficiency gains make energy-centric products and services more affordable and thus do not reduce, but instead generally stimulate, rising demand.
  6. Solar and wind power are 100% efficient.
    Flaw: The WEO 2024 assertion that “most renewables are considered 100% efficient” contradicts fundamental physics and is, arguably, a silly PR-centric rhetorical flourish.
  7. China’s actions will follow its pledges.
    Flaw: The scale of China’s role in present and future energy and oil markets requires scenarios that model what China is doing—and will likely do, in fact—rather than what China claims or promises.

Assumptions regarding oil’s future

  1. The oil growth in emerging markets will be low.
    Flaw:  The fact of low demand in some poorer regions—e.g., Africa uses roughly one-tenth the per-capita level in OECD countries—points to the potential for very high, not low, growth in those markets.
  2. The EV market share will accelerate.
    Flaw:  Slowing market adoption and retrenchments in automakers’ EV plans or promises are evident, calling for scenarios that model realities that could persist.
  3. Governments will stay the course on EV mandates.
    Flaw:  Recent trends in many countries and U.S. states show policymakers weakening or reducing mandates and subsidies.
  4. China’s EV “success story” leads quickly to lower oil demand.
    Flaw:  Data point to the fact that in the real world, EV sales and gasoline consumption are both rising.

Assumptions about other transportation markets

  1. There will be significant electrification of heavy-duty trucks.
    Flaw:  There is no evidence of market adoption for any fuel option that leads to far higher capital costs and enormous degradation in performance.
  2. There will be significant electrification and fuel alternatives in aviation.
    Flaw:  There are no trends showing non-oil options for even a tiny share of the aviation market, in an industry that forecasts booming demand.
  3. There will be significant electrification and fuel alternatives for ships.
    Flaw:  The only modestly significant change in oil used for global shipping comes from the use of liquefied natural gas, another (and generally more expensive) hydrocarbon.
  4. There will be a rapid decline in oil used for Middle East power generation.
    Flaw:  Despite pledges and pronouncements, the year 2024 saw continued, and even higher, use of oil for electricity generation.
  5. The growth in petrochemicals and plastics will be slow.
    Flaw:  Slower growth is anchored in recycling enthusiasms that markets are not adopting and expectations of new recycling technologies that remain expensive or unproved.
  6. All scenarios lead to peak oil demand by ~2030.
    Flaw:  A WEO core conclusion that “combining all the high cases” leads to “global peaks for oil” by ~2030 is, prima facie, not based on all “high cases” but on unrealistic scenarios.

Assumptions regarding associated industries

  1. The supply of critical minerals will meet transition goals.
    Flaw:  Myriad studies have now documented the fact of a looming shortfall in current and expected production and of the challenges in changing that status quo.
  2. Prices of critical minerals will be low.
    Flaw:  It is fanciful in the annals of economic history to imagine that record-high demands won’t lead to far higher prices for the critical minerals needed to build EVs (as well as for wind and solar hardware).
  3. China won’t exercise minerals dominance as an economic or a geopolitical tool.
    Flaw:  China has already signaled over the past year that it is willing and able to implement export controls, or pricing power on critical minerals, where it holds significant global share.
  4. Oil and gas annual investments are adequate to avoid economic disruptions.
    Flaw:  Current levels of investment are not adequate to meet demands under business-as-usual scenarios, especially when combined with likely decline rates of extant oil fields.
  5. The future decline rate from existing oil fields will continue historical trends.
    Flaw:  The much faster decline rate in output from now-significant U.S. shale fields has altered the global average decline rate, pointing to the need for increasing investments to avoid a shortfall.
  6. OPEC will be a reliable cushion to manage oil-supply disruptions.
    Flaw:  History suggests that scenarios should include alternative possibilities to relying on OPEC to provide a cushion for meeting unexpected shortfalls in production or increases in demand.

Executive Summary: Flawed Assumptions Lead to Dangerous “Forecasts”

For decades, the International Energy Agency (IEA) was the world’s gold standard for energy information and credible analyses. Following the commitment of its member governments to the 2015 Paris Agreement climate accords, the agency radically changed its mission to become a promoter of an energy transition. In 2022, the IEA’s governing board reinforced its mission to “guide countries as they build net-zero emission energy systems to comply with internationally agreed climate goals.”

The IEA’s current preoccupation with promoting an energy transition has resulted in its signature annual report, the World Energy Outlook (WEO), offering policymakers a view of future possibilities that are, at best, distorted and, at worst, dangerously wrong.

The 2024 WEO’s central conclusion, its core “outlook,” has been widely reported as a credible forecast, i.e., something likely to happen: “[T]he continued progress of transitions means that, by the end of the decade, the global economy can continue to grow without using additional amounts of oil, natural gas or coal.”

The WEO itself states that it doesn’t forecast but has scenarios—explorations or models of possibilities, and cautions: “Our scenario analysis is designed to inform decision makers as they consider options…. [N]one of the scenarios should be viewed as a forecast.” Scenarios that usefully “inform” need to be based on realistic possibilities and assumptions. But there is one foundational assumption—one that the IEA has for decades included in its scenarios and that has been banished from the WEO: the possibility of business as usual (BAU).

Instead, the WEO’s baseline scenario now assumes that nations are undertaking their specific energy-transition plans that they promised in order to comply with the 2015 Paris Agreement, i.e., “stated policies scenario” (STEPS). Yet none of the signatories to that Agreement is fully meeting its promises, and most are a long way behind schedule. Believing something that is not true is not just problematic; it meets the definition of a delusion.

It is fanciful to forecast that, over the next half-dozen years, the growth in the world’s population and economy won’t continue a two-century-long trend and lead to increased use of the fossil fuels that today supply over 80% of all energy, only slightly below the share seen 50 years ago. The data show that the global energy system is operating essentially along BAU lines and not only far off the STEPS, but even further away from the more aggressive transition aspirations that the WEO also models.

In this analysis, we focus on highlighting 23 problematic, flawed assumptions that are relevant specifically to the WEO’s oil scenarios and the widely reported “forecast” that the world will see peak oil demand by the early 2030s (see box on pp. 4-5, Flawed Assumptions Lead to Flawed Conclusions). While other scenarios about other energy sources are critical as well, oil remains a geopolitical touchstone and the single biggest source of global energy—10-fold greater than wind and solar combined. At the very least, this analysis points to the need for real-world scenarios in general and, in the case of oil, the much higher probability that demand continues to grow in the foreseeable future and, possibly, quite significantly (below, see Global Oil Demand: Future Scenarios).

Debating the intricacies in flawed assumptions about energy scenarios is no mere theoretical exercise. The IEA’s legacy reputation continues to influence not only trillions of dollars in investment decisions but also government policies with far-reaching geopolitical consequences.

Energy Delusions: Peak Oil Forecasts

 

Greenpeace Punished for Pipeline Vandalism, Look Out Dark Money Agitators

In his Clash Daily report, Wes Walker connects the dots concerning domestic terrorism after the North Dakota jury verdict Why ENORMOUS Judgment Against Greenpeace Should Have Dem Dark Money In A Cold Sweat.  Excerpts in italics with my bolds and added images.

Outsourcing your malicious behavior is no longer
a get-out-of-consequences-free card

This should be especially bad news for any of the dark-money groups that have quietly been ramping up violence against politically expedient targets — say, Tesla, for example.

What could a North Dakota jury judgment handed down against Greenpeace over a pipeline have to do with dark money politics-for-hire across the country? Quite a lot, actually.

At issue were the pipeline protests in North Dakota like the one where environmental activists cared so DAMNED much about the land that it took a state of emergency and the Army Corps of Engineers to avert an environmental catastrophe:

“Warm temperatures have accelerated snowmelt in the area of the Oceti Sakowin protest camp … Due to these conditions, the governor’s emergency order addresses safety concerns to human life as anyone in the floodplain is at risk for possible injury or death,” said the statement.

However, “the order also addresses the need to protect the Missouri River from the waste that will flow into the Cannonball River and Lake Oahe if the camp is not cleared and the cleanup expedited,” the statement read.

…Just how much waste and trash did the environmentally conscious DAPL protesters leave? “Local and federal officials estimate there’s enough trash and debris in the camp to fill about 2,500 pickup trucks,” reported AP.

Not surprisingly, months-long protests are the chosen tactic because they can cause both property damage and financial harm, depending on the group, the tactics, and their intent.

The owner and operator of the pipeline, which lost an enormous contract as a result of the protesters' actions, took them to court. The company suffered serious financial harm, and those who caused it should bear the responsibility for making it whole. Modern notions of protest notwithstanding, that's how the court system was designed.

When Energy Transfer took Greenpeace and the Red Warrior Camp
(which the plaintiff claimed was its proxy)
to court on exactly this principle, the jury agreed.

After two days of deliberation, the New York Times reported, the jury returned the verdict. Energy Transfer, the owner and operator of the pipeline, filed the lawsuit in North Dakota state court against Greenpeace and Red Warrior Camp, which Energy Transfer claimed was a front for Greenpeace, and three individuals.

The lawsuit alleges that Greenpeace had engaged in a misinformation campaign with mass emails falsely claiming that the Dakota Access Pipeline would cross the sovereign land of the Standing Rock Sioux Tribe. In court filings, Energy Transfer claimed protesters engaged in a campaign of “militant direct action,” including trespassing on the company’s property, vandalizing construction equipment, and assaulting employees and contractors. —JustTheNews

This comes at a very bad time for violent left-wing activists. For years, the establishment left has been somewhere between indifferent to and happy about violence in the streets, so long as that violence aligns with causes on the political left.

You never hear the kind of breathless language the establishment left uses for, say, the Proud Boys applied to Antifa, BLM, Jane's Revenge (violent abortion activists), Palestinian protesters, trans extremists, or (now) the anti-Tesla crowds. These groups have embraced forms of violence ranging from rioting in the streets, storming a building and threatening a young woman inside it, and holding universities hostage, to vandalizing and firebombing Christian pro-life institutions, threatening churches, and most recently attacking anyone or anything with a Tesla connection.

The one thing so many of these movements (including the current organized attacks against Tesla) have in common is copious amounts of financial backing. Efforts like what we have seen from DOGE, not to mention an FBI interested in prosecuting such crimes instead of helping raise bail money, will be a game-changer on the investigation side of this problem.

AG Bondi and those working with her have made it clear that these firebombings (and the SWAT-ings) will be prosecuted under statutes treating the use of incendiary devices as a federal crime punishable by up to 20 years in prison.

If the logic of this Greenpeace case is extended to the culpability of the dark-money orgs that have been using third-party agitator groups as arm's-length shock troops for hire, outfits that give them plausible deniability, then this North Dakota ruling may set a precedent that says otherwise. It is one that other groups harmed by political activism over the last several years might use to 'follow the money' in seeking redress of their harms.

Elon seems to think the breadcrumbs for a lot of the dark money issues will take us back to familiar names like ActBlue or Arabella Advisors. If the early clues at DOGE, and the mayhem unfolding at ActBlue, are any indicator, he could be on to something there.

It would take some imaginative thinking to come up with better deterrents for a purely mercenary, cause-of-the-day agitator group than the twin prongs of drying up the money supply and dropping the perpetrators in a hole where society can completely forget about them for a decade or two.

And if the feds draw the same inference with criminal culpability
that the jury in North Dakota just did?

Those media establishment types who were publicly giddy about Biden's use of RICO statutes to take down Trump will soon be choking on their words and looking to bury records of their public statements cheering the prosecutions of Trump's team.

Low Energy-IQ Politicians, Be Gone!

Power Density Physics Trump Energy Politics

A plethora of insane energy policy proposals are touted by clueless politicians, including the recent Democrat candidate for US President.  So all talking heads need reminding of some basics of immutable energy physics.  This post is in service of restoring understanding of fundamentals that cannot be waved away.

The Key to Energy IQ

This brief video provides a key concept for thinking rationally about calls to change society's energy platform.  Below is a transcript from the closed captions along with some of the video images and others added.

We know what the future of American energy will look like. Solar panels, drawing limitless energy from the sun. Wind turbines harnessing the bounty of nature to power our homes and businesses. A nation effortlessly meeting all of its energy needs with minimal impact on the environment. We have the motivation, we have the technology. There's only one problem: the physics.

The history of America is, in many ways, the history of energy. The steam power that revolutionized travel and the shipping of goods. The coal that fueled the railroads and the industrial revolution. The petroleum that helped birth the age of the automobile. And now, if we only have the will, a new era of renewable energy. Except … it's a little more complicated than that. It's not really a matter of will, at least not primarily. There are powerful scientific and economic constraints on where we get our power from.

An energy source has to be reliable; you have to know that the lights will go on when you flip the switch. An energy source needs to be affordable, because when energy is expensive, everything else gets more expensive too. And if you want something to be society's dominant energy source, it needs to be scalable, able to provide enough power for a whole nation. Those are all incredibly important considerations, which is one of the reasons it's so weird that one of the most important concepts we have for judging them is a thing that most people have never heard of. Ladies and gentlemen, welcome to the exciting world of … power density. Look, no one said scientists were gonna be great at branding.

Put simply, power density is just how much stuff it takes to get your energy: how much land or other physical resources. We measure it by how many watts you can get per square meter, or liter, or kilogram, which, if you're like us, probably means nothing to you. So let's put this in tangible terms.

Just about the worst energy source America has by the standards of power density is biofuels, things like corn-based ethanol. Biofuels provide less than 3% of America's energy needs, and yet, because of the amount of corn that has to be grown to produce them, they require more land than every other energy source in the country combined. Lots of resources going in, not much energy coming out, which means they're never going to be a serious fuel source.

Now, that's an extreme example, but once you start to see the world in these terms, you start to realize why our choice of energy sources isn't arbitrary. Coal, for example, is still America's second largest source of electricity, despite the fact that it's the dirtiest and most carbon-intensive way to produce it. Why do we still use so much of it? Because it's significantly more affordable, in part because it's way less resource-intensive. An energy source like offshore wind, for example, is so dependent on materials like copper and zinc that it would require six times as many mineral resources to produce the same amount of power as coal. And by the way, getting all those minerals out of the ground itself requires lots and lots of energy.

Now, the good news is that America has actually been cutting way down on its use of coal in recent years, thanks largely to technological breakthroughs that brought us cheap natural gas as a replacement. And because natural gas emits way less carbon than coal, that reduced our carbon emissions from electricity generation by more than 30%. In fact, the government reports that switching over to natural gas did more than twice as much to cut carbon emissions as renewables did in recent years. Why did natural gas progress so much faster than renewables? It wasn't an accident.

Energy is a little like money: you have to spend it to make it. To get usable natural gas, for example, you've first got to drill a well, process and transport the gas, build a power plant, and generate the electricity. But the question is how much energy you are getting back for your investment. With natural gas, you get about 30 times as much power out of the system as you put into creating it.  By contrast, with something like solar power, you only get about 3 1/2 times as much power back.

Replacing the now-closed Indian Point nuclear power plant would require covering all of Albany County, NY with windmills.

Hard to fuel an entire country that way. And everywhere you look, you see similarly eye-popping numbers. To replace the energy produced by just one oil well in the Permian Basin of Texas (and there are thousands of those) you'd need to build 10 windmills, each about 330 feet high. To meet just 10% of the country's electricity needs, you'd have to build a wind farm the size of the state of New Hampshire. To get the same amount of power produced by one typical nuclear reactor, you'd need over three million solar panels.

None of which means, by the way, that we shouldn't be using renewables as a part of our energy future. But it does mean that the dream of using only renewables is going to remain a dream, at least given the constraints of current technology. We simply don't know how to do it while still providing the amount of energy that everyday life requires. No energy source is ever going to painlessly solve all our problems. It's always a compromise, which is why it's so important for us to focus on the best outcomes that are achievable, because otherwise, New Hampshire's gonna look like this.
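The New Hampshire comparison above can be sanity-checked with simple power-density arithmetic. The demand, power-density, and state-area figures in this sketch are my own round illustrative numbers, not values from the video.

```python
# Sanity check of the "wind farm the size of New Hampshire" claim,
# using round illustrative numbers (assumptions, not from the video):
#   average US electric demand ~450 GW; onshore wind ~2 W per m^2.

US_AVG_DEMAND_W = 450e9        # ~450 GW average US electricity demand
WIND_DENSITY_W_PER_M2 = 2.0    # typical farm-scale wind power density
NH_AREA_KM2 = 24_200           # approximate area of New Hampshire

target_w = 0.10 * US_AVG_DEMAND_W           # 10% of average demand
area_m2 = target_w / WIND_DENSITY_W_PER_M2  # land needed at that density
area_km2 = area_m2 / 1e6

print(f"area needed: {area_km2:,.0f} km^2")  # roughly 22,500 km^2
print(f"NH area:     {NH_AREA_KM2:,} km^2")
```

With these assumed inputs the required wind-farm footprint comes out within a few percent of New Hampshire's area, so the video's order-of-magnitude claim holds up.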
Addendum from Michael J. Kelly
Energy return on investment (EROI). The debate over decarbonization has focused on technical feasibility and economics. There is one emerging measure that comes back closely to the engineering and thermodynamics of energy production. The energy return on (energy) investment is a measure of the useful energy produced by a particular power plant divided by the energy needed to build, operate, maintain, and decommission the plant.

This is a concept that owes its origin to animal ecology: a cheetah must get more energy from consuming its prey than it expends catching it, otherwise it will die. If the animal is to breed and nurture the next generation, then the ratio of energy obtained to energy expended has to be higher still, depending on the details of energy expenditure on these other activities.

Weißbach et al. have analysed the EROI for a number of forms of energy production, and their principal conclusion is that nuclear, hydro-, and gas- and coal-fired power stations have an EROI that is much greater than wind, solar photovoltaic (PV), concentrated solar power in a desert, or cultivated biomass: see Fig. 2.

In human terms, with an EROI of 1, we can mine fuel and look at it; we have no energy left over. To get a society that can feed itself and provide a basic educational system, we need the EROI of our base-load fuel to be in excess of 5, and for a society with international travel and high culture we need an EROI greater than 10. The new renewable energies do not reach this last level when the extra energy costs of overcoming intermittency are added in. In energy terms, the current generation of renewable energy technologies alone will not enable a civilized modern society to continue!
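The in/out ratios quoted in the transcript (about 30:1 for natural gas, 3.5:1 for solar) translate directly into net energy delivered to society. A minimal sketch, where the helper function is my own illustration rather than anything from the cited sources:

```python
# Back-of-envelope net-energy comparison using the EROI figures
# quoted above (illustrative only; published values vary by study).

def net_energy(gross_output, eroi):
    """Energy left for society after subtracting the energy
    invested to produce it: gross minus gross/EROI."""
    invested = gross_output / eroi
    return gross_output - invested

# For every 100 units of gross energy produced:
gas = net_energy(100, 30)     # natural gas, EROI ~30
solar = net_energy(100, 3.5)  # solar PV, EROI ~3.5

print(f"gas delivers   {gas:.1f} units net")    # ~96.7
print(f"solar delivers {solar:.1f} units net")  # ~71.4
```

The gap widens further once intermittency buffering is charged against the renewable source, which is Weißbach's point about the EROI thresholds of 5 and 10.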
On Energy Transitions
Postscript

February 2025 Oceans Keep Cool

The best context for understanding decadal temperature changes comes from the world’s sea surface temperatures (SST), for several reasons:

  • The ocean covers 71% of the globe and drives average temperatures;
  • SSTs have a constant water content, (unlike air temperatures), so give a better reading of heat content variations;
  • A major El Nino was the dominant climate feature in recent years.

HadSST is generally regarded as the best of the global SST data sets, and so the temperature story here comes from that source. Previously I used HadSST3 for these reports, but the Hadley Centre has made HadSST4 the priority, and v.3 will no longer be updated.  HadSST4 is the same as v.3, except that the older data from ship water intake was re-estimated to generally lower temperatures than shown in v.3.  The effect is that v.4 has lower average anomalies for the baseline period 1961-1990, thereby showing higher current anomalies than v.3. This analysis concerns more recent time periods and depends on very similar differentials to those from v.3, despite higher absolute anomaly values in v.4.  More on what distinguishes HadSST4 from other SST products at the end. The user guide for HadSST4 is here.

Note:  When doing monthly updates of HadSST4, it's typical that values for the previous month or two will appear with slight adjustments.  This time, however, there were scores of changed values scattered throughout the set, including all values since 1979.  Strangely, the new values were in text format, so I needed to convert them to numeric values in the spreadsheets.  Comparing the new and old datasets showed that the changes were mostly in the third decimal, and mostly negative (i.e., the adjusted value is lower than the previous one).  Overall, the global average anomaly since 1980 was lower by 0.01C.  The charts and analysis below are produced from the current data.
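The revision check described above (comparing an older download of the dataset against the new one and summarizing the differences) can be scripted in a few lines. The monthly values here are made-up stand-ins, not real HadSST4 numbers.

```python
# Sketch of the revision check described above: compare an older
# download of the anomaly series against the latest one and
# summarize the changes. Values are illustrative stand-ins.

old = {"2024/11": 0.712, "2024/12": 0.655, "2025/01": 0.603}
new = {"2024/11": 0.709, "2024/12": 0.654, "2025/01": 0.601}

# per-month change, rounded to the third decimal
diffs = {m: round(new[m] - old[m], 3) for m in old}
mean_shift = sum(diffs.values()) / len(diffs)

for month, d in diffs.items():
    print(month, f"{d:+.3f}")        # mostly small negative shifts
print(f"mean shift: {mean_shift:+.4f} C")
```

Running the same comparison over the full 1980-present record is what produced the 0.01C average shift reported above.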

The Current Context

The chart below shows SST monthly anomalies as reported in HadSST4 starting in 2015 through February 2025. A global cooling pattern is seen clearly in the Tropics after its peak in 2016, joined by NH and SH cycling downward since 2016, followed by rising temperatures in 2023 and 2024.

Note that in 2015-2016 the Tropics and SH peaked in between two summer NH spikes.  That pattern repeated in 2019-2020 with a lesser Tropics peak and SH bump, but with higher NH spikes. By the end of 2020, cooler SSTs in all regions took the Global anomaly well below the mean for this period.  A small warming was driven by NH summer peaks in 2021-22, but was offset by cooling in the SH and Tropics. By January 2023 the global anomaly was again below the mean.

Then in 2023-24 came an event resembling 2015-16, with a Tropical spike and two NH spikes alongside, all higher than 2015-16. There was also a coinciding rise in the SH, and the Global anomaly was pulled up to 1.1°C last year, ~0.3°C higher than the 2015 peak.  Then NH started down in autumn 2023, followed by the Tropics and SH descending through 2024 to the present. After 10 months of cooling in the SH and Tropics, the Global anomaly came back down, led by NH cooling over the last 4 months from its peak in August. It is now about 0.1°C higher than the average for this period. Note that the Tropical anomaly has cooled from 1.28°C in 2024/01 to 0.72°C as of 2025/02.

Comment:

The climatists have seized on this unusual warming as proof their Zero Carbon agenda is needed, without addressing how impossible it would be for CO2 warming the air to raise ocean temperatures.  It is the ocean that warms the air, not the other way around.  Recently Steven Koonin had this to say about the phenomenon confirmed in the graph above:

El Nino is a phenomenon in the climate system that happens once every four or five years.  Heat builds up in the equatorial Pacific to the west of Indonesia and so on.  Then when enough of it builds up, it surges across the Pacific and changes the currents and the winds.  It was discovered and named in the 19th century as it surges toward South America.  It is well understood at this point that the phenomenon has nothing to do with CO2.

Now people talk about changes in that phenomenon as a result of CO2, but it's there in the climate system already, and when it happens it influences weather all over the world.   We feel it when it gets rainier in Southern California, for example.  For the last three years we have been in the opposite of an El Nino, a La Nina, which is part of the reason people think the West Coast has been in drought.

It has now shifted in the last months to an El Nino condition that warms the globe and is thought to contribute to this spike we have seen. But there are other contributions as well.  One of the most surprising is that back in January of 2022 an enormous underwater volcano went off in Tonga, and it put a lot of water vapor into the upper atmosphere. It increased upper-atmosphere water vapor by about 10 percent, and that's a warming effect; it may be that this is contributing to why the spike is so high.

A longer view of SSTs

The graph below  is noisy, but the density is needed to see the seasonal patterns in the oceanic fluctuations.  Previous posts focused on the rise and fall of the last El Nino starting in 2015.  This post adds a longer view, encompassing the significant 1998 El Nino and since.  The color schemes are retained for Global, Tropics, NH and SH anomalies.  Despite the longer time frame, I have kept the monthly data (rather than yearly averages) because of interesting shifts between January and July.


1995 is a reasonable (ENSO neutral) starting point prior to the first El Nino.

The sharp Tropical rise peaking in 1998 is dominant in the record, starting Jan. ’97 to pull up SSTs uniformly before returning to the same level Jan. ’99. There were strong cool periods before and after the 1998 El Nino event. Then SSTs in all regions returned to the mean in 2001-2.

SSTs fluctuate around the mean until 2007, when another, smaller ENSO event occurs: cooling in 2007-8, a lower peak warming in 2009-10, followed by cooling in 2011-12.  Again SSTs return to average in 2013-14.

Now a different pattern appears.  The Tropics cooled sharply to Jan. '11, then rose steadily for four years to Jan. '15, at which point the most recent major El Nino took off.  But this time, in contrast to '97-'99, the Northern Hemisphere produced peaks every summer, pulling up the Global average.  In fact, these NH peaks appear every July starting in 2003, growing stronger to produce three massive highs in 2014, '15 and '16.  NH July 2017 was only slightly lower, and a fifth NH peak came still lower in Sept. 2018.

The highest summer NH peaks came in 2019 and 2020, only this time the Tropics and SH were offsetting rather than adding to the warming. (Note: these are high anomalies on top of the highest absolute temps in the NH.)  Since 2014 the SH has played a moderating role, offsetting the NH warming pulses. After September 2020 temps dropped off until February 2021.  In 2021-22 there were again summer NH spikes, but in 2022 these were moderated first by cooling Tropics and SH SSTs, then from October to January 2023 by deeper cooling in the NH and Tropics.

Then in 2023 the Tropics flipped from below to well above average, while the NH produced a summer peak extending into September, higher than any previous year.  Despite El Nino driving the Tropics' January 2024 anomaly higher than the 1998 and 2016 peaks, the following months cooled in all regions, and the Tropics continued cooling in April, May and June along with a dropping SH.  After July and August NH warming again pulled the global anomaly higher, but from September through January 2025 cooling resumed in all regions, with a slight upward bump in February 2025.

What to make of all this? The patterns suggest that in addition to El Ninos in the Pacific driving the Tropic SSTs, something else is going on in the NH.  The obvious culprit is the North Atlantic, since I have seen this sort of pulsing before.  After reading some papers by David Dilley, I confirmed his observation of Atlantic pulses into the Arctic every 8 to 10 years.

Contemporary AMO Observations

Through January 2023 I depended on the Kaplan AMO Index (not smoothed, not detrended) for N. Atlantic observations. But it is no longer being updated, and NOAA says they don't know its future.  So I find that the ERSSTv5 AMO dataset has current data.  It differs from Kaplan, which reported average absolute temps measured in the N. Atlantic.  “ERSST5 AMO follows the Trenberth and Shea (2006) proposal to use the NA region EQ-60°N, 0°-80°W and subtract the global rise of SST 60°S-60°N to obtain a measure of the internal variability, arguing that the effect of external forcing on the North Atlantic should be similar to the effect on the other oceans.”  So the values represent SST anomaly differences between the N. Atlantic and the Global ocean.

The chart above confirms what Kaplan also showed.  As August is the hottest month for the N. Atlantic, its variability, high and low, drives the annual results for this basin.  Note also the peaks in 2010, lows after 2014, and a rise in 2021. Then in 2023 the peak was holding at 1.4C before declining.  An annual chart below is informative:

Note the difference between blue/green years, beige/brown years, and purple/red years.  2010, 2021 and 2022 all peaked strongly in August or September.  1998 and 2007 were mildly warm.  2016 and 2018 were matching or cooler than the global average.  2023 started out slightly warm, then rose steadily to an extraordinary peak in July.  August to October were only slightly lower, but by December it had cooled by ~0.4C.

Then in 2024 the AMO anomaly started higher than in any previous year, then leveled off for two months, declining slightly into April.  Remarkably, May showed an upward leap, putting it on a higher track than 2023, and it rose slightly higher in June.  In July, August and September 2024 the anomaly declined, and despite a small rise in October, it ended close to where it began.  Now 2025 is starting much lower than the previous year.

The pattern suggests the ocean may be demonstrating a stairstep pattern like the one we have also seen in HadCRUT4.

The purple line is the average anomaly for 1980-1996 inclusive, value 0.17.  The orange line is the average for 1997-2012, value 0.38, which also happens to be the average over the whole period 1980-2024. The red line is the 2013-2024 average, value 0.67. As noted above, these rising stages are driven by the combined warming in the Tropics and NH, including both the Pacific and Atlantic basins.
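The three plateau values quoted above are simply means of the anomaly series over the stated windows. A minimal sketch, using a hypothetical stand-in series constructed to match those values:

```python
# Sketch of the stairstep averaging described above: mean anomaly
# over three sub-periods of a yearly series. The series below is a
# hypothetical stand-in, not the real HadSST4 yearly anomalies.

def period_mean(series, start, end):
    """Average of series[year] for start <= year <= end (inclusive)."""
    vals = [v for y, v in series.items() if start <= y <= end]
    return sum(vals) / len(vals)

# Hypothetical yearly anomalies (deg C) spanning the three stages
anoms = {y: 0.17 for y in range(1980, 1997)}
anoms.update({y: 0.38 for y in range(1997, 2013)})
anoms.update({y: 0.67 for y in range(2013, 2025)})

print(round(period_mean(anoms, 1980, 1996), 2))  # 0.17
print(round(period_mean(anoms, 1997, 2012), 2))  # 0.38
print(round(period_mean(anoms, 2013, 2024), 2))  # 0.67
```

On real data the step edges are noisy, of course; the point of the plateau lines is that the between-step means differ far more than the within-step variation.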

Curiosity:  Solar Coincidence?

The news about our current solar cycle 25 is that solar activity is hitting peak numbers now, sooner and higher than the peak expected 1-2 years in the future.  As Live Science put it:  Solar maximum could hit us harder and sooner than we thought. How dangerous will the sun's chaotic peak be?  Some charts from spaceweatherlive look familiar next to these sea surface temperature charts.

Summary

The oceans are driving the warming this century.  SSTs took a step up with the 1998 El Nino and have stayed there with help from the North Atlantic, and more recently the Pacific northern “Blob.”  The ocean surfaces are releasing a lot of energy, warming the air, but eventually will have a cooling effect.  The decline after 1937 was rapid by comparison, so one wonders: How long can the oceans keep this up? And is the sun adding forcing to this process?

Footnote: Why Rely on HadSST4

HadSST is distinguished from other SST products because HadCRU (Hadley Climatic Research Unit) does not engage in SST interpolation, i.e. infilling estimated anomalies into grid cells lacking sufficient sampling in a given month. From reading the documentation and from queries to Met Office, this is their procedure.

HadSST4 imports data from gridcells containing ocean, excluding land cells. From past records, they have calculated daily and monthly average readings for each grid cell for the period 1961 to 1990. Those temperatures form the baseline from which anomalies are calculated.

In a given month, each gridcell with sufficient sampling is averaged for the month and then the baseline value for that cell and that month is subtracted, resulting in the monthly anomaly for that cell. All cells with monthly anomalies are averaged to produce global, hemispheric and tropical anomalies for the month, based on the cells in those locations. For example, Tropics averages include ocean grid cells lying between latitudes 20N and 20S.

Gridcells lacking sufficient sampling that month are left out of the averaging, and the uncertainty from such missing data is estimated. IMO that is more reasonable than inventing data to infill. And it seems that the Global Drifter Array displayed in the top image is providing more uniform coverage of the oceans than in the past.
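The procedure in the preceding paragraphs (average each well-sampled cell for the month, subtract that cell's 1961-1990 baseline, then average the resulting cell anomalies while leaving under-sampled cells out rather than infilling) can be sketched as follows. The cell names, readings, and sampling threshold are illustrative assumptions, not the Met Office's actual values.

```python
# Sketch of the HadSST-style anomaly procedure described above.
# Cell IDs, readings, and the sampling threshold are illustrative.

MIN_OBS = 3  # assumed minimum readings for a cell to count this month

def monthly_anomaly(readings, baseline, min_obs=MIN_OBS):
    """readings: {cell: [obs...]}; baseline: {cell: 1961-90 climatology}.
    Returns (area-wide mean anomaly, per-cell anomalies)."""
    anomalies = {}
    for cell, obs in readings.items():
        if len(obs) < min_obs:
            continue  # under-sampled cells are left out, not infilled
        anomalies[cell] = sum(obs) / len(obs) - baseline[cell]
    # average over all cells that had sufficient sampling
    return sum(anomalies.values()) / len(anomalies), anomalies

readings = {
    "cell_A": [15.2, 15.4, 15.0, 15.3],
    "cell_B": [22.1, 22.3, 22.2],
    "cell_C": [9.8],                    # too few obs: excluded
}
baseline = {"cell_A": 15.0, "cell_B": 22.0, "cell_C": 10.0}

global_anom, cells = monthly_anomaly(readings, baseline)
print(sorted(cells))    # ['cell_A', 'cell_B']
print(round(global_anom, 3))
```

Restricting `readings` to cells between 20N and 20S would give the Tropics average in the same way, which is all the regional breakdowns above amount to.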


USS Pearl Harbor deploys Global Drifter Buoys in Pacific Ocean

 

McKitrick: New PM Carney Tried for Years to Defund Canada

Mark Carney, governor of the Bank of England (BOE), reacts during a news conference at the United Nations COP21 climate summit at Le Bourget in Paris, France, on Friday, Dec. 4, 2015. Photo by Chris Ratcliffe/Bloomberg

Ross McKitrick writes at National Post Carney to lead Canada after trying for years to defund it.  Excerpts in italics with my bolds and added images.

The soon-to-be prime minister’s plan for net-zero banking
would have devastated the country

Conservative leader Pierre Poilievre is very concerned about financial conflicts of interest that new Liberal leader (and our next prime minister) Mark Carney may be hiding. But I’m far more concerned about the one out in the open: Carney is now supposed to act for the good of the country after lobbying to defund and drive out of existence Canada’s oil and gas companies, steel companies, car companies and any other sector dependent on fossil fuels. He’s done this through the Glasgow Financial Alliance for Net Zero (GFANZ), which he founded in 2021.

Carney is a climate zealot. He may try to fool Canadians into thinking he wants new pipelines, liquefied natural gas (LNG) terminals and other hydrocarbon infrastructure, but he doesn’t. Far from it. He wants half the existing ones gone by 2030 and the rest soon after.

He has said so, repeatedly and emphatically. He believes that the world “must achieve about a 50 per cent reduction in emissions by 2030” and “rapidly scale climate solutions to provide cleaner, more affordable, and more reliable replacements for unabated fossil fuels.” (By “unabated” he means usage without full carbon capture, which in practice is virtually all cases.) And since societies don’t seem keen on doing this, Carney created GFANZ to pressure banks, insurance companies and investment firms to cut off financing for recalcitrant firms.

“This transition to net zero requires companies across the whole economy to change behaviors through application of innovative technologies and new ways of doing business,” he wrote in 2022 with his GFANZ co-chairs, using bureaucratic euphemisms to make his radical agenda somehow seem normal.

The GFANZ plan they articulated that year put companies into four categories. Those selling green technologies or engaged in work that displaces fossil fuels would be rewarded with financing from member institutions. Those still using fossil fuels, or holding investments in others that do, but committed to being “climate leaders” with a set path to net-zero, would also still be eligible for financing, as would those that do business with “high-emitting firms” but plan to reach net-zero targets on approved timelines. Companies that own or invest in high-emitting assets, however, would operate under a “managed phaseout” regime and could even be cut off from investment capital.

What are “high-emitting assets”? Carney’s group hasn’t released a complete list, but a June 2022 report listed some examples: coal mines, fossil-fuel power stations, oil fields, gas pipelines, steel mills, ships, cement plants and consumer gasoline-powered vehicles. GFANZ envisions a future in which the finance sector either severs all connections to such assets or puts them under a “managed phaseout” regime, which means exactly what it sounds like.

So when Carney jokingly suggested it won’t matter if his climate plan drives up costs for steel mills because people don’t buy steel, he could have added that there likely won’t be any steel mills before long anyway. If his work as prime minister echoes his work as GFANZ chair, we can expect steel mills to be phased out, along with cars, gas-fired power plants, pipelines, oil wells and so forth.

Mark Carney, former Co-Chair of GFANZ, accompanied by (from left) Ravi Menon, Loh Boon Chye, and Yuki Yasui, at the Singapore Exchange, for the GFANZ announcement on the formation of its Asia-Pacific (APAC) Network.

GFANZ boasts at length about its members strong-arming clients into embracing net-zero. For instance, it extols British insurance multinational Aviva for its climate engagement escalation program: “Aviva is prepared to send a message to all companies through voting actions when those companies do not have adequate climate plans or do not act quickly enough.”

To support these coercive goals, Carney’s lobbying helped secure a requirement in Canada for banks, life insurance companies, trust and loan companies and others to develop and file reports disclosing their “climate transition risk,” set out by the federal Office of the Superintendent of Financial Institutions (OSFI).

The rule, Guideline B-15 on Climate Risk Management, was initially published in 2023 and requires federally regulated financial institutions (other than foreign bank branches) to conduct extensive and costly research into their holdings to determine whether value may be at risk from future climate policies. The vagueness and potential liabilities created by this menacing set of expectations could push Canada’s largest investment firms to eventually decide it’s easier to divest altogether from fossil fuel and heavy industry sectors, furthering Carney’s ultimate goal.

Yet Carney will become prime minister just when Canadians face a trade crisis that requires the construction of new coastal energy infrastructure to ensure our fossil fuel commodities can be exported without going through the United States. He has said he would take emergency measures to support “energy projects,” but I assume he means windmills and solar panels. He has not (to my knowledge) said he supports pipelines, LNG terminals, fracking wells or new refineries. Unless he disowns everything he has said for years, we must assume he doesn’t.

Canadian journalists should insist he clear this up. Ask Carney if he supports the repeal of OSFI’s Climate Risk Management guideline. Show Carney his GFANZ report. Ask him, “Do you still endorse the contents of this document?” If he says yes, ask him how we can build new pipelines and LNG terminals, expand our oil and gas sector, run our electricity grid using Canadian natural gas, heat our homes and put gasoline in our cars if banks are to phase out these activities.

If he tries to claim he no longer endorses it,
ask him when he changed his mind,
and why we should believe him now.

The media must not allow Carney to be evasive or ambiguous on these matters. We don’t have time for a bait-and-switch prime minister. If Carney still believes the rhetoric he published through GFANZ, he should say so openly, so Canadians can assess whether he really is the right man to address our current crisis.

02/2025 Update–Temperature Changes, CO2 Follows

Previously I have demonstrated that changes in atmospheric CO2 levels follow changes in Global Mean Temperatures (GMT) as shown by satellite measurements from University of Alabama at Huntsville (UAH). That background post is reprinted later below.

My curiosity was piqued by the remarkable GMT spike starting in January 2023 and rising to a peak in April 2024, and then declining afterward.  I also became aware that UAH has recalibrated their dataset due to a satellite drift that can no longer be corrected. The values since 2020 have shifted slightly in version 6.1, as shown in my recent report Oceans Rapidly Cooling UAH January 2025.

In this post, I test the premise that temperature changes are predictive of changes in atmospheric CO2 concentrations.  The chart above shows the two monthly datasets: CO2 levels in blue reported at Mauna Loa, and Global temperature anomalies in purple reported by UAHv6.1, both through February 2025. Would such a sharp increase in temperature be reflected in rising CO2 levels, according to the successful mathematical forecasting model? Would CO2 levels decline as temperatures dropped following the peak?

The answer is yes: that temperature spike resulted
in a corresponding CO2 spike as expected.
And lower CO2 levels followed the temperature decline.

Above are UAH temperature anomalies compared to CO2 monthly changes year over year.

Changes in monthly CO2 synchronize with temperature fluctuations, which for UAH are anomalies now referenced to the 1991-2020 period. CO2 differentials are calculated for the present month by subtracting the value for the same month in the previous year (for example February 2025 minus February 2024).  Temp anomalies are calculated by comparing the present month with the baseline month. Note the recent CO2 upward spike and drop following the temperature spike and drop.
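The year-over-year differential described above can be sketched in a few lines. This is a minimal illustration with made-up numbers, not the actual Mauna Loa data:

```python
# Minimal sketch of the year-over-year differential described above:
# for each month, subtract the value for the same month a year earlier.
# The input numbers are hypothetical, not actual Mauna Loa data.
def yoy_differentials(monthly_co2):
    """monthly_co2: flat list of monthly values in time order.
    Returns this-month minus same-month-last-year differences;
    the first 12 months only supply the starting values."""
    return [monthly_co2[i] - monthly_co2[i - 12]
            for i in range(12, len(monthly_co2))]

vals = [400.0 + 0.2 * i for i in range(24)]  # hypothetical monthly levels
diffs = yoy_differentials(vals)              # each difference is 0.2 * 12 = 2.4
```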

The final proof that CO2 follows temperature due to stimulation of natural CO2 reservoirs is demonstrated by the ability to calculate CO2 levels since 1979 with a simple mathematical formula:

For each subsequent year, the CO2 level for each month was generated as:

CO2 (this month, this year) = a + b × Temp (this month, this year) + CO2 (this month, last year)

The values for a and b are constants applied to all monthly temps, and are chosen to scale the forecasted CO2 level for comparison with the observed value. Here is the result of those calculations.

In the chart, calculated CO2 levels correlate with observed CO2 levels at 0.9987 out of 1.0000. This mathematical generation of CO2 atmospheric levels is only possible if they are driven by temperature-dependent natural sources, and not by human emissions, which are small in comparison and rise steadily and monotonically. For a more detailed look at the recent fluxes, here are the results since 2015, an ENSO-neutral year.

For this recent period, the calculated CO2 values match the annual lows, while some generated CO2 values are slightly higher or lower than observed in other months of the year. Still, the correlation for this period is 0.9932.
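The recursive generation of CO2 levels from the formula can be sketched as follows; the constants and starting values here are illustrative assumptions, not the fitted values behind the chart:

```python
# Sketch of the recursive model: CO2[m, y] = a + b * Temp[m, y] + CO2[m, y-1].
# The constants a, b and all data below are illustrative assumptions.
def generate_co2(first_year_levels, yearly_anomalies, a, b):
    """first_year_levels: 12 observed monthly CO2 values for the start year.
    yearly_anomalies: one list of 12 temperature anomalies per later year.
    Returns one list of 12 monthly CO2 values per year, start year included."""
    series = [list(first_year_levels)]
    for anomalies in yearly_anomalies:
        prev = series[-1]
        series.append([a + b * t + c for t, c in zip(anomalies, prev)])
    return series

start = [336.0 + 0.1 * m for m in range(12)]  # hypothetical starting-year levels
anoms = [[0.1] * 12, [0.2] * 12]              # hypothetical anomalies, two years
levels = generate_co2(start, anoms, a=0.9, b=2.0)
```

Each year's levels are built only from that year's temperatures and the previous year's levels, which is what makes the whole series reproducible from the 1979 starting values plus the temperature record.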

Key Point

Changes in CO2 follow changes in global temperatures on all time scales, from last month’s observations to ice core datasets spanning millennia. Since CO2 is the lagging variable, it cannot logically be the cause of temperature, the leading variable. It is folly to imagine that by reducing human emissions of CO2, we can change global temperatures, which are obviously driven by other factors.

Background Post Temperature Changes Cause CO2 Changes, Not the Reverse

This post is about proving that CO2 changes in response to temperature changes, not the other way around, as is often claimed.  In order to do  that we need two datasets: one for measurements of changes in atmospheric CO2 concentrations over time and one for estimates of Global Mean Temperature changes over time.

Climate science is unsettling because past data are not fixed, but change later on.  I ran into this previously and now again in 2021 and 2022 when I set out to update an analysis done in 2014 by Jeremy Shiers (discussed in a previous post reprinted at the end).  Jeremy provided a spreadsheet in his essay Murray Salby Showed CO2 Follows Temperature Now You Can Too posted in January 2014. I downloaded his spreadsheet intending to bring the analysis up to the present to see if the results hold up.  The two sources of data were:

Temperature anomalies from RSS here:  http://www.remss.com/missions/amsu

CO2 monthly levels from NOAA (Mauna Loa): https://www.esrl.noaa.gov/gmd/ccgg/trends/data.html

Changes in CO2 (ΔCO2)

Uploading the CO2 dataset showed that many numbers had changed (why?).

The blue line shows annual observed differences in monthly values year over year, e.g. June 2020 minus June 2019, etc. The first 12 months (1979) provide the observed starting values from which differentials are calculated. The orange line shows that those CO2 values changed slightly in the 2020 dataset vs. the 2014 dataset, on average +0.035 ppm. But no pattern or trend was added, and deviations vary randomly between + and -. So last year I took the 2020 dataset to replace the older one for updating the analysis.

Now I find the NOAA dataset starting in 2021 has almost completely new values due to a method shift in February 2021, requiring a recalibration of all previous measurements.  The new picture of ΔCO2 is graphed below.

The method shift is reported at a NOAA Global Monitoring Laboratory webpage, Carbon Dioxide (CO2) WMO Scale, with a justification for the difference between X2007 results and the new results from X2019 now in force. The orange line shows that the shift has resulted in higher values, especially early on, and a slight generally increasing trend over time. However, these are small variations at the decimal level on values of 340 and above. Further, the graph shows that yearly differentials month by month are virtually the same as before. Thus I redid the analysis with the new values.

Global Temperature Anomalies (ΔTemp)

The other time series was the record of global temperature anomalies according to RSS. The current RSS dataset is not at all the same as the past.

Here we see some seriously unsettling science at work. The purple line is RSS in 2014, and the blue is RSS as of 2020. Some further increases appear in the gold 2022 RSS dataset. The red line shows alterations from the old to the new. There is a slight cooling of the data in the beginning years; then the three versions mostly match until 1997, when systematic warming enters the record. From 1997/5 to 2003/12 the average anomaly increases by 0.04C. From 2004/1 to 2012/8 the average increase is 0.15C. At the end, from 2012/9 to 2013/12, the average anomaly was higher by 0.21C. The 2022 version added slight warming over 2020 values.

RSS continues that accelerated warming to the present, but it cannot be trusted.  And who knows what the numbers will be a few years down the line?  As Dr. Ole Humlum said some years ago (regarding Gistemp): “It should however be noted, that a temperature record which keeps on changing the past hardly can qualify as being correct.”

Given the above manipulations, I went instead to the other satellite dataset UAH version 6. UAH has also made a shift by changing its baseline from 1981-2010 to 1991-2020.  This resulted in systematically reducing the anomaly values, but did not alter the pattern of variation over time.  For comparison, here are the two records with measurements through December 2023.

Comparing UAH temperature anomalies to NOAA CO2 changes.

Here are UAH temperature anomalies compared to CO2 monthly changes year over year.

Changes in monthly CO2 synchronize with temperature fluctuations, which for UAH are anomalies now referenced to the 1991-2020 period.  As stated above, CO2 differentials are calculated for the present month by subtracting the value for the same month in the previous year (for example June 2022 minus June 2021).   Temp anomalies are calculated by comparing the present month with the baseline month.

The final proof that CO2 follows temperature due to stimulation of natural CO2 reservoirs is demonstrated by the ability to calculate CO2 levels since 1979 with a simple mathematical formula:

For each subsequent year, the CO2 level for each month was generated as:

CO2 (this month, this year) = a + b × Temp (this month, this year) + CO2 (this month, last year)

Jeremy used Python to estimate a and b, but I used his spreadsheet to choose values that place the observed and calculated CO2 levels on top of each other for comparison.
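Since the recursion implies that the year-over-year CO2 difference for each month equals a + b × Temp, estimating a and b by least squares, as Jeremy did in Python, can be sketched like this. The data below are synthetic, used only to show that the fit recovers known constants:

```python
# Sketch of estimating a and b by ordinary least squares. The recursion
# implies dCO2[m, y] = CO2[m, y] - CO2[m, y-1] = a + b * Temp[m, y],
# so a and b come from a simple linear fit of dCO2 against Temp.
# All data below are synthetic; this is not the actual fit.
def fit_a_b(temps, co2_diffs):
    """Closed-form OLS for co2_diff = a + b * temp."""
    n = len(temps)
    mean_t = sum(temps) / n
    mean_d = sum(co2_diffs) / n
    cov = sum((t - mean_t) * (d - mean_d) for t, d in zip(temps, co2_diffs))
    var = sum((t - mean_t) ** 2 for t in temps)
    b = cov / var
    a = mean_d - b * mean_t
    return a, b

# Data generated with a = 1.5, b = 2.0 should recover those values.
temps = [0.0, 0.1, 0.2, 0.3, 0.4]
diffs = [1.5 + 2.0 * t for t in temps]
a_hat, b_hat = fit_a_b(temps, diffs)
```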

In the chart, calculated CO2 levels correlate with observed CO2 levels at 0.9986 out of 1.0000. This mathematical generation of CO2 atmospheric levels is only possible if they are driven by temperature-dependent natural sources, and not by human emissions, which are small in comparison and rise steadily and monotonically.

Comment: The UAH dataset reported a sharp warming spike starting mid-year, with causes speculated but not proven. In any case, that surprising peak has not yet driven CO2 higher, though it might if it persists despite the likely cooling already under way.

Previous Post:  What Causes Rising Atmospheric CO2?


This post is prompted by a recent exchange with those reasserting the “consensus” view attributing all additional atmospheric CO2 to humans burning fossil fuels.

The IPCC doctrine which has long been promoted goes as follows. We have a number over here for monthly fossil fuel CO2 emissions, and a number over there for monthly atmospheric CO2. We don’t have good numbers for the rest of it (oceans, soils, biosphere), though rough estimates are orders of magnitude higher, dwarfing human CO2. So we ignore nature and assume it is always a sink, explaining the difference between the two numbers we do have. Easy peasy, science settled.

What about the fact that nature continues to absorb about half of human emissions, even while FF CO2 increased by 60% over the last two decades? What about the fact that in 2020 FF CO2 declined significantly with no discernible impact on rising atmospheric CO2?

These and other issues are raised by Murray Salby and others who conclude that it is not that simple, and the science is not settled. And so these dissenters must be cancelled lest the narrative be weakened.

The non-IPCC paradigm is that atmospheric CO2 levels are a function of two very different fluxes. FF CO2 changes rapidly and increases steadily, while Natural CO2 changes slowly over time, and fluctuates up and down from temperature changes. The implications are that human CO2 is a simple addition, while natural CO2 comes from the integral of previous fluctuations. Jeremy Shiers has a series of posts at his blog clarifying this paradigm. See Increasing CO2 Raises Global Temperature Or Does Increasing Temperature Raise CO2. Excerpts in italics with my bolds.

The following graph, which shows the change in CO2 levels (rather than the levels directly), makes this much clearer.

Note the vertical scale refers to the first differential of the CO2 level, not the level itself. The graph depicts the rate of change in ppm per year.

There are big swings in the net amount of CO2 added each year. Taking the mean as 1.6 ppmv/year (at a guess), there are swings of around +/- 1.2 ppmv/year, nearly +/- 100%.

And, surprise surprise, the change in net emissions of CO2 is very strongly correlated with changes in global temperature.

This clearly indicates the net amount of CO2 emitted in any one year is directly linked to global mean temperature in that year.

For any given year the amount of CO2 in the atmosphere will be the sum of the net annual emissions of CO2 in all previous years.

For each year the net annual emission of CO2 is proportional to the annual global mean temperature.

This means the amount of CO2 in the atmosphere will be related to the sum of temperatures in previous years.

So CO2 levels are not directly related to the current temperature, but to the integral of temperature over previous years.

The following graph again shows observed levels of CO2 and global temperatures but also has calculated levels of CO2 based on sum of previous years temperatures (dotted blue line).
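The cumulative relationship described above (CO2 as a running sum of temperature-dependent net emissions) can be sketched as follows; the constants and temperatures are illustrative assumptions:

```python
# Sketch of CO2 as the integral (running sum) of temperature-driven net
# emissions, per the paragraphs above. Constants and data are illustrative.
def co2_from_temperature_history(start_level, annual_temps, a, b):
    """Each year's level adds a + b * that year's mean temperature anomaly."""
    levels = [start_level]
    for t in annual_temps:
        levels.append(levels[-1] + a + b * t)
    return levels

levels = co2_from_temperature_history(337.0, [0.0, 0.1, 0.2], a=1.0, b=2.0)
# The final level reflects the sum of all prior temperatures,
# not just the most recent one.
```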

Summary:

The massive fluxes from natural sources dominate the flow of CO2 through the atmosphere.  Human CO2 from burning fossil fuels is around 4% of the annual addition from all sources. Even if rising CO2 could cause rising temperatures (no evidence, only claims), reducing our emissions would have little impact.

Atmospheric CO2 Math

Ins: 4% human, 96% natural
Outs: 0% human, 98% natural.
Atmospheric storage difference: +2%
(so that: Ins = Outs + Atmospheric storage difference)

Balance = Atmospheric storage difference: 2%, of which,
Humans: 2% X 4% = 0.08%
Nature: 2% X 96 % = 1.92%

Ratio Natural : Human =1.92% : 0.08% = 24 : 1
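The arithmetic above can be checked in a few lines (shares expressed as fractions of total inflow):

```python
# Arithmetic check of the Atmospheric CO2 Math above.
ins_human, ins_natural = 0.04, 0.96   # Ins: 4% human, 96% natural
outs_total = 0.98                     # Outs: 98% of the inflow removed
storage = (ins_human + ins_natural) - outs_total  # +2% retained in atmosphere

human_part = storage * ins_human      # 2% x 4%  = 0.08%
natural_part = storage * ins_natural  # 2% x 96% = 1.92%
ratio = natural_part / human_part     # Natural : Human = 24 : 1
```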

For a possible explanation of natural warming and CO2 emissions, see Little Ice Age Warming Recovery May be Over.

Resources:

CO2 Fluxes, Sources and Sinks

Who to Blame for Rising CO2?

Fearless Physics from Dr. Salby

2025 The Poisonous Tree of Climate Change

Now that Trump’s EPA is determined to reconsider its past GHG Endangerment Finding, it’s important to understand how we got here. First, there was the EPA’s theoretical basis for the finding:

The 3 Lines of Evidence can all be challenged by scientific studies since the 2009 ruling. The temperature records have been adjusted over time, and the validity of the measurements is uncertain. The issues with climate models give many reasons to regard them as unfit for policy making. And the claim that rising CO2 caused rising Global Average Surface Temperature (GAST) is dubious, both on grounds that CO2 infrared activity declines at higher concentrations, and that temperature changes precede CO2 changes on all time scales, from last month’s observations to ice core proxies spanning millennia.

Thus all the arrows claiming causal relations are flawed. The rise of atmospheric CO2 is mostly nature’s response to warming, rather than the other way around. And the earth warming since the Little Ice Age (LIA) is a welcome recovery from the coldest period in the last 10,000 years. Claims of extreme weather and rising sea levels ignore that such events are ordinary in earth history. And the health warnings are contrived in attributing harms to barely noticeable warming.

Background on the Legal Precedents

This post was triggered by an event I noticed some years ago. Serial valve turner Ken Ward was granted a new trial by the Washington State Court of Appeals, and he was allowed to present a “necessity defense.” This astonishingly bad ruling is reported approvingly by Kelsey Skaggs at Pacific Standard in Why the Necessity Defense is Critical to the Climate Struggle. Excerpt below with my bolds.

A climate activist who was convicted after turning off an oil pipeline won the right in April to argue in a new trial that his actions were justified. The Washington State Court of Appeals ruled that Ken Ward will be permitted to explain to a jury that, while he did illegally stop the flow of tar sands oil from Canada into the United States, his action was necessary to slow catastrophic climate change.

The Skaggs article goes on to cloak energy vandalism with the history of civil disobedience against actual mistreatment and harm.  Nowhere is it recognized that the brouhaha over climate change concerns future imaginary harm.  How could lawyers and judges get this so wrong?  It can only happen when an erroneous legal precedent can be cited to spread a poison in the public square.  So I went searching for the tree producing all of this poisonous fruit. The full text of the April 8, 2019, ruling is here.

A paper at Stanford Law School (where else?) provides a good history of the necessity defense as related to climate change activism The Climate Necessity Defense: Proof and Judicial Error in Climate Protest Cases Excerpts in italics with my bolds.

My perusal of the text led me to the section where the merits are presented.

The typical climate necessity argument is straightforward. The ongoing effects of climate change are not only imminent, they are currently occurring; civil disobedience has been proven to contribute to the mitigation of these harms, and our political and legal systems have proven uniquely ill-equipped to deal with the climate crisis, thus creating the necessity of breaking the law to address it. As opposed to many classic political necessity defendants, such as anti-nuclear power protesters, climate activists can point to the existing (rather than speculative) nature of the targeted harm and can make a more compelling case that their protest activity (for example, blocking fossil fuel extraction) actually prevents some quantum of harm produced by global warming. pg.78

What?  On what evidence is such confidence based?  Later on (page 80), comes this:

Second, courts’ focus on the politics of climate change distracts from the scientific issues involved in climate necessity cases. There may well be political disagreement over the realities and effects of climate change, but there is little scientific disagreement, as the Supreme Court has noted.131

131 Massachusetts v. E.P.A., 549 U.S. 497, 499 (2007) (“The harms associated with climate change are serious and well recognized . . . [T]he relevant science and a strong consensus among qualified experts indicate that global warming threatens, inter alia, a precipitate rise in sea levels by the end of the century, severe and irreversible changes to natural ecosystems, a significant reduction in water storage in winter snowpack in mountainous regions with direct and important economic consequences, and an increase in the spread of disease and the ferocity of weather events.”).

The roots of this poisonous tree are found in citing the famous Massachusetts v. E.P.A. (2007) case, decided by a 5-4 opinion of Supreme Court justices (consensus rate: 56%). But let’s see in what context lies that reference, and whether it is a quotation from a source or an issue addressed by the court. The majority opinion was written by Justice Stevens, with dissenting opinions from Chief Justice Roberts and Justice Scalia. All these documents are available at supreme.justia.com: Massachusetts v. EPA, 549 U.S. 497 (2007)

From the Majority Opinion:

A well-documented rise in global temperatures has coincided with a significant increase in the concentration of carbon dioxide in the atmosphere. Respected scientists believe the two trends are related. For when carbon dioxide is released into the atmosphere, it acts like the ceiling of a greenhouse, trapping solar energy and retarding the escape of reflected heat. It is therefore a species—the most important species—of a “greenhouse gas.” Source: National Research Council:

National Research Council 2001 report titled Climate Change: An Analysis of Some Key Questions (NRC Report), which, drawing heavily on the 1995 IPCC report, concluded that “[g]reenhouse gases are accumulating in Earth’s atmosphere as a result of human activities, causing surface air temperatures and subsurface ocean temperatures to rise. Temperatures are, in fact, rising.” NRC Report 1.

Calling global warming “the most pressing environmental challenge of our time,”[Footnote 1] a group of States,[Footnote 2] local governments,[Footnote 3] and private organizations,[Footnote 4] alleged in a petition for certiorari that the Environmental Protection Agency (EPA) has abdicated its responsibility under the Clean Air Act to regulate the emissions of four greenhouse gases, including carbon dioxide.  Specifically, petitioners asked us to answer two questions concerning the meaning of §202(a)(1) of the Act: whether EPA has the statutory authority to regulate greenhouse gas emissions from new motor vehicles; and if so, whether its stated reasons for refusing to do so are consistent with the statute.

EPA reasoned that climate change had its own “political history”: Congress designed the original Clean Air Act to address local air pollutants rather than a substance that “is fairly consistent in its concentration throughout the world’s atmosphere,” 68 Fed. Reg. 52927 (emphasis added); declined in 1990 to enact proposed amendments to force EPA to set carbon dioxide emission standards for motor vehicles, ibid. (citing H. R. 5966, 101st Cong., 2d Sess. (1990)); and addressed global climate change in other legislation, 68 Fed. Reg. 52927. Because of this political history, and because imposing emission limitations on greenhouse gases would have even greater economic and political repercussions than regulating tobacco, EPA was persuaded that it lacked the power to do so. Id., at 52928. In essence, EPA concluded that climate change was so important that unless Congress spoke with exacting specificity, it could not have meant the agency to address it.

Having reached that conclusion, EPA believed it followed that greenhouse gases cannot be “air pollutants” within the meaning of the Act. See ibid. (“It follows from this conclusion, that [greenhouse gases], as such, are not air pollutants under the [Clean Air Act’s] regulatory provisions …”).

Even assuming that it had authority over greenhouse gases, EPA explained in detail why it would refuse to exercise that authority. The agency began by recognizing that the concentration of greenhouse gases has dramatically increased as a result of human activities, and acknowledged the attendant increase in global surface air temperatures. Id., at 52930. EPA nevertheless gave controlling importance to the NRC Report’s statement that a causal link between the two “ ‘cannot be unequivocally established.’ ” Ibid. (quoting NRC Report 17). Given that residual uncertainty, EPA concluded that regulating greenhouse gas emissions would be unwise. 68 Fed. Reg. 52930.

The harms associated with climate change are serious and well recognized. Indeed, the NRC Report itself—which EPA regards as an “objective and independent assessment of the relevant science,” 68 Fed. Reg. 52930—identifies a number of environmental changes that have already inflicted significant harms, including “the global retreat of mountain glaciers, reduction in snow-cover extent, the earlier spring melting of rivers and lakes, [and] the accelerated rate of rise of sea levels during the 20th century relative to the past few thousand years … .” NRC Report 16.

In sum—at least according to petitioners’ uncontested affidavits—the rise in sea levels associated with global warming has already harmed and will continue to harm Massachusetts. The risk of catastrophic harm, though remote, is nevertheless real. That risk would be reduced to some extent if petitioners received the relief they seek. We therefore hold that petitioners have standing to challenge the EPA’s denial of their rulemaking petition.[Footnote 24]

In short, EPA has offered no reasoned explanation for its refusal to decide whether greenhouse gases cause or contribute to climate change. Its action was therefore “arbitrary, capricious, … or otherwise not in accordance with law.” 42 U. S. C. §7607(d)(9)(A). We need not and do not reach the question whether on remand EPA must make an endangerment finding, or whether policy concerns can inform EPA’s actions in the event that it makes such a finding. Cf. Chevron U. S. A. Inc. v. Natural Resources Defense Council, Inc., 467 U. S. 837, 843–844 (1984). We hold only that EPA must ground its reasons for action or inaction in the statute.

My Comment: Note that the citations of scientific proof were uncontested assertions by petitioners.  Note also that the majority did not rule that EPA must make an endangerment finding:  “We hold only that EPA must ground its reasons for action or inaction in the statute.”

From the Minority Dissenting Opinion

It is not at all clear how the Court’s “special solicitude” for Massachusetts plays out in the standing analysis, except as an implicit concession that petitioners cannot establish standing on traditional terms. But the status of Massachusetts as a State cannot compensate for petitioners’ failure to demonstrate injury in fact, causation, and redressability.

When the Court actually applies the three-part test, it focuses, as did the dissent below, see 415 F. 3d 50, 64 (CADC 2005) (opinion of Tatel, J.), on the State’s asserted loss of coastal land as the injury in fact. If petitioners rely on loss of land as the Article III injury, however, they must ground the rest of the standing analysis in that specific injury. That alleged injury must be “concrete and particularized,” Defenders of Wildlife, 504 U. S., at 560, and “distinct and palpable,” Allen, 468 U. S., at 751 (internal quotation marks omitted). Central to this concept of “particularized” injury is the requirement that a plaintiff be affected in a “personal and individual way,” Defenders of Wildlife, 504 U. S., at 560, n. 1, and seek relief that “directly and tangibly benefits him” in a manner distinct from its impact on “the public at large,” id., at 573–574. Without “particularized injury, there can be no confidence of ‘a real need to exercise the power of judicial review’ or that relief can be framed ‘no broader than required by the precise facts to which the court’s ruling would be applied.’ ” Warth v. Seldin, 422 U. S. 490, 508 (1975) (quoting Schlesinger v. Reservists Comm. to Stop the War, 418 U. S. 208, 221–222 (1974)).

The very concept of global warming seems inconsistent with this particularization requirement. Global warming is a phenomenon “harmful to humanity at large,” 415 F. 3d, at 60 (Sentelle, J., dissenting in part and concurring in judgment), and the redress petitioners seek is focused no more on them than on the public generally—it is literally to change the atmosphere around the world.

If petitioners’ particularized injury is loss of coastal land, it is also that injury that must be “actual or imminent, not conjectural or hypothetical,” Defenders of Wildlife, supra, at 560 (internal quotation marks omitted), “real and immediate,” Los Angeles v. Lyons, 461 U. S. 95, 102 (1983) (internal quotation marks omitted), and “certainly impending,” Whitmore v. Arkansas, 495 U. S. 149, 158 (1990) (internal quotation marks omitted).

As to “actual” injury, the Court observes that “global sea levels rose somewhere between 10 and 20 centimeters over the 20th century as a result of global warming” and that “[t]hese rising seas have already begun to swallow Massachusetts’ coastal land.” Ante, at 19. But none of petitioners’ declarations supports that connection. One declaration states that “a rise in sea level due to climate change is occurring on the coast of Massachusetts, in the metropolitan Boston area,” but there is no elaboration. Petitioners’ Standing Appendix in No. 03–1361, etc. (CADC), p. 196 (Stdg. App.). And the declarant goes on to identify a “significan[t]” non-global-warming cause of Boston’s rising sea level: land subsidence. Id., at 197; see also id., at 216. Thus, aside from a single conclusory statement, there is nothing in petitioners’ 43 standing declarations and accompanying exhibits to support an inference of actual loss of Massachusetts coastal land from 20th century global sea level increases. It is pure conjecture.

The Court ignores the complexities of global warming, and does so by now disregarding the “particularized” injury it relied on in step one, and using the dire nature of global warming itself as a bootstrap for finding causation and redressability.

Petitioners are never able to trace their alleged injuries back through this complex web to the fractional amount of global emissions that might have been limited with EPA standards. In light of the bit-part domestic new motor vehicle greenhouse gas emissions have played in what petitioners describe as a 150-year global phenomenon, and the myriad additional factors bearing on petitioners’ alleged injury—the loss of Massachusetts coastal land—the connection is far too speculative to establish causation.

From Justice Scalia’s Dissenting Opinion

Even on the Court’s own terms, however, the same conclusion follows. As mentioned above, the Court gives EPA the option of determining that the science is too uncertain to allow it to form a “judgment” as to whether greenhouse gases endanger public welfare. Attached to this option (on what basis is unclear) is an essay requirement: “If,” the Court says, “the scientific uncertainty is so profound that it precludes EPA from making a reasoned judgment as to whether greenhouse gases contribute to global warming, EPA must say so.” Ante, at 31. But EPA has said precisely that—and at great length, based on information contained in a 2001 report by the National Research Council (NRC) entitled Climate Change Science:

“As the NRC noted in its report, concentrations of [greenhouse gases (GHGs)] are increasing in the atmosphere as a result of human activities (pp. 9–12). It also noted that ‘[a] diverse array of evidence points to a warming of global surface air temperatures’ (p. 16). The report goes on to state, however, that ‘[b]ecause of the large and still uncertain level of natural variability inherent in the climate record and the uncertainties in the time histories of the various forcing agents (and particularly aerosols), a [causal] linkage between the buildup of greenhouse gases in the atmosphere and the observed climate changes during the 20th century cannot be unequivocally established. The fact that the magnitude of the observed warming is large in comparison to natural variability as simulated in climate models is suggestive of such a linkage, but it does not constitute proof of one because the model simulations could be deficient in natural variability on the decadal to century time scale’ (p. 17).

“The NRC also observed that ‘there is considerable uncertainty in current understanding of how the climate system varies naturally and reacts to emissions of [GHGs] and aerosols’ (p. 1). As a result of that uncertainty, the NRC cautioned that ‘current estimate of the magnitude of future warming should be regarded as tentative and subject to future adjustments (either upward or downward).’ Id. It further advised that ‘[r]educing the wide range of uncertainty inherent in current model predictions of global climate change will require major advances in understanding and modeling of both (1) the factors that determine atmospheric concentrations of [GHGs] and aerosols and (2) the so-called “feedbacks” that determine the sensitivity of the climate system to a prescribed increase in [GHGs].’ Id.

“The science of climate change is extraordinarily complex and still evolving. Although there have been substantial advances in climate change science, there continue to be important uncertainties in our understanding of the factors that may affect future climate change and how it should be addressed. As the NRC explained, predicting future climate change necessarily involves a complex web of economic and physical factors including: Our ability to predict future global anthropogenic emissions of GHGs and aerosols; the fate of these emissions once they enter the atmosphere (e.g., what percentage are absorbed by vegetation or are taken up by the oceans); the impact of those emissions that remain in the atmosphere on the radiative properties of the atmosphere; changes in critically important climate feedbacks (e.g., changes in cloud cover and ocean circulation); changes in temperature characteristics (e.g., average temperatures, shifts in daytime and evening temperatures); changes in other climatic parameters (e.g., shifts in precipitation, storms); and ultimately the impact of such changes on human health and welfare (e.g., increases or decreases in agricultural productivity, human health impacts). The NRC noted, in particular, that ‘[t]he understanding of the relationships between weather/climate and human health is in its infancy and therefore the health consequences of climate change are poorly understood’ (p. 20). Substantial scientific uncertainties limit our ability to assess each of these factors and to separate out those changes resulting from natural variability from those that are directly the result of increases in anthropogenic GHGs.

“Reducing the wide range of uncertainty inherent in current model predictions will require major advances in understanding and modeling of the factors that determine atmospheric concentrations of greenhouse gases and aerosols, and the processes that determine the sensitivity of the climate system.” 68 Fed. Reg. 52930.

I simply cannot conceive of what else the Court would like EPA to say.

Conclusion

Justice Scalia laid the axe to the roots of this poisonous tree. Even the scientific source document relied on by the majority admits that claims of man-made warming are conjecture without certain evidence. This case does not prove CAGW, despite being repeatedly cited as though it did.

2025: The Legal Landscape Has Shifted For EPA

But the legal landscape has changed considerably in recent years, leaving opponents of Zeldin’s effort an uphill battle. First is the changed make-up of the Supreme Court. When Massachusetts v. EPA was decided in 2007, the Court was evenly divided between four conservatives and four liberals, with Anthony Kennedy, a moderate, serving as the Court’s “swing vote” in many major decisions. Kennedy was the deciding vote in that case, siding with the four liberal justices.

But conservatives hold an overwhelming 6-3 majority on today’s Supreme Court. While Chief Justice John Roberts and Associate Justice Amy Coney Barrett have occasionally sided with the Court’s three liberal justices, there is little reason to think that would happen in a reconsideration of Massachusetts v. EPA. That seems especially true for Chief Justice Roberts, who wrote a dissenting opinion in the 2007 decision.

The Supreme Court’s 2024 decision in Loper Bright Enterprises v. Raimondo could present another major challenge for Zeldin’s opponents to overcome. In a 6-3 decision in that case, the Court overturned the longstanding Chevron deference doctrine.

As I wrote at the time, [w]hen established in 1984 in a unanimous 6-0 decision written by Justice John Paul Stevens, Chevron instructed federal courts to defer to regulatory agencies’ interpretations of the statutes they administer when their regulations were challenged in litigation. Since that time, agencies focused on extending their authority well beyond the original intent of their governing statutes have relied on the doctrine to ensure they would not be overturned.

Chevron deference also largely paralyzed the judicial branch, preventing it from decisively reviewing and overruling elements of the Biden agenda whenever the EPA, the Bureau of Land Management or other agencies imposed regulations that may lie outside the scope and intent of their governing statutes. In effect, the doctrine served as a key enabler of the massive growth of what has come to be known as the US administrative state.

The question now becomes whether the current Supreme Court, with its strong conservative majority, will uphold the reasoning of Massachusetts v. EPA in the absence of Chevron deference.

The Bottom Line For Zeldin And EPA

Opponents of the expansion of EPA air regulations under the Obama and Biden presidencies have long contended that the underpinnings for those actions – Massachusetts v. EPA and the 2009 endangerment finding – were a classic legal house of cards that would ultimately come tumbling down when the politics and makeup of the Supreme Court shifted.

Trump and Zeldin are betting that both factors now favor these major actions at EPA. Only time, and an array of major court battles to come, will tell. [Source: David Blackmon at Forbes]

Footnote:  

Taking the sea level rise projected by Sea Change Boston, and through the magic of CAI (Computer-Aided Imagining), we can compare to tidal gauge observations at Boston:

[Images: Sea Change Boston’s projected sea-level rise compared with tidal-gauge observations at Boston]

Don’t Fall for Carney’s Carbon Tax Trick

Kenneth Green explains in his Toronto Sun article Carney’s climate plan will keep costing Canadians money.  Excerpts in italics with my bolds and added images.

Mark Carney, our next prime minister, has floated a climate policy plan that he says will be better for Canadians than the “divisive (read: widely hated) consumer carbon tax.”

But in reality, Carney’s plan is an exercise in misdirection. Instead of paying the “consumer carbon tax” directly and receiving carbon rebates, Canadians will pay more through higher prices for products from Canada’s “large industrial emitters,” whom Carney plans to saddle with higher carbon taxes. Those costs will be passed on to consumers, imposing the consumer carbon tax indirectly.

Carney also wants to shift government subsidies toward so-called “clean technology” consumer products. As Carney told the National Observer: “We’re introducing changes so that if you decide to insulate your home, install a heat pump, or switch to a fuel-efficient car, those companies will pay you — not the taxpayer, not the government, but those companies.”

What Carney does not mention is that much of the cost imposed on “those companies” will also be folded into the prices consumers pay, while the cause of those rising prices becomes harder to trace back to government action.
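The pass-through mechanism Green describes is simple arithmetic. A minimal sketch, with every dollar figure and the pass-through rate invented purely for illustration (none of these numbers come from Carney’s plan or Green’s article):

```python
# Hypothetical illustration of carbon-cost pass-through: an industrial
# carbon tax raises a producer's cost, and some share of that cost is
# passed on to consumers through higher prices.
def consumer_price(base_price, carbon_cost, pass_through_rate):
    """Price consumers pay when a share of the carbon cost is passed on."""
    return base_price + carbon_cost * pass_through_rate

# Invented example: a $100 product, $10/unit in carbon costs, 80% passed through.
price = consumer_price(100.0, 10.0, 0.80)
print(f"${price:.2f}")  # $108.00 -- an 8% price increase the consumer never sees itemized
```

The point of the sketch is the last line: the consumer pays the tax either way, but nothing on the receipt identifies the increase as a carbon tax.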

Moreover, Carney says he wants to make Canada a “clean energy superpower” and “expand and modernize our energy infrastructure so that we are less dependent on foreign suppliers, and the United States as a customer.” But this too is absurd. Far from being in any way poised to become a “clean energy superpower,” Canada likely won’t meet its own projected electricity demand by 2050 under existing environmental regulations.

For example, to generate the electricity needed through 2050 solely with solar power, Canada would need to build 840 solar-power generation stations the size of Alberta’s Travers Solar Project, which would take about 1,700 construction years to accomplish.

If we went with wind power to meet future demand, Canada would need to build 574 wind power installations the size of Quebec’s Seigneurie de Beaupre wind-power station, which would take about 1,150 construction years to accomplish. And if we relied solely on hydropower, we’d need to build 134 hydro-power facilities the size of the Site C power station in British Columbia, which would take 938 construction years to accomplish.

Finally, if we relied solely on nuclear power, we’d need to construct 16 new nuclear plants the size of Ontario’s Bruce Nuclear Generating Station, taking “only” 112 construction years to accomplish.
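The facility counts and total construction-years above imply a consistent per-facility build time. A quick back-of-envelope check, using only the figures stated in the article (the per-facility rates are derived from them, not stated in the source):

```python
# (facilities needed, total construction-years) for each technology,
# as given in the article.
scenarios = {
    "solar (Travers-scale)": (840, 1700),
    "wind (Seigneurie de Beaupre-scale)": (574, 1150),
    "hydro (Site C-scale)": (134, 938),
    "nuclear (Bruce-scale)": (16, 112),
}

# Divide total construction-years by facility count to recover the
# implied build time per facility.
for name, (count, total_years) in scenarios.items():
    per_facility = total_years / count
    print(f"{name}: {per_facility:.1f} construction-years per facility")
# -> roughly 2 construction-years per solar or wind site,
#    and 7 per hydro or nuclear plant
```

Read this way, the “construction years” figures are simply per-facility build times multiplied out serially; building many facilities in parallel would shorten the calendar time but not the total labor and capital implied.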

Again, Mark Carney’s climate plan is an exercise in misdirection — a rhetorical sleight of hand to convince Canadians that he’ll lighten the burden on taxpayers and shift away from the Trudeau government’s overzealous climate policies of the past decade. But scratch the surface of the Carney plan and you’ll find climate policies that will hit Canadian consumers harder, likely driving up prices for goods and services.

As a federal election looms, Canadians should demand from all candidates — no matter their political stripe — a detailed plan to rekindle Canada’s energy sector and truly lighten the load for Canadians and their families.