Modern Politics Seen as Classes Power Game

Joel Kotkin makes sense of the confusing US politics around the 2020 presidential campaigning. He writes “A class guide to the 2020 presidential election” in the Orange County Register. Excerpts in italics with my bolds.

America’s electorate in 2020 has been dissected by race, region, cultural attitudes and gender. But the most important division may well be, in a nation that has become profoundly unequal, along class lines. All politicians, from Donald Trump to Elizabeth Warren, portray themselves as “fighting for the middle class” and “working families.”

Yet our increasingly neo-feudal America is best broken down into four broad groups — the oligarchs, the clerisy, the yeomanry and the serfs. The oligarchs dominate the economic realm, including control of information media. Below them are sometimes allied members of the clerisy, the well-educated middle class who set the country’s intellectual and cultural context.

Below them are the two most numerous classes — the property-owning yeomanry and, most numerous of all, the expanding new serfdom. Understanding these groups provides a valuable insight into 2020’s realities.

The candidates of the oligarchy

The oligarchs, roughly the top .01 percent, now own the highest share of wealth in almost a century. They can fund nonprofits, media outlets, campaigns and political action committees with almost unlimited largesse. The oligarchy’s wealthiest and most influential members hail from the tech sector, Wall Street and Hollywood. In recent decades they have created a plutocrat-funded Democratic Party backing economically non-threatening but culturally and environmentally liberal figures like Bill Clinton and Barack Obama.


At first Joe Biden seemed to be winning the battle for oligarchal support. But his poor performance has opened the field for Kamala Harris, who enjoys long-standing financial ties, both political and through her husband’s law practice, to big media companies, telecom providers, Hollywood and, most of all, Silicon Valley. Harris offers gentry liberal delight — telegenic, smart, female, non-white but without posing the threat to oligarchal power represented by Elizabeth Warren and, even worse, Bernie Sanders.

Trump, of course, also boasts oligarchal supporters from older sectors of the corporate elite — retail chain owners, builders of single-family homes, manufacturing and energy executives. Given the Democratic embrace of the Green New Deal, massive redistribution of income and reversing corporate tax cuts, a lot of old economy money will flow into Dr. Demento’s coffers this time around.

The clerisy’s favorite

What analyst Michael Lind calls the “overclass” — made up of academics, the media and well-paid professionals — represents some 15 percent of the American workforce. This group has done better than the traditional middle class, let alone the working class, but over the past few decades has lost much ground against the oligarchs, who have reaped the vast majority of the economic gains.

Like the rising professional classes of the gilded age, many in the clerisy are offended by the huge wealth of the oligarchs. Harvard’s Elizabeth Warren reprises the role performed by Princeton’s Woodrow Wilson over a century ago. Her most radical proposals target not the affluent middle class but the super-rich, notably through anti-trust, while her wealth tax impacts only people with over $50 million. Most of her financial support, not surprisingly, comes from women’s groups and academics. Only Pete Buttigieg, with his base of gay support, comes close to competing in the intersectional sweepstakes.

Warren’s insistence on calling herself a “capitalist” separates her from Bernie Sanders’ full-throated socialism, with its odd Soviet nostalgia. It helps her appeal to those who still have something to protect. Sanders also loses by dint of his race and sex; Warren may have failed to prove her Native American credentials, but her gender remains an asset at a time when being old, white and male is not the preferred brand among progressives.

The Yeomanry: Trump’s to lose

Most of America sees itself as middle class. But there’s a growing gap between the yeomanry — small business and property owners — and the clerisy, as well as a vast, expanding class of permanently landless serfs. Most members of the yeomanry work in the private sector; unlike the clerisy, for them government regulation provides not employment but a burden.

They gained little from the largely asset-based prosperity of the Obama years but have done far better under Trump. Many suburban dwellers and property owners may find Trump personally abhorrent (which is easy to do) but are directly threatened by a Democratic Party anxious to force up worker wages, control rents, boost regulations and raise taxes.

Many of these voters also would not like to give up their private health insurance, which Warren, Sanders and, intermittently, Harris have demanded. As the Democrats go further left, this constituency is likely to line up largely with Trump or simply abstain, given the awfulness of the choices.

Serfs and the “blue tidal wave”

The property-less working class does not tend to vote as much as the yeomanry, but their numbers are growing. Some are déclassé millennials unable to launch full careers or afford to buy houses. Unlike previous generations, they also have been reluctant to start businesses.

Many of the new serf class inhabit the precariat, a modern proletariat lacking the protections of steady work and trade unions. Many participate in the gig economy as Uber drivers, trainers, personal assistants and contract technicians. Most depend on their gigs for their livelihood, and they are often poorly paid; according to one study, nearly half of gig workers in California are under the poverty line.

With little stake in the capitalist economy, the youthful members of the precariat have been drawn to the socialist appeals of Sanders and, increasingly, Warren. The leftist American Prospect sees them driving a potential new “blue tidal wave.”

Yet if economics impels this class toward the Democrats, two factors may work against that pull, particularly among members who did not attend college or are older. First, low-income workers, including minorities, generally have done better under Trump than under Obama, something the president’s handlers will no doubt emphasize.

The other is Democratic support for such things as reparations, health care for the undocumented, open borders and a virtually unlimited right to abortion. These positions may not play well in blue-collar communities, particularly in the Midwest, Great Plains and the South. Whoever wins the Democratic nomination cannot win on the support of the clerical and oligarchal elites alone; they must also win over the serf vote, which they are now in danger of squandering.

Joel Kotkin is the R.C. Hobbs Presidential Fellow in Urban Futures at Chapman University in Orange and executive director of the Houston-based Center for Opportunity Urbanism.

NYT Does More Weird Science: Heat Waves

Ronald Bailey writes at Reason “The New York Times Says Heat Waves Are Getting Worse. The National Climate Assessment Disagrees.” Excerpts in italics with my bolds.

Americans east of the Rockies are sweltering as daytime temperatures soar toward 100 degrees or more. It is now customary for journalists covering big weather events to speculate on how man-made climate change may be affecting them, and the current heat wave is no exception. Take this headline in The New York Times: “Heat Waves in the Age of Climate Change: Longer, More Frequent and More Dangerous.”

As evidence, the Times cites the U.S. Global Change Research Program, reporting that “since the 1960s the average number of heat waves—defined as two or more consecutive days where daily lows exceeded historical July and August temperatures—in 50 major American cities has tripled.” That is indeed what the numbers show. But it seems odd to highlight the trend in daily low temperatures rather than daily high temperatures.

As it happens, chapter six of 2017’s Fourth National Climate Assessment reports that heat waves measured as high daily temperatures are becoming less common in the contiguous U.S., not more frequent.

Here, from the report, are the “observed changes in the coldest and warmest daily temperatures (°F) of the year for each National Climate Assessment region in the contiguous United States.” The “changes,” it explains, “are the difference between the average for present-day (1986–2016) and the average for the first half of the last century (1901–1960).”

And here is the Heat Wave Magnitude Index, which shows the maximum magnitude of a year’s heat waves. (The report defines a heat wave as a period of at least three consecutive days where the maximum temperature is above the appropriate threshold.)

The maps below, from the Fourth Assessment, illustrate the trends in the warmest (generally daytime) and coldest (generally nighttime) temperatures in the contiguous U.S.:

According to the Intergovernmental Panel on Climate Change, climate models tend to significantly underestimate the decrease in the diurnal temperature range—that is, the difference between minimum and maximum daily temperatures—over the last 50 years. The panel’s latest report notes that there is “medium confidence” that “the length and frequency of warm spells, including heat waves, has increased since the middle of the 20th century” around the world. Medium confidence means there is about a 50 percent chance of the finding being correct. (The report does deem it “likely that heatwave frequency has increased during this period in large parts of Europe, Asia and Australia.”)

Big tip of the hat to the University of Colorado’s invaluable Roger Pielke Jr.

Footnote: Since June 2019 was only the 24th warmest in the US, alarmists will be playing catch up this summer.  A previous post explains how to mine the data to produce the bias you want from the billions of measurements recorded. Clear Thinking about Heat Records

Ocean SSTs Mixed in June

The best context for understanding decadal temperature changes comes from the world’s sea surface temperatures (SST), for several reasons:

  • The ocean covers 71% of the globe and drives average temperatures;
  • SSTs have a constant water content, (unlike air temperatures), so give a better reading of heat content variations;
  • A major El Nino was the dominant climate feature in recent years.

HadSST is generally regarded as the best of the global SST data sets, and so the temperature story here comes from that source, the latest version being HadSST3.  More on what distinguishes HadSST3 from other SST products at the end.

The Current Context

The chart below shows SST monthly anomalies as reported in HadSST3 starting in 2015 through June 2019.
A global cooling pattern is seen clearly in the Tropics since its peak in 2016, joined by NH and SH cycling downward since 2016. 2018 started with slow warming after the low point of December 2017, led by a steadily rising NH, which peaked in September and has cooled since. The Tropics rose steadily until November, then cooled before returning to the same level.

In 2019 all regions had been converging toward nearly the same value in April. Now in June, NH rose sharply while SH dropped by the same amount, and Tropics SSTs are holding steady. As a result the Global average anomaly is up 0.04 to 0.56C. All regions are at about the same level as 05/2017, which led into a cooling period despite NH warming at the time.

Note that higher temps in 2015 and 2016 were first of all due to a sharp rise in Tropical SST, beginning in March 2015, peaking in January 2016, and steadily declining back below its beginning level. Secondly, the Northern Hemisphere added three bumps on the shoulders of Tropical warming, with peaks in August of each year.  A fourth NH bump was lower and peaked in September 2018.  Also, note that the global release of heat was not dramatic, due to the Southern Hemisphere offsetting the Northern one.

The annual SSTs for the last five years are as follows:

Annual SSTs   Global   NH      SH      Tropics
2014          0.477    0.617   0.335   0.451
2015          0.592    0.737   0.425   0.717
2016          0.613    0.746   0.486   0.708
2017          0.505    0.650   0.385   0.424
2018          0.480    0.620   0.362   0.369

2018 annual average SSTs across the regions are close to 2014, slightly higher in SH and much lower in the Tropics.  The SST rise from the global ocean was remarkable, peaking in 2016, higher than 2011 by 0.32C.
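The “close to 2014” comparison can be checked directly against the table above; a minimal sketch in Python, with the anomaly values copied from the table:

```python
# Annual SST anomalies (degrees C), copied from the table above.
ssts = {
    2014: {"Global": 0.477, "NH": 0.617, "SH": 0.335, "Tropics": 0.451},
    2018: {"Global": 0.480, "NH": 0.620, "SH": 0.362, "Tropics": 0.369},
}

# Difference of 2018 vs 2014 for each region.
deltas = {region: round(ssts[2018][region] - ssts[2014][region], 3)
          for region in ssts[2014]}

# Global and NH nearly unchanged, SH slightly up, Tropics well down.
print(deltas)
```

The output confirms the text: Global and NH differ by only 0.003C, SH is up 0.027C, and the Tropics are down 0.082C.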

A longer view of SSTs

The graph below  is noisy, but the density is needed to see the seasonal patterns in the oceanic fluctuations.  Previous posts focused on the rise and fall of the last El Nino starting in 2015.  This post adds a longer view, encompassing the significant 1998 El Nino and since.  The color schemes are retained for Global, Tropics, NH and SH anomalies.  Despite the longer time frame, I have kept the monthly data (rather than yearly averages) because of interesting shifts between January and July.

HadSST 1995 to 06/2019. Open image in new tab to enlarge.

1995 is a reasonable starting point prior to the first El Nino.  The sharp Tropical rise peaking in 1998 is dominant in the record, starting Jan. ’97 to pull up SSTs uniformly before returning to the same level Jan. ’99.  For the next 2 years, the Tropics stayed down, and the world’s oceans held steady around 0.2C above 1961 to 1990 average.

Then comes a steady rise over two years to a lesser peak Jan. 2003, but again uniformly pulling all oceans up around 0.4C.  Something changes at this point, with more hemispheric divergence than before. Over the 4 years until Jan 2007, the Tropics go through ups and downs, NH a series of ups and SH mostly downs.  As a result the Global average fluctuates around that same 0.4C, which also turns out to be the average for the entire record since 1995.

2007 stands out with a sharp drop in temperatures, so that Jan. ’08 matches the low of Jan. ’99 but starting from a lower high. The oceans all decline as well, until temps build to a peak in 2010.

Now again a different pattern appears.  The Tropics cool sharply to Jan 11, then rise steadily for 4 years to Jan 15, at which point the most recent major El Nino takes off.  But this time in contrast to ’97-’99, the Northern Hemisphere produces peaks every summer pulling up the Global average.  In fact, these NH peaks appear every July starting in 2003, growing stronger to produce 3 massive highs in 2014, 15 and 16.  NH July 2017 was only slightly lower, and a fifth NH peak still lower in Sept. 2018.  Note also that starting in 2014 SH plays a moderating role, offsetting the NH warming pulses. (Note: these are high anomalies on top of the highest absolute temps in the NH.)

What to make of all this? The patterns suggest that in addition to El Ninos in the Pacific driving the Tropic SSTs, something else is going on in the NH.  The obvious culprit is the North Atlantic, since I have seen this sort of pulsing before.  After reading some papers by David Dilley, I confirmed his observation of Atlantic pulses into the Arctic every 8 to 10 years.

But the peaks coming nearly every summer in HadSST require a different picture.  Let’s look at August, the hottest month in the North Atlantic from the Kaplan dataset.
AMO August 2018

The AMO Index is from Kaplan SST v2, the unaltered and not detrended dataset. By definition, the data are monthly average SSTs interpolated to a 5×5 grid over the North Atlantic, basically 0 to 70N. The graph shows warming began after 1992 up to 1998, with a series of matching years since. Because the N. Atlantic has partnered with the Pacific ENSO recently, let’s take a closer look at some AMO years in the last 2 decades.
This graph shows monthly AMO temps for some important years. The Peak years were 1998, 2010 and 2016, with the latter emphasized as the most recent. The other years show lesser warming, with 2007 emphasized as the coolest in the last 20 years. Note the red 2018 line is at the bottom of all these tracks. The short black line shows that 2019 began slightly cooler and is now tracking last year closely.


The oceans are driving the warming this century.  SSTs took a step up with the 1998 El Nino and have stayed there with help from the North Atlantic, and more recently the Pacific northern “Blob.”  The ocean surfaces are releasing a lot of energy, warming the air, but eventually will have a cooling effect.  The decline after 1937 was rapid by comparison, so one wonders: How long can the oceans keep this up? If the pattern of recent years continues, NH SST anomalies may rise slightly in coming months, but once again, ENSO which has weakened will probably determine the outcome.


In the most recent GWPF 2017 State of the Climate report, Dr. Humlum made this observation:

“It is instructive to consider the variation of the annual change rate of atmospheric CO2 together with the annual change rates for the global air temperature and global sea surface temperature (Figure 16). All three change rates clearly vary in concert, but with sea surface temperature rates leading the global temperature rates by a few months and atmospheric CO2 rates lagging 11–12 months behind the sea surface temperature rates.”

Footnote: Why Rely on HadSST3

HadSST3 is distinguished from other SST products because the Hadley Centre (UK Met Office) does not engage in SST interpolation, i.e. infilling estimated anomalies into grid cells lacking sufficient sampling in a given month. From reading the documentation and from queries to the Met Office, this is their procedure.

HadSST3 imports data from gridcells containing ocean, excluding land cells. From past records, they have calculated daily and monthly average readings for each grid cell for the period 1961 to 1990. Those temperatures form the baseline from which anomalies are calculated.

In a given month, each gridcell with sufficient sampling is averaged for the month and then the baseline value for that cell and that month is subtracted, resulting in the monthly anomaly for that cell. All cells with monthly anomalies are averaged to produce global, hemispheric and tropical anomalies for the month, based on the cells in those locations. For example, Tropics averages include ocean grid cells lying between latitudes 20N and 20S.

Gridcells lacking sufficient sampling that month are left out of the averaging, and the uncertainty from such missing data is estimated. IMO that is more reasonable than inventing data to infill. And it seems that the Global Drifter Array displayed in the top image is providing more uniform coverage of the oceans than in the past.
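The procedure described above — per-cell monthly averages, baseline subtraction, then averaging only the qualifying cells — can be sketched in a few lines. This is a simplified illustration with toy numbers and a made-up sampling threshold, not the Met Office code; it omits area weighting and the uncertainty estimation:

```python
# Toy gridcell observations for one month: cell -> sampled SSTs (degrees C).
observations = {"cellA": [16.1, 16.3, 16.2], "cellB": [14.8, 15.0], "cellC": [18.9]}

# 1961-1990 baseline climatology for this calendar month, per cell.
baseline = {"cellA": 15.9, "cellB": 15.1, "cellC": 18.0}

MIN_SAMPLES = 2  # hypothetical sufficiency threshold for illustration
anomalies = {}
for cell, temps in observations.items():
    if len(temps) < MIN_SAMPLES:
        continue  # insufficient sampling: the cell is excluded, not infilled
    monthly_mean = sum(temps) / len(temps)
    anomalies[cell] = monthly_mean - baseline[cell]  # cell anomaly vs its baseline

# Regional/global anomaly = average of the cell anomalies that qualified.
regional_anomaly = sum(anomalies.values()) / len(anomalies)
print(round(regional_anomaly, 3))  # prints 0.05 for these toy numbers; cellC is left out
```

The key design point is visible in the loop: an under-sampled cell simply drops out of the average rather than being filled with an estimate.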


USS Pearl Harbor deploys Global Drifter Buoys in Pacific Ocean

Spaceship Earth Ideology Officers

The image above is from the Hunt for Red October (1990). Sean Connery played Marko Alexandrovich Ramius, a Soviet submarine captain, here in a confrontation with the on board Political Officer Ivan Putin, responsible to ensure the crew conforms to the Communist Party Line and Directives.

The real-life parallel to the submarine drama is reported at New Scientist: “Journal criticised for study claiming sun is causing global warming.” Excerpts in italics with my bolds. (H/T GWPF)

A high profile scientific journal is investigating how it came to publish a study suggesting that global warming is down to natural solar cycles. The paper was criticised by scientists for containing “very basic errors” about how Earth moves around the sun.

The study was published online on 24 June by Scientific Reports, an open access journal run by Nature Research, which also lists the prestigious Nature journal among its titles. A spokesperson told New Scientist that it is aware of concerns raised over the paper, which was authored by four academics based at Northumbria University, the University of Bradford and the University of Hull, all in the UK, plus the Nasir al-Din al-Tusi Shamakhi Astrophysical Observatory in Azerbaijan.

The authors suggest that Earth’s 1°C temperature rise over the past two centuries could largely be explained by the distance between Earth and the sun changing over time as the sun orbits around our solar system’s barycentre, its centre of mass. The phenomenon would see temperatures rise a further 3°C by 2600, they say.

Ken Rice of the University of Edinburgh, UK, criticised the paper for an “elementary” mistake about celestial mechanics. “It’s well known that the sun moves around the barycentre of the solar system due to the influence of the other solar system bodies, mainly Jupiter,” he says. “This does not mean, as the paper is claiming, that this then leads to changes in the distance between the sun and the Earth.”

“The claim that we will see warming in the coming centuries because the sun will move closer to the Earth as it moves around the solar system barycentre is very simply wrong,” adds Rice. He is urging the journal to withdraw the paper, and says it is embarrassing it was published.

Gavin Schmidt of the NASA Goddard Institute for Space Studies says the paper contains egregious errors. “The sun-Earth distance does not vary with the motion of the sun-Earth system around the barycentre of the sun-Jupiter system, nor the sun-galactic centre system or any other purely mathematical reference point,” he says. He says the journal must retract the paper if it wants to retain any credibility.

The Dispute

Michael Brown of Monash University in Australia lamented uncritical media coverage of the paper in Australia.

Following criticism of the paper, lead author Valentina Zharkova, of Northumbria University, described Rice as a “climate alarmist” in an online discussion.

“The close links between oscillations of solar baseline magnetic field, solar irradiance and temperature are established in our paper without any involvement of solar inertial motion,” Zharkova told New Scientist.

Scientific Reports says it has begun an “established process” to investigate the paper it has published. “This process is ongoing and we cannot comment further at this stage,” a spokesperson said.

Ken Rice has form as a Climate Ideology Officer, having led a successful takedown of Hermann Harde’s paper showing human CO2 emissions are only 4% of atmospheric CO2, which is only 0.04% of the air. Rice declared at the outset: “Any paper concluding that humans are not causing the rise in CO2 is obviously wrong.” He and fellow ideology officers quickly cobbled together an attack paper, which was immediately published in the journal. Harde wrote a paper describing the errors and misconstructions in the attack paper, but his response was denied publication, sealing the issue in favor of the party line. This saga of censorship can be read at the No Tricks Zone article AGW Gatekeepers Censor The CO2-Climate Debate By Refusing To Publish Author’s Response To Criticism

Now Rice and his gang are at it again, this time targeting lead author Valentina Zharkova. The tactics are familiar, starting with outrage against findings deviating from their beliefs, in this case the notion that the sun could in any way influence the climate. The comment thread shows the intensity and venom applied to digging up any mistake, no matter how trivial or peripheral to the central argument. These are declared “egregious” and justification to ignore and censor the contrary understanding of nature.

The comment thread started 9 days ago: Oscillations of the baseline of solar magnetic field and solar irradiance on a millennial timescale

Zharkova stands her ground, though always on the defensive and surrounded by a pack of attackers. Rice, as usual, displays his mastery of the English language to demean and undermine while appearing to be reasonable. His Russian opponent is clearly at a disadvantage when it comes to putdowns.

I don’t know enough to judge the substance of the claims, or the pertinence of the details, but can observe that the intensity shows how much is at stake for the attackers. A major irony is that Zharkova forecasts significant warming in the future, which would seem to confirm the warmists’ expectations. However, since she puts the sun as the cause, the finding pulls the rug from under anti-fossil fuel activists, hence the outrage. This also shows the dispute is not about climate or temperature, but about politics.


Michael Mann has been the most aggressive Inquisitor against climate heretics. In his book, “The Hockey Stick and the Climate Wars,” he presented an analogy to explain why he and other researchers have become the objects of such fierce public scrutiny and vilification, which he terms “the Serengeti strategy.” Likening climate scientists to zebras, he writes, “The climate change deniers isolate individual scientists just as predators on the Serengeti Plain of Africa hunt their prey: picking off vulnerable individuals from the rest of the herd.” He asserts that he and others have become targets because their findings challenge the entrenched fossil-fuel industries, which have tried to discredit them.

No one has more chutzpah than M. Mann, he and his pack applying the Serengeti strategy repeatedly against scientists finding other factors than CO2 driving climate changes, all the while claiming to be a victim rather than a predator.

Science 101: Null Test All Claims

Francis Menton provides some essential advice for non-scientists in his recent essay at Manhattan Contrarian You Don’t Need To Be A Scientist To Know That The Global Warming Alarm “Science” Is Fake. Excerpts in italics with my bolds.

When confronted with a claim that a scientific proposition has been definitively proven, ask the question: What was the null hypothesis, and on what basis has it been rejected?

As Menton explains, you don’t need the skills to perform yourself the null test, just the boldness to check how they dismissed the null hypothesis.

Consider first a simple example, the question of whether aspirin cures headaches. Make that our scientific proposition: aspirin cures headaches. How would this proposition be established? You yourself have taken aspirin many times, and your headache always went away. Doesn’t that prove that the aspirin worked? Absolutely not. The fact that you took aspirin 100 times and the headache went away 100 times proves nothing. Why? Because there is a null hypothesis that must first be rejected. Here the null hypothesis is that headaches will go away just as quickly on their own. How do you reject that? The standard method is to take some substantial number of people with headaches, say 2000, and give half of them the aspirin and the other half a placebo. Two hours later, of the 1000 who took the aspirin, 950 feel better and only 50 still have the headache; and of the 1000 who took the placebo, 500 still have the headache. Now you have very, very good proof that aspirin cured the headaches.
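With the hypothetical trial numbers above (950 of 1000 recovered on aspirin vs 500 of 1000 on placebo), the null hypothesis — headaches go away just as quickly on their own — can be rejected formally. A minimal sketch using a standard two-proportion z-test (the numbers are the text’s illustration, not real trial data):

```python
from math import sqrt

# Hypothetical trial from the text: 1000 aspirin takers, 950 recovered;
# 1000 placebo takers, 500 recovered.
n1, cured1 = 1000, 950   # aspirin group
n2, cured2 = 1000, 500   # placebo group

p1, p2 = cured1 / n1, cured2 / n2
pooled = (cured1 + cured2) / (n1 + n2)          # recovery rate if aspirin did nothing
se = sqrt(pooled * (1 - pooled) * (1/n1 + 1/n2))
z = (p1 - p2) / se                              # standard deviations from the null

print(round(z, 1))  # z is about 22.5; far beyond any conventional threshold
```

A z of roughly 22 standard deviations means the observed difference is essentially impossible under the null hypothesis, which is what licenses the causal claim.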

The point to focus on is that the most important evidence — the only evidence that really proves causation — is the evidence that requires rejection of the null hypothesis.

Over to climate science. Here you are subject to a constant barrage of information designed to convince you of the definitive relationship between human carbon emissions and global warming. The world temperature graph is shooting up in hockey stick formation! Arctic sea ice is disappearing! The rate of sea level rise is accelerating! Hurricanes are intensifying! June was the warmest month EVER! And on and on and on. All of this is alleged to be “consistent” with the hypothesis of human-caused global warming.

But, what is the null hypothesis, and on what basis has it been rejected? Here the null hypothesis is that some other factor, or combination of factors, rather than human carbon emissions, was the dominant cause of the observed warming.

Once you pose the null hypothesis, you immediately realize that all of the scary climate information with which you are constantly barraged does not even meaningfully address the relevant question. All of that information is just the analog of your 100 headaches that went away after you took aspirin. How do you know that those headaches wouldn’t have gone away without the aspirin? You don’t know unless someone presents data that are sufficient to reject the null hypothesis. Proof of causation can only come from disproof of the null hypothesis or hypotheses, that is, disproof of other proposed alternative causes. This precept is fundamental to the scientific method, and therefore fully applies to “climate science” to the extent that that field wishes to be real science versus fake science.

Now, start applying this simple check to every piece you read about climate science. Start looking for the null hypothesis and how it was supposedly rejected. In mainstream climate literature — and I’m including here both highbrow media like the New York Times and the so-called “peer reviewed” scientific journals like Nature and Science — you won’t find that. It seems that people calling themselves “climate scientists” today have convinced themselves that their field is such “settled science” that they no longer need to bother with tacky questions like worrying about the null hypothesis.

When climate scientists start addressing the alternative hypotheses seriously, then it will be real science. In the meantime, it’s fake science.


The null test can be applied to any scientific claim. If no null hypothesis is considered, then you can add the report to the file “Unproven Claims” or “Unfounded Suppositions.” Some researchers call them SWAGs: Scientific Wild Ass Guesses. These are not useless, since any discovery starts with a SWAG. But you should avoid believing that they describe the way the world works until alternative explanations have been tested and dismissed.

See Also: No “Gold Standard” Climate Science

No GHG Warming Fingerprints in the Sky

No Causation Without Correlation (FF↔GMT)


Previous posts addressed the claim that fossil fuels are driving global warming. This post updates that analysis with the latest (2018) numbers from BP Statistics and compares World Fossil Fuel Consumption (WFFC) with three estimates of Global Mean Temperature (GMT). More on both these variables below.


2018 statistics are now available from BP for international consumption of Primary Energy sources in the 2019 Statistical Review of World Energy.

The reporting categories are:

  • Oil
  • Natural Gas
  • Coal
  • Nuclear Energy
  • Hydro electricity
  • Renewables (other than hydro)

This analysis combines the first three, Oil, Gas, and Coal for total fossil fuel consumption world wide. The chart below shows the patterns for WFFC compared to world consumption of Primary Energy from 1965 through 2018.

The graph shows that Primary Energy consumption has grown continuously for more than 5 decades. Over that period oil, gas and coal (sometimes termed “Thermal”) averaged 89% of PE consumed, ranging from 94% in 1965 to 85% in 2018.  MToe is millions of tons of oil equivalents.

Global Mean Temperatures

Everyone acknowledges that GMT is a fiction since temperature is an intrinsic property of objects, and varies dramatically over time and over the surface of the earth. No place on earth determines “average” temperature for the globe. Yet for the purpose of detecting change in temperature, major climate data sets estimate GMT and report anomalies from it.

The UAH record consists of satellite-era global temperature estimates for the lower troposphere, a layer of air from 0 to 4 km above the surface. HadSST estimates sea surface temperatures from oceans covering 71% of the planet. HADCRUT combines HadSST estimates with records from land stations whose elevations range up to 6 km above sea level.

Both GISS LOTI (land and ocean) and HADCRUT4 (land and ocean) use 14.0 Celsius as the climate normal, so I will add that number back into the anomalies. This is done without claiming any validity other than to achieve a reasonable measure of the magnitude of the observed fluctuations.

No doubt global sea surface temperatures are typically higher than 14C, more like 17 or 18C, and of course warmer in the tropics and colder at higher latitudes. Likewise, the lapse rate in the atmosphere means that air temperatures both from satellites and elevated land stations will range colder than 14C. Still, that climate normal is a generally accepted indicator of GMT.

Correlations of GMT and WFFC

The next graph compares WFFC to GMT estimates over the 5+ decades from 1965 to 2018 from HADCRUT4, which includes HadSST3.

Over the last five decades the increase in fossil fuel consumption is dramatic and monotonic, rising 234% from 3.5 to 11.7 billion tonnes of oil equivalent.  Meanwhile the GMT record from HadCRUT shows multiple ups and downs, with an accumulated rise of 0.74C over 53 years, about 5% of the starting value.

The second graph compares WFFC to GMT estimates from UAH6, and HadSST3 for the satellite era from 1979 to 2018, a period of 39 years.

In the satellite era WFFC has increased at a compounded rate of nearly 2% per year, for a total increase of 91% since 1979. At the same time, SST warming amounted to 0.42C, or 3% of the starting value.  UAH warming was 0.44C, or 3% up from 1979.  The temperature compounded rate of change is 0.1% per year, an order of magnitude less.  Even more obvious is the 1998 El Nino peak and flat GMT since.
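The compounded rates quoted above can be checked directly from the percentage changes given in the post:

```python
import math

def cagr(start, end, years):
    """Compound annual growth rate."""
    return (end / start) ** (1 / years) - 1

# Satellite era, 1979-2018 (39 years), figures from the post:
# WFFC up 91%; SST up ~3% relative to the 14°C-based starting value.
wffc_rate = cagr(1.0, 1.91, 39)
temp_rate = cagr(1.0, 1.03, 39)
print(f"{wffc_rate:.1%}")  # → 1.7%
print(f"{temp_rate:.2%}")  # → 0.08%
```

The temperature rate rounds to the 0.1% per year stated above, an order of magnitude below the fossil fuel growth rate.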


The climate alarmist/activist claim is straightforward: burning fossil fuels makes measured temperatures warmer. The Paris Accord further asserts that by reducing human use of fossil fuels, further warming can be prevented.  Those claims do not bear up under scrutiny.

It is enough for simple minds to see that two time series are both rising and to think that one must be causing the other. But both scientific and legal methods assert causation only when the two variables are both strongly and consistently aligned. The above shows a weak and inconsistent linkage between WFFC and GMT.

Going further back in history shows even weaker correlation between fossil fuels consumption and global temperature estimates:


Figure 5.1. Comparative dynamics of the World Fuel Consumption (WFC) and Global Surface Air Temperature Anomaly (ΔT), 1861-2000. The thin dashed line represents annual ΔT, the bold line—its 13-year smoothing, and the line constructed from rectangles—WFC (in millions of tons of nominal fuel) (Klyashtorin and Lyubushin, 2003). Source: Frolov et al. 2009

In legal terms, as long as there is another equally or more likely explanation for the set of facts, the claimed causation is unproven. The more likely explanation is that global temperatures vary due to oceanic and solar cycles. The proof is clearly and thoroughly set forward in the post Quantifying Natural Climate Change.

Background context for today’s post is at Claim: Fossil Fuels Cause Global Warming.

LA Times Misreports Mexican Energy Realism


Emily Green writes at LA Times Alternative energy efforts in Mexico slow as Lopez Obrador prioritizes oil. Excerpts in italics with my bolds.

The title of the article is not wrong, as we shall see below. But as usual climatists leave out the reality so obvious in the pie chart above. Seeing which energy sources are driving his nation’s prosperity provides the missing context for understanding the priorities of Mexican President Andres Manuel Lopez Obrador.

The alarmist/activist hand-wringing is in full display:

With its windy valleys and wide swaths of desert, Mexico has some of the best natural terrain to produce wind and solar energy. And, in recent years, the country has attracted alternative energy investors from across the globe.

An aerial view of the Villanueva photovoltaic power plant in the municipality of Viesca, Coahuila state, Mexico. The plant covers an area the size of 40 football fields making it the largest solar plant in the Americas. (Alfredo Estrella / AFP/Getty Images)

But the market has taken a step back under Mexico’s new president, who has made clear his priority is returning Mexico’s oil company to its former dominance.

Since taking office Dec. 1, President Andres Manuel Lopez Obrador has canceled a highly anticipated electricity auction, as well as two major transmission-line projects that would have transported power generated by renewable energy plants around the country. He has also called for more investment in coal, and stood by as his director of Mexico’s electric utility dismissed wind and solar energy as unreliable and expensive.

It’s too soon to forecast the long-term consequences, but business leaders and energy consultants are seeing a trend: a chilling in the country’s up-and-coming renewable energy market.

Further on we get the usual distortions and misdirection: renewables capacities and low prices are cited while ignoring the low actual production and the intermittency mismatch with actual needs.

Energy and oil remain sensitive topics in Mexico, where people still recall the glory days of state-owned oil company Pemex, when it was the country’s economic lifeblood. There’s even a day commemorating Mexico’s 1938 nationalization of its oil and mineral wealth.

In recent years, however, Mexico’s energy market has undergone a transformation and reached out to investors. In 2014, Lopez Obrador’s predecessor, Enrique Peña Nieto, fully opened up the country’s oil, gas and electricity sector to private investment for the first time in 70 years.

The effects were immediate. In the oil sector, companies such as ExxonMobil and Chevron clamored to explore large deposits that had once been the sole purview of Pemex.

On the electricity side, the reform led to billions of dollars in private investment in Mexico’s power sector, both in renewable energy and traditional sources such as natural gas.

Through a series of auctions, Mexico’s state-owned utility awarded long-term power contracts to private developers. Although the auctions were open to all energy technologies, wind and solar companies won the bulk of the contracts because they offered among the lowest prices in the world. Solar developers won contracts to generate electricity in Mexico at around $20 per megawatt-hour, according to the government. Industry sources said that is about half the going price for coal and gas.

The country’s wind generation capacity jumped from 2,360 megawatts at the end of 2014 to 5,382 megawatts this April, according to the Mexican wind energy association. The numbers were even more stark in solar, which soared from 166 megawatts of capacity in 2014 to 2,900 megawatts in April, according to the Mexican solar energy association.
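The growth multiples implied by those capacity figures are easy to confirm:

```python
# Growth multiples implied by the capacity figures quoted above.
wind_2014, wind_2019 = 2360, 5382   # MW, Mexican wind energy association
solar_2014, solar_2019 = 166, 2900  # MW, Mexican solar energy association

print(round(wind_2019 / wind_2014, 1))    # → 2.3
print(round(solar_2019 / solar_2014, 1))  # → 17.5
```

Note these are nameplate capacities; at an assumed 20% capacity factor, 2,900 MW of solar would yield only about 5 TWh per year, which is why capacity growth can overstate the contribution to energy actually consumed.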

Virtue Signalling is an Expensive Way to Run an Economy

The electricity auctions were also seen as the main vehicle for Mexico to reach its clean energy commitments made as part of the Paris climate accord to produce 35% of its electricity from clean energy sources by 2024, and 50% by 2050. Under Mexico’s definition, clean energy sources include solar and wind generation, as well as sources that some critics say aren’t environmentally friendly — such as hydroelectric dams, nuclear energy and efficient natural gas plants. Currently, 24% of Mexico’s electricity comes from clean energy sources.


Note that for true believers, no energy is “clean” except wind and solar. And Mexico is another example of how renewables cannibalize your electrical grid while claiming to be cheaper than FF sources and saving the planet from the plant food gas CO2. Meanwhile those two “zero carbon” sources provide only 2% of the energy consumed, despite the billions invested.

I get the impression that AMLO is much smarter than AOC.
See Also

Exaggerating Green Energy Supply

Cutting Through the Fog of Renewable Power Costs

Superhuman Renewables Targets




More 2019 Evidence of Nature’s Sunscreen

Greenhouse with adjustable sun screens to control warming.

Update July 12, 2019

A paper was just published by an IPCC reviewer No Empirical Evidence for Significant Anthropogenic Climate Change by J. Kauppinen and P. Malmi. Excerpts in italics with my bolds. H/T WUWT

An analysis by Finnish researchers adds to the chain of studies supporting the Cosmoclimatology theory first proposed by Svensmark. Their focus is on the relation between the changes in temperatures and the changes in low cloud cover.  Their findings are consistent with the global brightening and dimming research centered at ETH Zurich, which is elaborated later on.

Figure 2. [2] Global temperature anomaly (red) and the global low cloud cover changes (blue) according to the observations. The anomalies are between summer 1983 and summer 2008. The time resolution of the data is one month, but the seasonal signal is removed. Zero corresponds to about 15°C for the temperature and 26% for the low cloud cover.

It turns out that the changes in the relative humidity and in the low cloud cover depend on each other [4]. So, instead of low cloud cover we can use the changes of the relative humidity in order to derive the natural temperature anomaly. According to the observations 1% increase of the relative humidity decreases the temperature by 0.15°C, and consequently the last term in the above equation can be approximated by −15°C∆φ, where ∆φ is the change of the relative humidity at the altitude of the low clouds.

Figure 4 shows the sum of the temperature changes due to the natural and CO2 contributions compared with the observed temperature anomaly. The natural component has been calculated using the changes of the relative humidity. Now we see that the natural forcing does not explain fully the observed temperature anomaly. So we have to add the contribution of CO2 (green line), because the time interval is now 40 years (1970–2010). The concentration of CO2 has now increased from 326 ppm to 389 ppm. The green line has been calculated using the sensitivity 0.24°C, which seems to be correct.

In Fig. 4 we see clearly how well a change in the relative humidity can model the strong temperature minimum around the year 1975. This is impossible to interpret by CO2 concentration.
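A hedged sketch of the two-term decomposition described in the excerpt: a natural term of −15°C per unit change in relative humidity (i.e., −0.15°C per 1% RH increase), plus a CO2 term with 0.24°C sensitivity per doubling. The logarithmic form of the CO2 term is my assumption, as the excerpt does not state the functional form:

```python
import math

# Two-term model sketched from the excerpt: natural (relative humidity)
# term plus CO2 term. The log2 form of the CO2 term is an assumption.

def delta_t(d_phi, co2_now, co2_then, sensitivity=0.24):
    """Temperature change (°C); d_phi is RH change as a fraction (0.01 = 1%)."""
    natural = -15.0 * d_phi
    co2 = sensitivity * math.log2(co2_now / co2_then)
    return natural + co2

# CO2-only contribution for 326 → 389 ppm (1970-2010), no RH change:
print(round(delta_t(0.0, 389, 326), 2))  # → 0.06
```

The small CO2-only result is consistent with the paper's claim that most of the observed anomaly must come from the humidity/cloud term.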

The IPCC climate sensitivity is about one order of magnitude too high, because a strong negative feedback of the clouds is missing in climate models. If we pay attention to the fact that only a small part of the increased CO2 concentration is anthropogenic, we have to recognize that the anthropogenic climate change does not exist in practice. The major part of the extra CO2 is emitted from oceans [6], according to Henry‘s law. The low clouds practically control the global average temperature. During the last hundred years the temperature has increased by about 0.1°C because of CO2. The human contribution was about 0.01°C.

We have proven that the GCM-models used in IPCC report AR5 cannot compute correctly the natural component included in the observed global temperature. The reason is that the models fail to derive the influences of low cloud cover fraction on the global temperature. A too small natural component results in a too large portion for the contribution of the greenhouse gases like carbon dioxide. That is why IPCC represents the climate sensitivity more than one order of magnitude larger than our sensitivity 0.24°C. Because the anthropogenic portion in the increased CO2 is less than 10 %, we have practically no anthropogenic climate change. The low clouds control mainly the global temperature.

Previous Update  Hard Evidence of Solar Impact upon Earth Cloudiness

Later on is a reprinted discussion of global dimming and brightness resulting from fluctuating cloud cover.  This is topical because of new empirical research findings coming out of Asia.  H/T GWPF.  A study published by Kobe University research center is Revealing the impact of cosmic rays on the Earth’s climate.  Excerpts in italics with my bolds.

New evidence suggests that high-energy particles from space known as galactic cosmic rays affect the Earth’s climate by increasing cloud cover, causing an “umbrella effect”.

When galactic cosmic rays increased during the Earth’s last geomagnetic reversal transition 780,000 years ago, the umbrella effect of low-cloud cover led to high atmospheric pressure in Siberia, causing the East Asian winter monsoon to become stronger. This is evidence that galactic cosmic rays influence changes in the Earth’s climate. The findings were made by a research team led by Professor Masayuki Hyodo (Research Center for Inland Seas, Kobe University) and published on June 28 in the online edition of Scientific Reports.

The Svensmark Effect is a hypothesis that galactic cosmic rays induce low cloud formation and influence the Earth’s climate. Tests based on recent meteorological observation data only show minute changes in the amounts of galactic cosmic rays and cloud cover, making it hard to prove this theory. However, during the last geomagnetic reversal transition, when the amount of galactic cosmic rays increased dramatically, there was also a large increase in cloud cover, so it should be possible to detect the impact of cosmic rays on climate at a higher sensitivity.

(The Svensmark Effect is explained in the essay The cosmoclimatology theory)

How Nature’s Sunscreen Works (from Previous Post)

A previous post, Planetary Warming: Back to Basics, discussed a paper by Nikolov and Zeller on the atmospheric thermal effect measured on various planets in our solar system. They mentioned that an important source of temperature variation around the earth’s energy balance state can be traced to global brightening and dimming.

This post explores the fact of fluctuations in the amount of solar energy reflected rather than absorbed by the atmosphere and surface. Brightening refers to more incoming solar energy from clear and clean skies. Dimming refers to less solar energy due to more sunlight reflected in the atmosphere by the presence of clouds and aerosols (airborne particles like dust and smoke).

The energy budget above from ERBE shows how important this issue is. On average, half of sunlight is either absorbed in the atmosphere or reflected before it can be absorbed by the surface land and ocean. Any shift in the reflectivity (albedo) greatly impacts the solar energy warming the planet.
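A rough illustration of the leverage involved, using the standard mean top-of-atmosphere insolation (solar constant divided by 4 to average over the sphere); the solar constant value is a textbook figure, not taken from the post:

```python
# Why small albedo shifts matter: absorbed solar energy changes by the
# mean insolation times the albedo change.
SOLAR_CONSTANT = 1361.0                 # W/m^2, standard value
mean_insolation = SOLAR_CONSTANT / 4.0  # ~340 W/m^2 averaged over the sphere

def forcing_from_albedo_change(d_albedo):
    """Change in absorbed solar energy (W/m^2) for a change in albedo."""
    return -mean_insolation * d_albedo

print(round(forcing_from_albedo_change(0.01), 1))  # → -3.4
```

A one-percentage-point albedo increase thus removes about 3.4 W/m², larger than the forcings usually attributed to CO2 over the industrial era.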

The leading research on global brightening/dimming is done at the Institute for Atmospheric and Climate Science of ETH Zurich, led by Martin Wild, senior scientist specializing in the subject.

Special instruments have been recording the solar radiation that reaches the Earth’s surface since 1923. However, it wasn’t until the International Geophysical Year in 1957/58 that a global measurement network began to take shape. The data thus obtained reveal that the energy provided by the sun at the Earth’s surface has undergone considerable variations over the past decades, with associated impacts on climate.

The initial studies were published in the late 1980s and early 1990s for specific regions of the Earth. In 1998 the first global study was conducted for larger areas, like the continents Africa, Asia, North America and Europe for instance.

Now ETH has announced The Global Energy Balance Archive (GEBA) version 2017: A database for worldwide measured surface energy fluxes. The title is a link to that paper published in May 2017 explaining the facility and some principal findings. The Archive itself is at

For example, Figure 2 below provides the longest continuous record available in GEBA: surface downward shortwave radiation measured in Stockholm since 1922. Five year moving average in blue, 4th order regression model in red. Units Wm-2. Substantial multidecadal variations become evident, with an increase up to the 1950s (“early brightening”), an overall decline from the 1950s to the 1980s (“dimming”), and a recovery thereafter (“brightening”).

Figure 5. Composite of 56 European GEBA time series of annual surface downward shortwave radiation (thin line) from 1939 to 2013, plotted together with a 21-year Gaussian low-pass filter (thick line). The series are expressed as anomalies (in Wm−2) from the 1971–2000 mean. Dashed lines are used prior to 1961 due to the lower number of records for this initial period. Updated from Sanchez-Lorenzo et al. (2015) including data until December 2013.

Martin Wild explains in a 2016 article Decadal changes in radiative fluxes at land and ocean surfaces and their relevance for global warming. From the Conclusion (SSR refers to solar radiation incident upon the surface)

However, observations indicate not only changes in the downward thermal fluxes, but even more so in their solar counterparts, whose records have a much wider spatial and temporal coverage. These records suggest multidecadal variations in SSR at widespread land-based observation sites. Specifically, declining tendencies in SSR between the 1950s and 1980s have been found at most of the measurement sites (‘dimming’), with a partial recovery at many of the sites thereafter (‘brightening’).

With the additional information from more widely measured meteorological quantities which can serve as proxies for SSR (primarily sunshine duration and DTR), more evidence for a widespread extent of these variations has been provided, as well as additional indications for an overall increasing tendency in SSR in the first part of the 20th century (‘early brightening’).

It is well established that these SSR variations are not caused by variations in the output of the sun itself, but rather by variations in the transparency of the atmosphere for solar radiation. It is still debated, however, to what extent the two major modulators of the atmospheric transparency, i.e., aerosol and clouds, contribute to the SSR variations.

The balance of evidence suggests that on longer (multidecadal) timescales aerosol changes dominate, whereas on shorter (decadal to subdecadal) timescales cloud effects dominate. More evidence is further provided for an increasing influence of aerosols during the course of the 20th century. However, aerosol and clouds may also interact, and these interactions were hypothesized to have the potential to amplify and dampen SSR trends in pristine and polluted areas, respectively.

No direct observational records are available over ocean surfaces. Nevertheless, based on the presented conceptual ideas of SSR trends amplified by aerosol–cloud interactions over the pristine oceans, modeling approaches as well as the available satellite-derived records it appears plausible that also over oceans significant decadal changes in SSR occur.

The coinciding multidecadal variations in SSTs and global aerosol emissions may be seen as a smoking gun, yet it is currently an open debate to what extent these SST variations are forced by aerosol-induced changes in SSR, effectively amplified by aerosol– cloud interactions, or are merely a result of unforced natural variations in the coupled ocean atmosphere system. Resolving this question could state a major step toward a better understanding of multidecadal climate change.

Another paper co-authored by Wild discusses the effects of aerosols and clouds The solar dimming/brightening effect over the Mediterranean Basin in the period 1979 − 2012. (NSWR is Net Short Wave Radiation, that is equal to surface solar radiation less reflected)

The analysis reveals an overall increasing trend in NSWR (all skies) corresponding to a slight solar brightening over the region (+0.36 Wm−2 per decade), which is not statistically significant at the 95% confidence level (C.L.). An increasing trend (+0.52 Wm−2 per decade) is also shown for NSWR under clean skies (without aerosols), which is statistically significant (P=0.04).

This indicates that NSWR increases at a higher rate over the Mediterranean due to cloud variations only, because of a declining trend in COD (Cloud Optical Depth). The peaks in NSWR (all skies) in certain years (e.g., 2000) are attributed to a significant decrease in COD (see Figs. 9 and 10), while the two data series (NSWRall and NSWRclean) are highly correlated (r=0.95).

This indicates that cloud variation is the major regulatory factor for the amount and multi-decadal trends in NSWR over the Mediterranean Basin. (Note: Lower cloud optical depth is caused by less opaque clouds and/or decrease in overall cloudiness)

On the other hand, the results do not reveal a reversal from dimming to brightening during the 1980s, as shown in several studies over Europe (Norris and Wild, 2007; Sanchez-Lorenzo et al., 2015), but a rather steady slight increasing trend in solar radiation, which, however, seems to be stabilized during the last years of the data series, in agreement with Sanchez-Lorenzo et al. (2015). Similarly, Wild (2012) reported that the solar brightening was less distinct at European sites after 2000 compared to the 1990s.

In contrast, the NSWR under clear (cloudless) skies shows a slight but statistically significant decreasing trend (−0.17 Wm−2 per decade, P=0.002), indicating an overall decrease in NSWR over the Mediterranean due to water-vapor variability, suggesting a transition to a more humid environment under a warming climate.

Other researchers find cloudiness more dominant than aerosols. For example, The cause of solar dimming and brightening at the Earth’s surface during the last half century: Evidence from measurements of sunshine duration by Gerald Stanhill et al.

Analysis of the Angstrom-Prescott relationship between normalized values of global radiation and sunshine duration measured during the last 50 years made at five sites with a wide range of climate and aerosol emissions showed few significant differences in atmospheric transmissivity under clear or cloud-covered skies between years when global dimming occurred and years when global brightening was measured, nor in most cases were there any significant changes in the parameters or in their relationships to annual rates of fossil fuel combustion in the surrounding 1° cells. It is concluded that at the sites studied changes in cloud cover rather than anthropogenic aerosols emissions played the major role in determining solar dimming and brightening during the last half century and that there are reasons to suppose that these findings may have wider relevance.


The final words go to Martin Wild from Enlightening Global Dimming and Brightening.

Observed Tendencies in surface solar radiation
Figure 2.  Changes in surface solar radiation observed in regions with good station coverage during three periods. (left column) The 1950s–1980s show predominant declines (“dimming”), (middle column) the 1980s–2000 indicate partial recoveries (“brightening”) at many locations, except India, and (right column) recent developments after 2000 show mixed tendencies. Numbers denote typical literature estimates for the specified region and period in W m–2 per decade.  Based on various sources as referenced in Wild (2009).

The latest updates on solar radiation changes observed since the new millennium show no globally coherent trends anymore (see above and Fig. 2). While brightening persists to some extent in Europe and the United States, there are indications for a renewed dimming in China associated with the tremendous emission increases there after 2000, as well as unabated dimming in India (Streets et al. 2009; Wild et al. 2009).

We cannot exclude the possibility that we are currently again in a transition phase and may return to a renewed overall dimming for some years to come.

One can’t help but see the similarity between dimming/brightening and patterns of Global Mean Temperature, such as HadCrut.

Footnote: For more on clouds, precipitation and the ocean, see Here Comes the Rain Again

The End of Wind and Solar Parasites

Norman Rogers writes at American Thinker What It Will Take for the Wind and Solar Industries to Collapse. Excerpts in italics with my bolds.

The solar electricity industry is dependent on federal government subsidies for building new capacity. The subsidy consists of a 30% tax credit and the use of a tax scheme called tax equity finance. These subsidies are delivered during the first five years.

For wind, there is subsidy during the first five to ten years resulting from tax equity finance. There is also a production subsidy that lasts for the first ten years.

The other subsidy for wind and solar, not often characterized as a subsidy, is state renewable portfolio laws, or quotas, that require that an increasing portion of a state’s electricity come from renewable sources. Those state mandates result in wind and solar electricity being sold via profitable 25-year power purchase contracts. The buyer is generally a utility with good credit. The utilities are forced to offer these terms in order to cause sufficient supply to emerge to satisfy the renewable energy quotas.

The rate of return from a wind or solar investment can be low and credit terms favorable because the investors see the 25-year contract by a creditworthy utility as a guarantee of a low risk of default. If the risk were to be perceived as higher, then a higher rate of return and a higher interest rate on loans would be demanded. That in turn would increase the price of the electricity generated.
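The mechanism described above can be sketched with a standard capital recovery calculation. All inputs below (capex, capacity factor, project life) are illustrative assumptions, not figures from the article; the point is only how the discount rate moves the required price:

```python
# How perceived risk (via the discount rate) feeds into the electricity
# price a project must charge. Inputs are illustrative assumptions.

def required_price(capex_per_kw, rate, years, capacity_factor):
    """Price ($/MWh) needed to recover capital over the contract, ignoring O&M."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)  # capital recovery factor
    annual_mwh_per_kw = capacity_factor * 8760 / 1000
    return capex_per_kw * crf / annual_mwh_per_kw

# Same hypothetical wind project under a low- and a high-risk discount rate
print(round(required_price(1400, 0.06, 25, 0.35), 1))  # → 35.7
print(round(required_price(1400, 0.10, 25, 0.35), 1))  # → 50.3
```

Moving the discount rate from 6% to 10% raises the required price by roughly 40%, which is why the 25-year contracts from creditworthy utilities matter so much to project economics.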

The bankruptcy of PG&E, the largest California utility, has created some cracks in the façade. A bankruptcy judge has ruled that cancellation of up to $40 billion in long-term energy contracts is a possibility. These contracts are not essential or needed to preserve the supply of electricity because they are mostly for wind or solar electricity supply that varies with the weather and can’t be counted on. As a consequence, there has to exist and does exist the necessary infrastructure to supply the electricity needs without the wind or solar energy.

Probably the judge will be overruled for political reasons, or the state will step in with a bailout. Utilities have to keep operating, no matter what. Ditching wind and solar contracts would make California politicians look foolish because they have long touted wind and solar as the future of energy.

PG&E is in bankruptcy because California applies strict liability for damages from forest fires started by electric lines, no matter who is really at fault. Almost certainly the government is at fault for not anticipating the danger of massive fires and for not enforcing strict fire prevention and protection. Massive fire damage should be covered by insurance, not by the utility, even if the fire was started by a power line. The fire in question could just as well have been started by lightning or a homeless person. PG&E previously filed for bankruptcy in 2001, also a consequence of abuse of the utility by the state government.

By far the most important subsidy is the renewable portfolio laws. Even if the federal subsidies are reduced, the quota for renewable energy will force price increases to keep the renewable energy industry in business, because it has to stay in business to supply energy to meet the quota. Other plausible methods of meeting the quota have been outlawed by the industry’s friends in the state governments. Nuclear and hydro, neither of which generates CO2 emissions, are not allowed. Hydro is not strictly prohibited — only hydro that involves dams and diversions. That is very close to all hydro. Another reason hydro is banned is that environmental groups don’t like dams.

For technical reasons, an electrical grid cannot run on wind or solar much more than 50% of the time. The fleet of backup plants must be online to provide adjustable output to compensate for erratic variations in wind or solar. Output has to be ramped up to meet early-evening peaks. Wind suffers from a cube power law, meaning that if the wind drops by 10%, the electricity drops by 30%. Solar suffers from too much generation in the middle of the day and not enough generation to meet early evening peaks in consumption.
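The cube law quoted above follows from turbine power scaling with the cube of wind speed; a 10% drop in wind speed cuts power output by about 27%, which the article rounds to 30%:

```python
# Cube law for wind turbines: power is proportional to wind speed cubed.

def power_fraction(speed_fraction):
    """Fraction of rated-condition power at a given fraction of wind speed."""
    return speed_fraction ** 3

drop = 1 - power_fraction(0.90)  # 10% drop in wind speed
print(f"{drop:.0%}")  # → 27%
```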

When a “too much generation” situation happens, the wind or solar has to be curtailed. That means that the operators are told to stop delivering electricity. In many cases, they are not paid for the electricity they could have delivered. Some contracts require that they be paid according to a model that figures out how much they could have generated according to the recorded weather conditions. The more wind and solar, the more curtailments as the amount of erratic electricity approaches the allowable limits. Curtailment is an increasing threat, as quotas increase, to the financial health of wind and solar.

There is a movement to include batteries with solar installations to move excessive middle-of-the-day generation to the early evening. This is a palliative to extend the time before solar runs into the curtailment wall. The batteries are extremely expensive and wear out every five years.

Neither wind nor solar is competitive without subsidies. If the subsidies and quotas were taken away, no wind or solar operation outside very special situations would be built. Further, the existing installations would continue only as long as their contracts are honored and they are cash flow–positive. In order to be competitive, without subsidies, wind or solar would have to supply electricity for less than $20 per megawatt-hour, the marginal cost of generating the electricity with gas or coal. Only the marginal cost counts, because the fossil fuel plants have to be there whether or not there is wind or solar. Without the subsidies, quotas, and 25-year contracts, wind or solar would have to get about $100 per megawatt-hour for its electricity. That gap, between $100 and $20, is a wide chasm only bridged by subsidies and mandates.
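The arithmetic behind that chasm, using the article's two figures:

```python
# Of the ~$100/MWh an unsubsidized project would need, only ~$20/MWh is
# covered by the avoided fossil marginal cost; the remainder must come
# from subsidies and mandates. Both figures are the article's estimates.
required = 100.0   # $/MWh needed without subsidies
marginal = 20.0    # $/MWh marginal cost of gas/coal generation

gap = required - marginal
print(gap, f"{gap / required:.0%}")  # → 80.0 80%
```

On these numbers, 80% of the revenue an unsubsidized wind or solar project would need has no market source.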

The cost of using wind and solar for reducing CO2 emissions is very high. The most authoritative and sincere promoters of global warming loudly advocate using nuclear, a source that is not erratic, does not emit CO2 or pollution, and uses the cheapest fuel. One can buy carbon offsets for 10 or 20 times less than the cost of reducing CO2 emissions with wind or solar. A carbon offset is a scheme where the buyer pays the seller to reduce world emissions of CO2. This is done in a variety of ways by the sellers.

The special situations where wind and solar can be competitive are remote locations using imported oil to generate electricity. In those situations, the marginal cost of the electricity may be $200 per megawatt-hour or more. Newfoundland comes to mind — for wind, not solar.

Maintenance costs for solar are low. For wind, maintenance costs are high, and major components, such as propeller blades and gearboxes, may fail, especially as the turbines age. These heavy and awkward objects are located hundreds of feet above ground. There exists a danger that wind farms will fail once the inflation-protected subsidy of $24 per megawatt-hour runs out after ten years. At that point, turbines that need expensive repairs may be abandoned. Wind turbine graveyards from the first wind fad in the 1970s can be seen near Palm Springs, California. Wind farms can’t receive the production subsidy unless they can sell the electricity. That has resulted in generators paying customers to “buy” the electricity.
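The negative-pricing incentive follows directly from the $24/MWh production subsidy quoted above, since the subsidy is received only when electricity is sold:

```python
# With a $24/MWh production subsidy paid only on electricity sold, a
# wind generator still nets positive revenue at any market price above
# -$24/MWh, hence the incentive to pay customers to take power.
SUBSIDY = 24.0  # $/MWh production tax credit (article's figure)

def net_revenue(market_price):
    """Net $/MWh received by the generator, including the subsidy."""
    return market_price + SUBSIDY

print(net_revenue(-10.0))  # → 14.0
```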

Tehachapi’s dead turbines.

A significant financial risk is that the global warming narrative may collapse. If belief in the reality of the global warming threat collapses, then the major intellectual support for renewable energy will collapse. It is ironic that the promoters of global warming are campaigning to require companies to take into account the threat of global warming in their financial projections. If the companies do this in an honest manner, they also have to take into account the possibility that the threat will evaporate. My own best guess, after considerable technical study, is that it is near a sure thing that the threat of global warming is imaginary and largely invented by the people who benefit. Adding CO2 to the atmosphere has well understood positive effects for the growth of crops and the greening of deserts.

The conservative investors who make long-term investments in wind or solar may be underestimating the risks involved. For example, an article in Chief Investment Officer magazine stated that CalPERS, the giant California public employees retirement fund, is planning to expand investments in renewable energy, characterized as “stable cash flowing assets.” That article was written before the bankruptcy of PG&E. The article also stated that competition among institutional investors for top yielding investments in the alternative energy space is fierce.

Wind and solar are not competitive and never will be. They have been pumped up into supposedly solid investments by means of ill-advised subsidies and mandates. At some point, governments will wake up to the waste and foolishness involved, and the value of these investments will collapse. It won’t be the first time that investment experts made bad investments because they didn’t really understand what was going on.

Footnote: There is also a report from GWPF on environmental degradation from industrial-scale wind and solar:

N. Atlantic Staying Cool

RAPID Array measuring North Atlantic SSTs.

For the last few years, observers have been speculating about when the North Atlantic will start the next phase shift from warm to cold. Given the way 2018 went and 2019 is following, this may be the onset. First, some background.

Source: Energy and Education Canada

An example is this May 2015 report, “The Atlantic is entering a cool phase that will change the world’s weather,” by Gerald McCarthy and Evan Haigh of the RAPID Atlantic monitoring project. Excerpts in italics with my bolds.

This is known as the Atlantic Multidecadal Oscillation (AMO), and the transition between its positive and negative phases can be very rapid. For example, Atlantic temperatures declined by 0.1ºC per decade from the 1940s to the 1970s. By comparison, global surface warming is estimated at 0.5ºC per century – half the rate of that Atlantic cooling.

In many parts of the world, the AMO has been linked with decade-long temperature and rainfall trends. Certainly – and perhaps obviously – the mean temperature of islands downwind of the Atlantic such as Britain and Ireland show almost exactly the same temperature fluctuations as the AMO.

Atlantic oscillations are associated with the frequency of hurricanes and droughts. When the AMO is in the warm phase, there are more hurricanes in the Atlantic and droughts in the US Midwest tend to be more frequent and prolonged. In the Pacific Northwest, a positive AMO leads to more rainfall.

A negative AMO (cooler ocean) is associated with reduced rainfall in the vulnerable Sahel region of Africa. The prolonged negative AMO was associated with the infamous Ethiopian famine in the mid-1980s. In the UK it tends to mean reduced summer rainfall – the mythical “barbeque summer”.

Our results show that ocean circulation responds to the first mode of Atlantic atmospheric forcing, the North Atlantic Oscillation, through circulation changes between the subtropical and subpolar gyres – the intergyre region. This is a major influence on the wind patterns and the heat transferred between the atmosphere and ocean.

The observations that we do have of the Atlantic overturning circulation over the past ten years show that it is declining. As a result, we expect the AMO is moving to a negative (colder surface waters) phase. This is consistent with observations of temperature in the North Atlantic.
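The rate comparison in the excerpt above is easy to check once both figures are put on the same time base. A minimal sketch, nothing more than a linear unit conversion:

```python
# Put both quoted rates on a per-century basis and compare them.
amo_cooling = 0.1 * 10   # 0.1 C/decade, expressed as C per century -> 1.0
global_warming = 0.5     # C per century, as quoted in the excerpt

print(amo_cooling)                   # 1.0 C/century of Atlantic cooling
print(global_warming / amo_cooling)  # 0.5: warming ran at half the cooling rate
```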

Cold “blobs” in the North Atlantic have been reported, but they are usually winter phenomena. For example, in April 2016 the SST anomalies looked like this:

But by September, the picture had changed to this:

And we know from the Kaplan AMO dataset that 2016 summer SSTs were right up there with 1998 and 2010 as the highest recorded.

As the graph above suggests, this body of water is also important for tropical cyclones, since warmer water provides more energy. But those are annual averages, and I am interested in the summer pulses of warm water into the Arctic. As I have noted in my monthly HadSST3 reports, most summers since 2003 have shown warm pulses in the North Atlantic.
AMO Index through December 2018.
The AMO Index is from Kaplan SST v2, the unaltered and not detrended dataset. By definition, the data are monthly average SSTs interpolated to a 5×5 grid over the North Atlantic, basically 0 to 70N. The graph shows the warmest month, August, beginning to rise after 1993 up to 1998, with a series of matching years since. December 2016 set a record at 20.6C, but note the plunge down to 20.2C for December 2018, matching 2011 as the coldest December since 2000. Because McCarthy refers to hints of cooling to come in the N. Atlantic, let’s take a closer look at some AMO years in the last two decades.
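The index construction described above, a monthly mean SST on a 5×5 grid over roughly 0 to 70N, amounts to an area-weighted average over the basin. A minimal sketch; the longitude bounds and the synthetic SST field are illustrative assumptions, not the Kaplan data:

```python
import numpy as np

# Grid-cell centre latitudes for a 5-degree grid spanning 0-70N.
lats = np.arange(2.5, 70, 5.0)             # 14 latitude bands
# Rough Atlantic longitude span (assumption for the sketch).
lons = np.arange(-72.5, 0, 5.0)            # 15 longitude bands

# Synthetic monthly SST field near 20 C, standing in for real gridded data.
rng = np.random.default_rng(0)
sst = 20.0 + rng.normal(0.0, 0.5, (lats.size, lons.size))

# Equal-angle cells shrink toward the pole, so weight each row by cos(latitude)
# to approximate an equal-area average.
weights = np.cos(np.deg2rad(lats))[:, None] * np.ones_like(sst)
amo_sst = (sst * weights).sum() / weights.sum()
print(round(amo_sst, 2))  # basin-mean SST for the month, in degrees C
```

The published index is a long time series of such monthly basin means; plotting one month (June, August, December) across years gives graphs like the ones discussed here.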

June begins the summer, and it serves to show the pattern of North Atlantic warm pulses relative to ENSO events. In the last two decades, there were four El Nino events, peaking in 1998, 2005, 2010 and 2016. Three of those years appear in the June AMO record as over 21.8C, a level not previously reached in the North Atlantic. Note the dropoff to 21.4C last year, and a rebound this year.

This graph shows monthly AMO temps for some important years. The peak years were 1998, 2010 and 2016, with the latter emphasized as the most recent. The other years show lesser warming, with 2007 emphasized as the coolest in the last 20 years. Note that the red 2018 line is at the bottom of all these tracks. The short black line shows that 2019 began slightly cooler than January 2018, then tracked closely before rising slightly in June. The average of the first six months is virtually the same, with 2019 just 0.02C higher.

With all the talk of the AMOC slowing down and a phase shift in the North Atlantic, it seems the annual average for 2018, matched so far by 2019, confirms that cooling has set in. Through December the momentum is certainly heading downward, despite the band of warm ocean that gave rise to European heat waves last summer.

Annual AMO Index through 2018.